This converts all Fedora links in our samples to `mirrors.kernel.org`.
This mirror performs well from anywhere in the world, so let's avoid
the wild mix of local mirrors and use kernel.org instead.
This mirror is also well-managed and properly funded, so we should not
run into too many problems with it.
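For illustration, a converted repo entry now looks something like this
(the exact path varies per sample):
```
{
  "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os"
}
```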
As long as this matches the build environment, it does not make a
difference, but let's not depend on that.
This will be useful when automatically transforming dnf to rpm
pipelines, as the module_platform_id is needed as input to
osbuild-composer's dnf-json tool.
Performed using this script:
```
jq '(.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' "$1" | sponge "$1"
jq '(.build.pipeline.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' "$1" | sponge "$1"
```
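After the script runs, every dnf stage carries the new option; a
minimal sketch of a resulting stage, with all other options elided:
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "module_platform_id": "platform:f30"
  }
}
```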
Signed-off-by: Tom Gundersen <teg@jklm.no>
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.
In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.
Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.
The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.
Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.
This API is only available to stages right now.
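As a sketch of the idea (the exact schema here is an assumption, not
taken from the source): a sources configuration passed to osbuild
could map each repository metadata checksum to the repository
configuration used to fetch it, for example pointing at a local
mirror:
```
{
  "org.osbuild.dnf": {
    "sha256:<checksum of repodata/repomd.xml>": {
      "baseurl": "file:///srv/mirror/fedora/30/x86_64"
    }
  }
}
```
The dnf stage asks the sources API for this configuration by checksum
and still verifies that the content it receives matches that checksum.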
Downloading the gpg key is fragile and kept causing our tests to fail.
In general, we want to limit network access, so let's just embed
the gpg keys directly in the pipeline.
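With the key embedded, a repo entry looks something like this sketch
(key body elided):
```
{
  "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os",
  "gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...\n-----END PGP PUBLIC KEY BLOCK-----"
}
```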
Fixes #133.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This key carries no information and is never used anywhere. The json
files are not meant to be human readable, so simply drop this.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.
Verify that the metadata dnf downloaded to install packages matches that
checksum. This way, this stage will give an error when a reposiory
changed between putting together the pipeline and running it.
Don't pass through arbitrary options. This means that pipeline repo
objects no longer have the same options as dnf repo files (a sketch of
the resulting object follows the list):
1. Hard-code the repo name to the repo id. The name has no influence
on the resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when a gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages. It
would have failed anyway, because the container doesn't have the keys
referenced in /etc. Change all gpgkeys to refer to the key id and
import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we
need it at some point.
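Under these rules, a pipeline repo object reduces to something like
the following sketch (field names follow the description above; values
are placeholders):
```
{
  "id": "fedora",
  "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os",
  "gpgkey": "<key id>",
  "checksum": "sha256:<checksum of repodata/repomd.xml>"
}
```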
We've been effectively using the basearch of the host, making the
stage non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. But
pipelines producing different outputs must themselves be different, so
this patch includes the basearch in the pipeline.
In principle, this allows cross-arch builds. dnf should be the only
stage running binaries from the target tree. This is not yet tested.
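A dnf stage pinned to one architecture then carries the basearch
explicitly; a minimal sketch with all other options elided:
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "basearch": "x86_64"
  }
}
```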
Whenever an assembler is not specified, the output tree is instead
saved to the content store, in a directory named after the pipeline
id.
This should render the io.weldr.tree assembler redundant.
To build the samples as before, specify the content store as the input
directory when building any pipeline that uses the io.weldr.untree
stage.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The boot loader snippets were not being generated on f29; we may want
to revisit that, but for now let's work against f30.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These are meant to test the various assemblers and stages and to show
how pipelines can be created. However, they are not necessarily the
best way to create any given image.
Note that some of the pipelines depend on each other.