This replaces the round-robin mirror at fedoraproject.org, as that was
proving to be quite unreliable.
This is a short-term fix until we add metalink support.
Signed-off-by: Tom Gundersen <teg@jklm.no>
For the sake of backwards compatibility, legacy support was enabled
by default. Flip this around, so that leaving the parameter out
means disabling it.
This is more intuitive, and will pave the way for dropping support
for the value being a bool in the future.
`osbuild-composer` always passes the argument explicitly, though
still always as a boolean.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Using a metalink resolves to a specific mirror at runtime, and
downloads each rpm from that repository.
We want to move to using the org.osbuild.files source, which means
that we must save the url to each rpm in the source definition, which
will be determined by which mirror is used to generate the config.
If we use metalinks to generate the source configuration, the mirror
used will be arbitrary. Instead, we want to pick the best mirror
explicitly, ideally in a way that is independent of the location
depsolving happens in (which will be different from the location
the rpms are downloaded to).
We can choose explicitly by passing baseurl rather than metalink
to dnf, so move in that direction now by replacing all metalinks
by baseurls in our dnf configuration.
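As a sketch, a repo entry in a dnf stage then carries an explicit mirror
rather than a metalink (the URL below is only an example of a mirror we
might pick):
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "repos": [
      {
        "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/"
      }
    ]
  }
}
```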
Signed-off-by: Tom Gundersen <teg@jklm.no>
We now support sources and pipelines being passed to osbuild as one.
This will make the transformation from dnf to rpm stage simpler, as
the source object will then be different for each stage, so having
a shared one, as we do now, would be cumbersome.
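As a rough sketch of the combined shape (keys abbreviated; the exact
layout follows our manifest schema):
```
{
  "sources": {
    "org.osbuild.files": { ... }
  },
  "pipeline": {
    "stages": [ ... ]
  }
}
```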
Signed-off-by: Tom Gundersen <teg@jklm.no>
As long as this matches the build environment, it does not make
a difference, but let us not depend on that.
This will be useful when automatically transforming dnf to rpm
pipelines, as the module_platform_id is needed as input to
osbuild-composer's dnf-json tool.
Performed using this script:
```
cat "$1" | jq '(.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' | sponge "$1"
cat "$1" | jq '(.build.pipeline.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' | sponge "$1"
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
Drop the rpm downloading and instead use the files source. This gives
us caching for free, and is the last missing step before we can
deprecate the dnf stage.
The main benefit of the rpm stage over the dnf stage is that we pin the
package versions rather than the repo metadata version. This will allow us
to support continuously changing repositories, as individual packages are
much less likely to change than the repos themselves, and old packages are
meant to stay around for some time, unlike the repo metadata, which is
swapped out instantly.
Depsolving is also slow on the first run, which we were always hitting, as
it always happened in a fresh container.
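Roughly, the pipeline then pins rpms by checksum and the source definition
maps each checksum to a concrete URL (both values below are placeholders):
```
"sources": {
  "org.osbuild.files": {
    "urls": {
      "sha256:aaaa...": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/Packages/b/bash-5.0.7-1.fc30.x86_64.rpm"
    }
  }
}
```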
Based on a patch by Lars Karlitski.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.
In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.
Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.
The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.
Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.
This API is only available to stages right now.
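As an illustration of the idea (the exact shape is up to the source
program; all values here are placeholders), the options for the
`org.osbuild.dnf` source could key repository configuration by metadata
checksum:
```
{
  "org.osbuild.dnf": {
    "repos": {
      "sha256:bbbb...": {
        "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/"
      }
    }
  }
}
```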
Commit 283281f broke compression by appending the compression argument
at the end of the tar command line. It needs to appear before the file
argument.
Fix that and add a test.
[teg: add minor fix]
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
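A build object then has roughly this shape (the runner name is
illustrative):
```
"build": {
  "runner": "org.osbuild.fedora30",
  "pipeline": {
    "stages": [ ... ]
  }
}
```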
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
The tests from the integration_tests directory were superseded
by the new stage tests.
The Vagrant integration appears to have been broken since
ea68bb0c26, which dropped the test-setup.py it relies on.
Remove it for now. If we want it back, we should consider that
in a separate PR.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Downloading the gpg key is fragile and kept causing our tests to fail.
In general, we want to limit the network access, so let's just embed
the gpg keys directly in the pipeline.
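That is, a repo entry carries the key material itself instead of a URL,
roughly (key body elided):
```
{
  "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/",
  "gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...\n-----END PGP PUBLIC KEY BLOCK-----\n"
}
```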
Fixes #133.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Background:
grub2 works in three stages:
- The first stage is found in the first 440 bytes of the master
boot record, and its only purpose is to load and execute the
second stage. This stage is static, and just copied from the rpm
without modification.
- The second stage is found in the gap between the MBR and the
first partition, and may be up to 31kB in size. This stage is
specific to the host and must contain the instructions for
finding the right file system and subdirectory for the grub2
config and modules on the host, as well as the modules needed
to do this.
- The third stage is found in the `normal` module, which loads
grub2.conf, which in turn may load more modules and perform
arbitrary instructions.
Problem:
grub2-install is responsible for installing all these stages on the
target image. This goes against our design, as modifications outside
the filesystem should happen in the assembler, but modifications to
the filesystem should happen in a stage. In particular, we don't
want the contents of the image to differ in any way from the output
tree that is stored in our content store (the output of our last
stage). This causes a practical problem at the moment: our
selinux stage is run before the assembler, and as such the grub
modules do not get selinux labels applied.
It turns out that we could split grub2-install in two as we want,
by passing `--no-bootsector` to it to install only the modules,
copying/generating the first two stages as files under /boot, and
then running `grub2-bios-setup` to write the stages from /boot into
the image where they belong.
Regrettably, this does not work as both `grub2-install` and
`grub2-bios-setup` introspect the system and block devices they
are being run on to generate the right configuration. This is not
what we want, as we would like to specify the config explicitly
and run them independently of the target image. The specific bug
we get in both cases is that the canonical path containing our
object store cannot be found.
Before osbuild this was not a problem, as other installers would
install and assemble everything directly in the target image attached
as a loopback device, something we explicitly do not want to do.
Solution:
This patch essentially reimplements grub2-install, or rather the
parts of it that we need. One change in behavior from the upstream
tool is that we no longer write the level one and level two boot
loaders to /boot before moving them into place, but just write them
directly where they belong (so they do not end up on the
filesystem).
The parts that copy files into /boot are now in the grub2 installer
and the parts that write the level one/two bootloaders are in the
qemu assembler.
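In pipeline terms, the split looks roughly like this (options elided):
```
"pipeline": {
  "stages": [
    { "name": "org.osbuild.grub2", "options": { ... } }
  ],
  "assembler": {
    "name": "org.osbuild.qemu",
    "options": { ... }
  }
}
```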
This achieves a few principles I think we should always adhere to:
- never run tools from the target image (no chroot)
- don't read/copy files from the target image that were written
by other stages. We already try to avoid sharing state, and
by treating the image as write-only, we avoid accidentally
sharing state through the target tree.
Based-on-suggestions-from: Javier Martinez Canillas <javierm@redhat.com>
With-god-like-debugging-and-fixes-by: Lars Karlitski <lubreni@redhat.com>
Signed-off-by: Tom Gundersen <teg@jklm.no>
Otherwise, sfdisk would pick one at random. We want our images to be
reproducible to the extent possible, so we must move all randomness
out of the assemblers when we can.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Opt in to supporting the most common ones; if we want to support more,
we can add it as the need arises.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This key carries no information and is never used anywhere. The json
files are not meant to be human readable, so simply drop this.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.
Verify that the metadata dnf downloaded to install packages matches that
checksum. This way, this stage will give an error when a reposiory
changed between putting together the pipeline and running it.
Don't pass through arbitrary options. This means that pipeline repo
objects don't have the same options as dnf repo files anymore:
1. Hard-code the repo name to the repo id. The name has no influence on
the resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't
have the key referenced in /etc. Change all gpgkeys to refer to the key
id and import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we need
it at some point.
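Under these rules, a repo object reduces to roughly the following (all
values are placeholders):
```
{
  "checksum": "sha256:cccc...",
  "baseurl": "https://mirrors.kernel.org/fedora/releases/30/Everything/x86_64/os/",
  "gpgkey": "<key id>"
}
```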
We've been effectively using the basearch of the host, making the stage
non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. However,
pipelines producing different outputs must be different. Thus, this
patch includes the basearch in the pipeline.
In principle, this allows cross-arch builds. dnf should be the only
stage running binaries from the target tree. This is not yet tested.
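As a sketch (assuming the option is called `basearch`), the dnf stage
options now carry the architecture explicitly:
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "basearch": "x86_64",
    ...
  }
}
```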
XZ compression is really slowing down our tests. Additionally, we discard
the resulting image right after the tests are done (at least on CI).
Let's just drop the compression.
The locale stage can no longer be used to set the keymap; use the keymap
stage instead. The stage was also refactored to look like the keymap and
timezone stages, just to be consistent (systemd-firstboot is now used).
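For example, setting a US keymap becomes its own stage (a sketch,
assuming the keymap stage takes a `keymap` option):
```
{
  "name": "org.osbuild.keymap",
  "options": {
    "keymap": "us"
  }
}
```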
Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.
As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
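For example, a 3 GiB filesystem would be requested as 3221225472 bytes
(6291456 × 512), in whichever assembler options this applies to:
```
"options": {
  "size": 3221225472
}
```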
Let the image be responsible for running its own test, and simply
listen for the output from the testsuite.
Hook this up with a standard f30 image that contains a simple boot
test case, using systemctl to verify that all services started
correctly.
This replaces the old web-server test, giving similar functionality.
The reason for the change is twofold: this way the tests are fully
specified in the pipeline, and thus easier to reproduce. Moreover, this
is less intrusive, as the test does not require network support in
the image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.
Update the pipelines accordingly.
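A sketch of what this looks like in a pipeline (the option name here is
hypothetical):
```
{
  "name": "org.osbuild.grub2",
  "options": {
    "root_fs_ro": true
  }
}
```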
Signed-off-by: Tom Gundersen <teg@jklm.no>
The testing script is getting too big and not very well organized. In
this commit a new module `integration_tests` is introduced that contains
parts of the original testing script split into multiple files. The
content should be the same; the only difference is that you can now run
the tests by invoking `python3 -m integration_tests`.