For stage testing it is too slow to rebuild the image containing the
@Core packages every time. Let's just reuse the same image for all
tests. This should speed up the test running time a lot.
Stage testing is based on the output of the tree-diff tool. During
one test, two pipelines are run and their outputs are compared using
tree-diff. The resulting diff is then compared with the expected diff
included in the repository.
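A rough sketch of that comparison; it assumes, beyond what is stated
above, that tree-diff prints JSON and that the expected diff is a JSON
file checked into the repository:

    import json
    import subprocess

    def assert_stage_diff(tree_before, tree_after, expected_diff_path):
        # assumption: tree-diff prints a JSON document describing the changes
        output = subprocess.check_output(["tree-diff", tree_before, tree_after])
        diff = json.loads(output)
        with open(expected_diff_path) as f:
            expected = json.load(f)
        assert diff == expected, "stage produced an unexpected tree diff"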
The testosbuild.TestCase class creates a fresh store for each test,
because tests should run independently of each other.
This can lead to long waiting times while developing a new test case.
Allow overriding the store with the OSBUILD_TEST_STORE environment
variable. This should never be used where tests are actually run; it
is a development-only feature.
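A minimal sketch of what the override could look like in the test
setup; the class and attribute names are illustrative, only
OSBUILD_TEST_STORE comes from this change:

    import os
    import tempfile
    import unittest

    class StoreTestCase(unittest.TestCase):
        def setUp(self):
            store = os.environ.get("OSBUILD_TEST_STORE")
            if store:
                # development-only override: reuse an existing store across runs
                self.store, self.cleanup_store = store, False
            else:
                self.store = tempfile.mkdtemp(prefix="osbuild-test-store-")
                self.cleanup_store = True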
Background:
grub2 works in three stages:
- The first stage is found in the first 440 bytes of the master
boot record, and its only purpose is to load and execute the
second stage. This stage is static, and just copied from the rpm
without modification.
- The second stage is found in the gap between the MBR and the
first partition, and may be up to 31kB in size. This stage is
specific to the host and must contain the instructions for
finding the right file system and subdirectory for the grub2
config and modules on the host, as well as the modules needed
to do this.
- The third stage is found in the `normal` module, which loads
grub.cfg, which in turn may load more modules and execute
arbitrary commands.
Problem:
grub2-install is responsible for installing all these stages on the
target image. This goes against our design, as modifications outside
the filesystem should happen in the assembler, but modifications to
the filesystem should happen in a stage. In particular, we don't
want the contents of the image to differ in any way from the output
tree that is stored in our content store (the output of our last
stage). This causes a practical problem at the moment, as our
selinux stage is run before the assembler, and as such the grub
modules do not get selinux labels applied.
It turns out that we could split grub2-install in two as we want:
pass `--no-bootsector` to it to install only the modules, copy or
generate the first two stages as files under /boot, and then run
`grub2-bios-setup` to write the stages from /boot into the image
where they belong.
Regrettably, this does not work as both `grub2-install` and
`grub2-bios-setup` introspect the system and block devices they
are being run on to generate the right configuration. This is not
what we want, as we would like to specify the config explicitly
and run them independently of the target image. The specific bug
we get in both cases is that the canonical path containing our
object store cannot be found.
Before osbuild this was not a problem, as other installers would
install and assemble everything directly in the target image mounted
as a loopback device, something we explicitly do not want to do.
Solution:
This patch essentially reimplements grub2-install, or rather the
parts of it that we need. One change in behavior from the upstream
tool is that we no longer write the level one and level two boot
loaders to /boot before moving them into place, but just write them
directly where they belong (so they do not end up on the
filesystem).
The parts that copy files into /boot are now in the grub2 installer
and the parts that write the level one/two bootloaders are in the
qemu assembler.
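As a rough illustration (not the actual assembler code), writing the
first-stage boot loader amounts to copying at most 440 bytes to the
very start of the image, leaving the partition table untouched:

    def write_stage1(image_path, stage1_path):
        with open(stage1_path, "rb") as f:
            stage1 = f.read(440)  # boot code area only, never the partition table
        with open(image_path, "r+b") as image:
            image.seek(0)
            image.write(stage1)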
This achieves a few principles I think we should always adhere to:
- never run tools from the target image (no chroot)
- don't read/copy files from the target image that were written
by other stages. We already try to avoid sharing state, and
by treating the image as write-only, we avoid accidentally
sharing state through the target tree.
Based-on-suggestions-from: Javier Martinez Canillas <javierm@redhat.com>
With-god-like-debugging-and-fixes-by: Lars Karlitski <lubreni@redhat.com>
Signed-off-by: Tom Gundersen <teg@jklm.no>
Otherwise, sfdisk would pick one at random. We want our images to be
reproducible to the extent possible, so we must move all randomness
out of the assemblers when we can.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Opt in to supporting the most common ones; if we want to support
more, we can add support as the need arises.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This key carries no information and is never used anywhere. The json
files are not meant to be human readable, so simply drop this.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Use the unittest module from the standard library. Also, ensure that
separate runs of this test don't share an osbuild store and clean up
after themselves.
With contributions from Ondřej Budai and Tom Gundersen.
Treat outputs like we treat trees: store them in the object store. This
simplifies using osbuild and allows returning a cached version if one is
available.
This makes the `--output` parameter redundant. Remove it.
Introduce an output id, which is the checksum over a full pipeline,
including all stages and the assembler. The id of a pipeline did not
include assemblers before. To be less confusing, rename the existing id
to "tree id".
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.
Verify that the metadata dnf downloaded to install packages matches that
checksum. This way, this stage will give an error when a repository
changed between putting together the pipeline and running it.
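A minimal sketch of that verification, assuming the "checksum" option
has the form "<algorithm>:<hexdigest>" (the exact format is an
assumption):

    import hashlib

    def verify_repomd(path, checksum):
        algorithm, _, digest = checksum.partition(":")  # e.g. "sha256:..."
        with open(path, "rb") as f:
            actual = hashlib.new(algorithm, f.read()).hexdigest()
        if actual != digest:
            raise RuntimeError(f"repository metadata changed: {actual} != {digest}")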
Don't pass through arbitrary options. This means that pipeline repo
objects don't have the same options as dnf repo files anymore:
1. Hard code repo name to repo id. The name has no influence on the
resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't
have the key referenced in /etc. Change all gpgkeys to refer to the
key id and import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we need
it at some point.
We've been effectively using the basearch of the host, making the stage
non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. However,
pipelines that produce different outputs must themselves be different.
Thus, this patch includes the basearch in the pipeline.
In principle, this allows cross-arch builds. dnf should be the only
stage running binaries from the target tree. This is not yet tested.
XZ compression is really slowing down our tests. Additionally, we
throw away the resulting image right after the tests are done (at
least on CI). Let's just drop the compression.
The locale stage can no longer be used to set the keymap. Use the
keymap stage instead. Also, the stage was refactored to look like the
keymap and timezone stages, for consistency (systemd-firstboot is now
used).
Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.
As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
The best practice for creating a pipeline should be to include at least
one level of build-pipelines. This makes sure that the tools used to
generate the target image are well-defined.
In principle one could add several layers, though in practice one
would hope that the environment used to build the buildroot does not
affect the final image (and as we cannot recurse indefinitely anyway,
we fall back to simply using the host system in this case).
This only makes sense if the contents of the host system truly do not
affect the generated image, and as such we do not include any information
about the host when computing the hash that identifies a pipeline.
In fact, any image could be used in its place, as long as the required
tools are present. This commit takes advantage of that fact. Rather than
run a pipeline with the host as the build root, take a second pipeline
to generate the buildroot, but do not include this when computing the
pipeline id (so it is different from simply editing the original JSON).
This is necessary so we can use the same pipelines on significantly
different host systems (run with different --build-pipeline arguments).
In particular, it allows our test pipelines that generate f30 images
to be run unmodified on Travis (which runs Ubuntu).
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make sure we test the version of osbuild in the current checkout,
rather than the system instance.
Also default to using in-place directories for the object store
and output images. Using a tmpfs does not scale, especially on
CI infrastructure with limited memory.
The behavior can still be overridden by the environment variable, as
before; only the default changes.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Let the image be responsible for running its own test, and simply
listen for the output from the testsuite.
Hook this up with a standard f30 image that contains a simple boot
test case, using systemctl to verify that all services started
correctly.
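As a sketch of the host side of this: the qemu invocation and the
pass/fail markers printed by the image are assumptions, not what the
image actually emits:

    import subprocess

    def boot_and_wait_for_result(image):
        qemu = subprocess.Popen(
            ["qemu-system-x86_64", "-nographic", "-snapshot", image],
            stdout=subprocess.PIPE, text=True)
        try:
            for line in qemu.stdout:
                if "OSBUILD TEST: PASS" in line:  # hypothetical marker
                    return True
                if "OSBUILD TEST: FAIL" in line:  # hypothetical marker
                    return False
        finally:
            qemu.kill()
        return False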
This replaces the old web-server test, giving similar functionality.
The reason for the change is twofold: this way the tests are fully
specified in the pipeline, and thus easier to reproduce. Moreover, this
is less intrusive, as the test does not require network support in
the image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.
Update the pipelines accordingly.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Take this as an argument to __init__ in the same way that `base`
is.
This avoids us having to deal with the case of someone setting a
stage before the build, which does not work as the stage id will
be wrong.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The testing script is getting too big and not very well organized. In
this commit a new module `integration_tests` is introduced that contains
parts of the original testing script split into multiple files. The
content should be the same; the only difference is that now you can run
the tests by invoking `python3 -m test`.
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are content
addressable filesystem trees. Currently we only ever use their
content-hash internally, but the idea for this is basically Lars
Karlitski and Kay Sievers' `treesum()`. We may expose this in the
future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized with the
tree with the given id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
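A minimal usage sketch of that interface; the run_stages and assemble
callables are stand-ins, only the three methods behave as described
above:

    def build_and_assemble(store, tree_id, base_id, run_stages, assemble):
        if not store.has_tree(tree_id):
            with store.new_tree(tree_id, base_id=base_id) as path:
                # the tree is committed to the store only if this block
                # exits successfully
                run_stages(path)
        with store.get_tree(tree_id) as path:
            # read-only view of the finished tree, e.g. for the assembler
            assemble(path)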
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be manually flushed.
Update the travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
RPM packages are now kept in the output directory after the build so
that we know exactly which packages to copy to the test. The test
directory now contains a special directory for RPMs. The Fedora
developer portal is referenced from the README file.
The repository now contains a Vagrantfile for running the testing script
against an RPM package created locally using `make rpm`. To run this
test use `make vagrant-test`. setup.py was also modified to adhere to
packaging guidelines and not to install system-level executables.
The license is now included in the Python package using the
MANIFEST.in file.