Move the `test_boot` test from Travis over to the GitHub Actions-based
CI. This is the last test on Travis, so Travis CI can now be disabled
if we so wish.
This change leaves a valid `travis.yml` file around, since Travis will
still be enabled on the repository. If we want to get rid of it, we
should first disable Travis and then drop the file.
Continue our effort to move to GitHub Actions. This imports the runtime
tests from Travis into GitHub Actions. The `test_boot` test is left on
Travis for now, since it requires nested KVM, which is not yet
available on GitHub Actions.
Even on Ubuntu we can build rpm-based pipelines without bootstrapping
via Fedora 27. Drop the build env from the Travis config and from our
samples directory.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Now that RPMs are built natively in Fedora containers via GitHub
Actions, we no longer need to build them in Travis CI.
Signed-off-by: Major Hayden <major@redhat.com>
Building RPMs directly on Ubuntu is somewhat possible, but it's fragile
and cannot cover all the quirks of an RPM build. This commit switches
the RPM build tests to using Fedora containers. This isn't the best
solution either, but imho it's much better than the previous one.
This adds a source with support for downloaded files. The files are
indexed by their content hash, and the only option is their URL.
The main use case for this is downloading rpms: it allows depsolving
to be done outside of osbuild, network access to be restricted, and
downloaded rpms to be reused between runs.
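As a rough illustration, a manifest using this source might carry a
section like the following (the source name, checksum, and URL are
assumptions for illustration, not taken from this change):

    "sources": {
        "org.osbuild.files": {
            "urls": {
                "sha256:0123abcd...": "https://mirror.example.com/foo.rpm"
            }
        }
    }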
Each source is now passed two additional arguments, a cache directory
and an output directory. Both are in the source's namespace, and
the source is responsible for managing them. Each directory may
contain contents from previous runs, but neither is ever guaranteed
to do so.
Downloaded contents may be saved to the cache and reused between
runs, and the requested content should be written to the output
directory. If secrets are used, the source must only ever write
contents to the output that correspond to the available secrets
(rather than contents from the cache from previous runs).
Each stage is passed an additional argument, a sources directory.
The directory is read-only and contains a subdirectory named after
each used source, which will contain the requested contents when
the `Get()` call returns (if the source uses this functionality).
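A minimal sketch of this contract for a download-style source (the
function shape and the `urls` mapping are illustrative, not the actual
interface):

    import os
    import shutil
    import urllib.request

    def get(checksums, urls, cache_dir, output_dir):
        # cache_dir may hold downloads from previous runs; output_dir
        # is what stages later see, read-only, under the sources dir
        for checksum in checksums:
            cached = os.path.join(cache_dir, checksum)
            if not os.path.exists(cached):
                with urllib.request.urlopen(urls[checksum]) as r, \
                        open(cached, "wb") as f:
                    shutil.copyfileobj(r, f)
            # only expose content corresponding to this run's request
            # (and its secrets), never arbitrary leftovers of the cache
            shutil.copy(cached, os.path.join(output_dir, checksum))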
Based on a patch by Lars Karlitski.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.
In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.
Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.
The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.
Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.
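Conceptually, the exchange with `org.osbuild.dnf` might look like this
(the shapes are illustrative; only the checksums-in,
repository-configuration-out contract is taken from this change):

    request: { "checksums": ["sha256:0123..."] }
    reply:   { "sha256:0123...": { "baseurl": "https://mirror.example.com/os/" } }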
This API is only available to stages right now.
The nbd device might not be ready after `qemu-nbd --connect` returns,
leading to access errors such as this further down:
    sfdisk: cannot open /dev/nbd12: Inappropriate ioctl for device
Fix this by polling the device with `nbd-client --check <device>`.
Also, the nbd device might not be released after `qemu-nbd --disconnect`
returns. Fix this by using `nbd-client --disconnect`, which waits.
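A sketch of the polling described above (timeout and interval are
arbitrary choices, not part of this change):

    import subprocess
    import time

    def wait_for_nbd(device, timeout=10.0, interval=0.25):
        # nbd-client --check exits 0 once the device is connected
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            check = subprocess.run(["nbd-client", "--check", device],
                                   stdout=subprocess.DEVNULL)
            if check.returncode == 0:
                return
            time.sleep(interval)
        raise TimeoutError(f"{device} did not become ready")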
This introduces a new test dependency on nbd-client (in the nbd package
on Fedora).
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
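For example (the runner name is illustrative):

    "build": {
        "runner": "org.osbuild.rhel82",
        "pipeline": {
            "stages": [ ... ]
        }
    }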
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
The tests from the integration_tests directory were superseded by the
new stage tests.
The Vagrant integration seems not to have been working since
ea68bb0c26, which dropped the test-setup.py it relies on. Remove it
for now. If we want it back, we should consider that in a separate PR.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Similar to the existing test, but uses qemu-nbd to mount the generated
image.
Using unittest.TestCase.subTest() for now, which means that the tests
aren't very independent. I think this is fine in this case, because
we're testing images independently from each other, reusing the base
tree in the store.
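A sketch of the structure (image formats and the helper are
placeholders, not the actual test code):

    import unittest

    def build_and_boot(fmt):
        """Placeholder for the real build-and-boot helper."""
        return True

    class TestImages(unittest.TestCase):
        def test_images(self):
            # one test method, several images: failures are reported
            # per subTest, and all iterations reuse the same base tree
            for fmt in ["qcow2", "raw"]:
                with self.subTest(fmt=fmt):
                    self.assertTrue(build_and_boot(fmt))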
The stage testing is based on the output of the tree-diff tool. During
one test, two pipelines are run and their outputs are compared using
tree-diff. The diff is then compared with the expected diff included in
the repository.
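An expected diff checked into the repository might look roughly like
this (keys and values are illustrative, not the actual format):

    {
        "added_files": ["/etc/hostname"],
        "deleted_files": [],
        "differences": {}
    }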
Use the unittest module from the standard library. Also, ensure that
separate runs of this test don't share an osbuild store and that they
clean up after themselves.
With contributions from Ondřej Budai and Tom Gundersen.
Treat outputs like we treat trees: store them in the object store. This
simplifies using osbuild and allows returning a cached version if one is
available.
This makes the `--output` parameter redundant. Remove it.
We only want to upgrade to a new version of pip after explicitly
opting in. Otherwise, PRs may randomly start failing just because
pylint was upgraded, unrelated to the code change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Python buffers stdout and stderr if they're not connected to a
terminal. This messes with the ordering of osbuild and stage output,
because stages produce output more quickly.
Globally disable output buffering on Travis.
Both tests work in CI just fine, so we should run them every time. I
introduce them as separate jobs, because jobs run in parallel; this
takes less time overall, even though they do not share an object store.
Also drop some redundant tests; there is no need to run the same
tests several times. Running one test that is a subset of another was
useful for tracking down problems while getting Travis up and running,
but we don't need that for the common case.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The build pipeline is a sub-pipeline used to generate the build
tree to use instead of the current root directory. It can be
nested arbitrarily deep, but ultimately we fall back to the
current logic when no build property is found.
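Schematically (the shape is illustrative), a pipeline with a build
sub-pipeline nests like this:

    {
        "build": {
            "build": { ... },
            "stages": [ ... ]
        },
        "stages": [ ... ]
    }

where the inner `build` may nest further, and the outer `stages` run
inside the tree produced by the build sub-pipeline.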
Just as the tree after the last stage of a regular pipeline ends
up in the object store, so currently does each build tree (as the
build sub-pipeline really is just a regular pipeline in its own
right). We may want to avoid both of these instances of the implicit
storing semantics, and rather make it something the caller opts in
to. However, for now that is left as a future optimization.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are
content-addressable filesystem trees. Currently we only ever use
their content hash internally, but the idea for this is basically
Lars Karlitski and Kay Sievers' `treesum()`. We may expose this in
the future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized with the
tree with the given id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
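Illustrative usage of the three methods (the store path, argument
spellings, and helper names are placeholders):

    store = ObjectStore("/var/cache/osbuild/store")

    if not store.has_tree(tree_id):
        with store.new_tree(tree_id, base_id=base_id) as path:
            run_stages(path)  # placeholder; the tree is committed
                              # only if this block exits cleanly

    with store.get_tree(tree_id) as path:
        inspect_tree(path)  # placeholder; path is a read-only view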
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be flushed manually.
Update the travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Travis uses Ubuntu, which does not ship dnf, so introduce a yum
stage that allows us to test actual generation of trees on Travis.
We use this to generate a tree containing the tools necessary to
create arbitrary Fedora-based build images in the future. We base
this on Fedora 27, as that is the last version that is installable
using yum rather than dnf.
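A stage entry for this might look roughly as follows (the stage name
and option names are assumptions, not taken from this message):

    {
        "name": "org.osbuild.yum",
        "options": {
            "repos": [{ "baseurl": "https://.../fedora/27/x86_64/os/" }],
            "packages": ["yum", "dnf", "e2fsprogs"],
            "releasever": "27"
        }
    }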
In the future, once we support pipelines with nested build-images,
rather than just using the host OS as the build image, this will
allow us to bootstrap arbitrary pipelines on Travis.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Let's always use the latest available Ubuntu release for our CI. We
are interested in potentially building old images, and in using old
images as build images, but having an old distro as the host is not
necessarily an aim. If we want to test with a greater diversity of
distros (which we do), we should do that in VMs; this should just
be for the simple/quick case.
Also restructure a bit to allow for more (named) tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>