Based on the UEFI sample (f30-base-uefi.json). NB: the inclusion
of dracut-config-generic is needed to disable "host-only" mode for
dracut, so that the initramfs includes the virtio_blk block device
driver needed to mount the root file system when running the image
in qemu.
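As a sketch, the relevant part of the dnf stage might look like this
(package set abridged and illustrative, shown only to indicate where
dracut-config-generic fits in):
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "packages": [ "@Core", "kernel", "dracut-config-generic" ]
  }
}
```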
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
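Sketched out, a pipeline using the new build object might look like
this (runner name and stage contents illustrative, not taken from the
actual samples):
```
{
  "build": {
    "runner": "f30",
    "pipeline": {
      "stages": [ ... ]
    }
  },
  "stages": [ ... ]
}
```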
Three runners are included in this patch. They're copies of osbuild-run
for now (except for some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
Downloading the gpg key is fragile and kept causing our tests to
fail. In general, we want to limit network access, so let's just
embed the gpg keys directly in the pipeline.
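For example, instead of a URL, a repo object can carry the armored key
inline (baseurl and key abridged here; both values are dummies):
```
{
  "baseurl": "https://example.com/fedora/30/x86_64",
  "gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n...\n-----END PGP PUBLIC KEY BLOCK-----"
}
```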
Fixes #133.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It is similar to the official Fedora cloud base image except for a few
minor differences. The reason for this divergence is that we don't want
to include all the hacks that are currently present in the official
kickstart file. You can see it here as a reference:
https://pagure.io/fedora-kickstarts/blob/master/f/fedora-cloud-base.ks#_149
build-from-yum.json is the one that's being used for testing on Ubuntu.
Remove base-from-yum.json, because it's confusing to have two similarly
named pipelines like this.
Otherwise, sfdisk would pick one at random. We want our images to be
reproducible to the extent possible, so we must move all randomness
out of the assemblers whenever we can.
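As an illustration, the assembler can be given the identifier
explicitly (option name hypothetical, value a dummy):
```
{
  "name": "org.osbuild.qemu",
  "options": {
    "ptuuid": "0x14fc63d2"
  }
}
```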
Signed-off-by: Tom Gundersen <teg@jklm.no>
This allows given packages to be excluded from the transaction. This
is useful if you want to install a group with certain exceptions.
A common thing to do in kickstart files is:
```
rm -f /boot/*-rescue*
```
By instead excluding the dracut-config-rescue package, we end up
with:
```
"deleted_files": [
"/etc/kernel/postinst.d",
"/usr/lib/dracut/dracut.conf.d/02-rescue.conf",
"/usr/lib/kernel/install.d/51-dracut-rescue.install",
"/boot/initramfs-0-rescue-ffffffffffffffffffffffffffffffff.img",
"/boot/vmlinuz-0-rescue-ffffffffffffffffffffffffffffffff"
],
```
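A sketch of the corresponding stage options (option name illustrative,
surrounding options elided):
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "packages": [ "@Core" ],
    "exclude_packages": [ "dracut-config-rescue" ]
  }
}
```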
Signed-off-by: Tom Gundersen <teg@jklm.no>
Opt in to supporting the most common ones; if we want to support more,
we can add them as the need arises.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This key carries no information and is never used anywhere. The json
files are not meant to be human readable, so simply drop this.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.
Verify that the metadata dnf downloaded to install packages matches
that checksum. This way, the stage will give an error if a repository
changed between putting together the pipeline and running it.
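A repo object would then look something like this (both values are
dummies):
```
{
  "baseurl": "https://example.com/fedora/30/x86_64",
  "checksum": "sha256:e6f72b4a1d4f4c0f4f9f9b6a8c0d1e2f3a4b5c6d7e8f90123456789abcdef012"
}
```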
This is similar to the previous commit for the dnf stage.
Don't pass through arbitrary options. This means that pipeline repo
objects don't have the same options as yum repo files anymore:
1. Hard code repo name to repo id. The name has no influence on the
resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't
have the key referenced in /etc. Change all gpgkeys to refer to the
key id and import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we need
it at some point.
Also be less verbose.
Don't pass through arbitrary options. This means that pipeline repo
objects don't have the same options as dnf repo files anymore:
1. Hard code repo name to repo id. The name has no influence on the
resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't
have the key referenced in /etc. Change all gpgkeys to refer to the
key id and import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we need
it at some point.
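After this change, a repo object is reduced to roughly the following
shape (field names a sketch, values dummies; gpgkey refers to an
imported key id as described above):
```
{
  "id": "fedora",
  "baseurl": "https://example.com/fedora/30/x86_64",
  "gpgkey": "CFC659B9"
}
```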
We've been effectively using the basearch of the host, making the stage
non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. However,
pipelines that produce different outputs must themselves be different.
Thus, this patch includes the basearch in the pipeline.
In principle, this allows cross-arch builds. dnf should be the only
stage running binaries from the target tree. This is not yet tested.
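With this change, the stage options carry the architecture explicitly,
for example (releasever shown for context, assumed to be part of the
same options):
```
{
  "name": "org.osbuild.dnf",
  "options": {
    "releasever": "30",
    "basearch": "x86_64"
  }
}
```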
Both tests work just fine in CI, so we should run them every time.
They are introduced as separate jobs because jobs run in parallel,
which takes less time overall even though they do not share an
object store.
Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.
As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
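For example, a 3 GiB qcow2 image would be requested with
size = 3 * 1024³ = 3221225472 bytes (option names a sketch):
```
{
  "name": "org.osbuild.qemu",
  "options": {
    "format": "qcow2",
    "filename": "base.qcow2",
    "size": 3221225472
  }
}
```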
Let the image be responsible for running its own test, and simply
listen for the output from the testsuite.
Hook this up with a standard f30 image that contains a simple boot
test case, using systemctl to verify that all services started
correctly.
This replaces the old web-server test, giving similar functionality.
The reason for the change is twofold: the tests are now fully
specified in the pipeline, and thus easier to reproduce. Moreover,
this is less intrusive, as the test does not require network support
in the image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Adds a new systemd unit to the image that will be pulled in by default,
run a given command, forward the output to a virtio serial port and
shut down the machine.
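A heavily hedged sketch of what enabling this in a pipeline could look
like (stage and option names hypothetical, command illustrative):
```
{
  "name": "org.osbuild.boot-test",
  "options": {
    "command": "systemctl is-system-running --wait"
  }
}
```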
We add a sample that uses this to verify that systemd considers the
machine successfully booted. A simple way to run this test from the
command line is to use
`$ socat UNIX-LISTEN:qemu.sock -`
to listen for the output: `running` means success, while `degraded` or
`maintenance` mean failure.
The image should then be booted using something like
`$ qemu-kvm -m 1024 -nographic -monitor none -serial none -chardev socket,path=qemu.sock,id=char0 -device virtio-serial -device virtserialport,chardev=char0,id=test0 -snapshot base.qcow2`
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives shell access into the image on a given tty. Useful for
testing and debugging, while minimally affecting the image.
Note that this must never be used in production, as it allows root
access without a password.
For instance, this could be used to verify that an image was fully
booted:
```
[teg@teg-x270 osbuild]$ qemu-kvm -m 1024 -nographic -serial mon:stdio -snapshot base.qcow2
sh-5.0# systemctl is-system-running --wait
running
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.
Update the pipelines accordingly.
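For instance (option name hypothetical, placed with the other root
filesystem options of the grub2 stage as an assumption; the uuid is a
placeholder):
```
{
  "name": "org.osbuild.grub2",
  "options": {
    "root_fs_uuid": "<uuid of the root filesystem>",
    "root_fs_ro": true
  }
}
```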
Signed-off-by: Tom Gundersen <teg@jklm.no>
grub2-mkrelpath uses /proc/self/mountinfo to find the source of the file
system it is installed to. This breaks in a container.
Add org.osbuild.fix-bls which goes through /boot/loader/entries and
fixes paths by removing anything before /boot.
Each pipeline is now self-contained, without references to another.
However, as the final stage in a pipeline is saved to the content
store, we are able to reuse it if one pipeline is a prefix of
another, as described in the previous commit. This makes the
concept of a base redundant.
The ObjectStore must take a directory as an argument, never None, so
the conditional assertion for this in Pipeline.run() can safely be
removed.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The build pipeline is a sub-pipeline used to generate the build
tree to use rather than the current root directory. Builds can be
nested arbitrarily deep, but ultimately we fall back to the
current logic when no build property is found.
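Sketched out (stage contents elided; note that at this point the build
property is itself just a pipeline):
```
{
  "build": {
    "build": { "stages": [ ... ] },
    "stages": [ ... ]
  },
  "stages": [ ... ]
}
```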
Just like the tree after the last stage of a regular pipeline ends
up in the object store, so does each build tree at the moment (a
build sub-pipeline really is just a regular pipeline in its own
right). We may want to avoid both instances of this implicit
storing semantics, and rather make it something the caller opts
in to. However, for now that is left as a future optimization.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Travis uses Ubuntu, which does not ship dnf, so introduce a yum
stage that allows us to test actual generation of trees on Travis.
We use this to generate a tree containing the tools necessary to
create arbitrary Fedora-based build images in the future. We base
this on Fedora 27, as that is the last version that is installable
using yum rather than dnf.
In the future, once we support pipelines with nested build-images,
rather than just using the host OS as the build image, this will
allow us to bootstrap arbitrary pipelines on Travis.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Since we no longer use grub2-mkconfig but write static configuration,
we can drop most of the helpers.
The partition table id was never used in the first place. We use
filesystem UUIDs, not partition UUIDs, to name our root/boot partitions.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Compute a hash based on the content of a stage, together with the
hash of its parent stage.
The output of a pipeline is saved by the id of the last stage.
This is largely equivalent to the current logic, where it is the
pipeline that contains the id, but this means that the ids are
independent of how pipelines are split: the only thing that matters
is the sequence of stages, not whether they are in one or several
interdependent pipelines.
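Conceptually, with || denoting that both inputs feed the same hash
(the exact serialization being an implementation detail):
```
id(stage[n]) = sha256(content(stage[n]) || id(stage[n-1]))
```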
Signed-off-by: Tom Gundersen <teg@jklm.no>
This removes the possibility of passing in arbitrary input data. We
now restrict ourselves to explicitly specified files/directories or
a base tree given by its pipeline id.
This drops the tar/tree stages/assemblers, as the tree/untree ones
are implicit in osbuild, and if we wish to also support compressed
trees, then we should add that to osbuild core as an option.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This allows one pipeline to build on top of another. When the pipeline
id of one pipeline is specified in another, the tree is initialized
with the output of the given pipeline.
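A sketch, with the key name and a dummy id for illustration:
```
{
  "base": "a65d5b7bb41eb84c05a594cdea7d4d4efb0e79dbd84a91c0f64ecd148cabf63f",
  "stages": [ ... ]
}
```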
The caller must ensure that the base pipeline has already been run
and that its content is in the content store.
This renders both the io.weldr.untree stage and the --input argument
redundant.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Whenever an assembler is not specified, the output tree is instead
saved to the content store, in a directory named after the pipeline
id.
This should render the io.weldr.tree assembler redundant.
In order to build the samples as before, specify the content store
as the input directory to build any pipeline that uses the
io.weldr.untree stage.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The boot loader snippets were not being generated on f29; we may want
to revisit that, but for now let's work against f30.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These are meant to test the various assemblers and stages, and to show
how pipelines can be created. However, they are not necessarily meant
to be the best way to create any given image.
Note that some of the pipelines depend on each other.