Introduce an output id, which is the checksum over a full pipeline,
including all stages and the assembler. The id of a pipeline did not
include the assembler before. To avoid confusion, rename the existing
id to "tree id".
We only want to upgrade to a new version of pip after explicitly
opting in. Otherwise, PRs may randomly start failing just because
pylint was upgraded, for reasons unrelated to the code change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Python buffers stdout and stderr if they are not connected to a
terminal. This messes with the ordering of osbuild and stage output,
because stages produce output more quickly.
Globally disable output buffering on Travis.
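This is typically done by exporting PYTHONUNBUFFERED=1 in the CI
environment; an equivalent in-process sketch (Python 3.7+), should
osbuild ever need to force it itself:
```
import sys

# Flush stdout/stderr per line even when they are pipes, so stage
# output interleaves correctly with osbuild's own messages.
sys.stdout.reconfigure(line_buffering=True)
sys.stderr.reconfigure(line_buffering=True)
```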
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.
Verify that the metadata dnf downloaded to install packages matches that
checksum. This way, this stage will give an error when a reposiory
changed between putting together the pipeline and running it.
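A minimal sketch of the verification, assuming a sha256 digest and a
hypothetical path to the downloaded metadata:
```
import hashlib

def verify_repomd(path, expected):
    # Compare the sha256 of the downloaded repodata/repomd.xml
    # against the checksum recorded in the pipeline.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"repository changed: {digest} != {expected}")
```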
This is similar to the previous commit for the dnf stage.
Don't pass through arbitrary options. This means that pipeline repo
objects no longer have the same options as yum repo files:
1. Hard-code the repo name to the repo id. The name has no influence
on the resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when a gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't
have the key referenced in /etc. Change all gpgkeys to refer to the
key id and import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we
need it at some point.
Also be less verbose.
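A hedged sketch of what the stage might now write, given a repo
object with just `baseurl` and an optional `gpgkey` (helper name and
path hypothetical):
```
def write_repo(repo_id, repo):
    lines = [
        f"[{repo_id}]",
        f"name={repo_id}",             # name hard-coded to the repo id
        f"baseurl={repo['baseurl']}",  # a single URL, never a list
    ]
    if "gpgkey" in repo:
        lines += ["gpgcheck=1",        # verify whenever a key is given
                  f"gpgkey={repo['gpgkey']}"]
    with open(f"/etc/yum.repos.d/{repo_id}.repo", "w") as f:
        f.write("\n".join(lines) + "\n")
```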
Don't pass through arbitrary options. This means that pipeline repo
objects no longer have the same options as dnf repo files:
1. Hard-code the repo name to the repo id. The name has no influence
on the resulting image and should thus not appear in a pipeline.
2. Set gpgcheck=1 when a gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't
have the key referenced in /etc. Change all gpgkeys to refer to the
key id and import them manually.
3. Don't allow lists for baseurl and gpgkey. We can add that if we
need it at some point.
We've been effectively using the basearch of the host, making the
stage non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. However,
pipelines producing different outputs must themselves be different.
Thus, this patch includes the basearch in the pipeline.
In principle, this allows cross-arch builds: dnf should be the only
stage running binaries from the target tree. This is not yet tested.
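With the dnf API, pinning the basearch from the pipeline rather than
detecting it on the host might look roughly like this (`options`
stands for the stage's parsed pipeline options):
```
import dnf

base = dnf.Base()
# Use the basearch recorded in the pipeline options, not the one
# detected on the host, so the stage stays reproducible.
base.conf.substitutions["basearch"] = options["basearch"]  # e.g. "x86_64"
```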
Both tests work in CI just fine, so we should run them every time.
They are introduced as separate jobs: jobs run in parallel, so the
total runtime is shorter even though they do not share an object
store.
XZ compression is really slowing down our tests. Additionally, we
throw away the resulting image right after the tests are done (at
least on CI). Let's just drop the compression.
In tests we often use the tar assembler as the final stage. This
means we compress the image tree and decompress it right away. For
this purpose it is nice to have the option of no compression at all,
which can drastically improve CI running time.
A better option would be to not use tar at all and instead let
osbuild just dump the resulting tree. However, we felt this behaviour
needs more discussion and we need a fix asap.
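A sketch of how the assembler might handle the option, assuming a
hypothetical `compression` key where an empty value means none:
```
import subprocess

def assemble(tree, output, compression="xz"):
    command = ["tar", "-cf", output, "-C", tree, "."]
    if compression:
        command.insert(1, f"--{compression}")  # e.g. --xz; omit for none
    subprocess.run(command, check=True)
```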
The locale stage can no longer be used to set the keymap. Use the
keymap stage instead. The stage was also refactored to look like the
keymap and timezone stages, for consistency (systemd-firstboot is now
used).
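After the refactor, setting the locale is roughly a single
systemd-firstboot call against the target tree (sketch):
```
import subprocess

def set_locale(tree, locale):
    # systemd-firstboot writes /etc/locale.conf inside the target tree
    subprocess.run(
        ["systemd-firstboot", f"--root={tree}", f"--locale={locale}"],
        check=True,
    )
```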
We use the chroot connection type to "connect" to the target
filesystem where ansible should run the playbook. However, the target
is not a booted system, it's just an image of a not-yet-booted one.
Unfortunately, many ansible modules cannot be used inside a
not-booted system. Also, a core principle of osbuild is to never boot
the image currently being built. Therefore, we decided to remove the
ansible stage.
If ansible is needed in the future, there is the possibility to add a
new ansible stage, which would run the playbook during the first
boot.
The directory /tmp is hosted on tmpfs. Therefore the image size can
be limited by the amount of memory. By moving the image to /var/tmp
we ensure that the file is hosted on disk, allowing us to build
bigger images, or to build images on memory-constrained machines
(e.g. CI).
BuildRoot now creates a new /var mount pointing to a temporary
directory in the host's /var/tmp. This gives us temporary storage
inside the container that is not hosted on tmpfs. Thanks to that, we
can move larger files out of the tmpfs-backed part of the filesystem,
saving memory on machines with low memory capacity.
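A rough sketch of the mount setup, assuming the build root is
assembled with bind mounts (helper name hypothetical):
```
import subprocess
import tempfile

def mount_var(root):
    # Back the build root's /var with a directory on disk, not tmpfs.
    var = tempfile.mkdtemp(prefix="osbuild-var-", dir="/var/tmp")
    subprocess.run(["mount", "--bind", var, f"{root}/var"], check=True)
    return var
```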
Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.
As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
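Creating the raw image from the pipeline-provided size might then
look like this (sketch):
```
def create_image(path, size):
    if size % 512 != 0:
        raise ValueError("size must be a multiple of 512 bytes")
    with open(path, "wb") as f:
        f.truncate(size)  # sparse file of exactly `size` bytes
```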
Assemblers are always run in their own, clean environment and can be
sure that there's only one instance of themselves running. Remove the
extra layer of temporary directory and use static names.
Also drop some redundant tests; there is no need to run the same
tests several times. It was useful to get Travis up and running, as
one test is a subset of the other, which helps track down problems,
but we don't need that for the common case.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The best practice for creating a pipeline should be to include at
least one level of build pipelines. This makes sure that the tools
used to generate the target image are well-defined.
In principle one could add several layers, though in practice, one
would hope that the environment used to build the buildroot does not
affect the final image (and as we cannot recurse indefinitely anyway,
we fall back to simply using the host system in this case).
This only makes sense if the contents of the host system truly do not
affect the generated image, and as such we do not include any
information about the host when computing the hash that identifies a
pipeline.
In fact, any image could be used in its place, as long as the required
tools are present. This commit takes advantage of that fact. Rather
than run a pipeline with the host as the build root, take a second
pipeline to generate the buildroot, but do not include this when
computing the pipeline id (so it is different from simply editing the
original JSON).
This is necessary so we can use the same pipelines on significantly
different host systems (run with different --build-pipeline
arguments). In particular, it allows our test pipelines that generate
f30 images to be run unmodified on Travis (which runs Ubuntu).
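Schematically (names hypothetical), the id covers only the pipeline
itself; the build pipeline passed on the command line never enters
the hash:
```
import hashlib
import json

def pipeline_id(pipeline, build_pipeline=None):
    # build_pipeline is deliberately ignored: the id must be the same
    # whether the build root comes from the host or from a pipeline.
    blob = json.dumps(pipeline, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()
```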
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make sure we test the version of osbuild in the current checkout,
rather than the system instance.
Also default to using in-place directories for the object store
and output images. Using a tmpfs does not scale, especially on
CI infrastructure with limited memory.
The behavior can still be overridden by the environment variable,
as before; only the default changes.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Let the image be responsible for running its own test, and simply
listen for the output from the testsuite.
Hook this up with a standard f30 image that contains a simple boot
test case, using systemctl to verify that all services started
correctly.
This replaces the old web-server test, giving similar functionality.
The reason for the change is twofold: this way the tests are fully
specified in the pipeline, so they are easier to reproduce. Moreover,
this is less intrusive, as the test does not require network support
in the image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is achieved using "jobs" from Packit-as-a-Service, more
specifically the propose_downstream job. Furthermore, the
sync_from_downstream job is configured to keep the spec file
synchronized and prevent merge conflicts for new releases.
A small change in the Makefile was also necessary, as it did not
reflect the current state of the spec file in Fedora dist-git (the
tarball name is different). The spec file itself is not modified in
this commit, because it will be synchronized automatically using
Packit.
Add a new systemd unit to the image that will be pulled in by
default, run a given command, forward the output to a virtio serial
port, and shut down the machine.
We add a sample that uses this to verify that systemd considers the
machine successfully booted. A simple way to run this test from the
command line is to use
`$ socat UNIX-LISTEN:qemu.sock -`
to listen for either `running` for success or `degraded` or
`maintenance` for failure.
The image should then be booted using something like
`$ qemu-kvm -m 1024 -nographic -monitor none -serial none -chardev socket,path=qemu.sock,id=char0 -device virtio-serial -device virtserialport,chardev=char0,id=test0 -snapshot base.qcow2`
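Instead of socat, a small Python listener works just as well (sketch;
qemu connects to the socket as a client once the guest boots):
```
import socket

srv = socket.socket(socket.AF_UNIX)
srv.bind("qemu.sock")  # must not exist yet
srv.listen(1)
conn, _ = srv.accept()
status = conn.recv(4096).decode().strip()
print(status)  # "running" on success, "degraded"/"maintenance" on failure
```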
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives shell access into the image on a given tty. Useful for
testing and debugging, while minimally affecting the image.
Note that this must never be used in production, as it allows root
access without a password.
For instance this could be used to verify that an image was fully
booted:
```
[teg@teg-x270 osbuild]$ qemu-kvm -m 1024 -nographic -serial mon:stdio -snapshot base.qcow2
sh-5.0# systemctl is-system-running --wait
running
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
We used to let mkfs.ext4 initialize the filesystem for us, but it
turns out that the metadata attributes of the root directory were
not being initialized from the source tree. In particular, this
meant that the SELinux labels were left as unconfined_t, rather
than root_t, which would not allow us to boot in enforcing mode.
An alternative approach might be to fix up the root inode manually,
while still doing the rest using mkfs.ext4, but let's leave that
for the future if it turns out to be worth it.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.
Update the pipelines accordingly.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Default to True, which is what dnf defaults to, but allow it to be
overridden in the pipeline. Whether this option should be used should
be a distro policy, but for now we just want images compatible with
the official Fedora ones.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We don't want non-functional configuration in the pipeline; we want
to restrict ourselves to options that change the final image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when moved to a
separate file.
In __init__.py only import what is meant to be public API.
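For example (module and class names illustrative):
```
# osbuild/pipeline.py defines class Pipeline; other files in the
# package refer to it through its module:
from . import pipeline

p = pipeline.Pipeline()

# osbuild/__init__.py re-exports only the public API:
# from .pipeline import Pipeline
```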
Signed-off-by: Tom Gundersen <teg@jklm.no>
grub2-mkrelpath uses /proc/self/mountinfo to find the source of the file
system it is installed to. This breaks in a container.
Add org.osbuild.fix-bls which goes through /boot/loader/entries and
fixes paths by removing anything before /boot.
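A sketch of the fix-up, assuming BLS snippets under
/boot/loader/entries whose linux and initrd lines carry a bogus
prefix before /boot:
```
import glob
import re

def fix_bls(root=""):
    # Strip everything before /boot on the linux and initrd lines,
    # which grub2-mkrelpath mangled inside the container.
    for entry in glob.glob(f"{root}/boot/loader/entries/*.conf"):
        with open(entry) as f:
            text = f.read()
        text = re.sub(r"^(linux|initrd)\s+.*(/boot/\S+)$", r"\1 \2",
                      text, flags=re.MULTILINE)
        with open(entry, "w") as f:
            f.write(text)
```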
Bring the order of arguments in line with how they are used (and
also how it is conceptually closer to the pipeline JSON document).
This makes no practical difference, as the two arguments were both
just used for computing the hash.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Each pipeline is now self-contained, without references to another.
However, as the result of the final stage in a pipeline is saved to
the object store, we are able to reuse it if one pipeline is a
prefix of another, as described in the previous commit. This makes
the concept of a base redundant.
The ObjectStore must take a directory as argument, never None, so
the conditional assertion for this in Pipeline.run() is safe to
remove.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Don't do this only for the base, but for any prefix of the current
pipeline.
Note that if two pipelines share a prefix, but one is not a prefix
of the other, no sharing is possible. Only a proper prefix can be
reused by another pipeline, as only the result of the last pipeline
is saved to the object store (this restriction could be changed in
the future).
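Conceptually, the lookup walks the prefixes of the stage list,
longest first, and resumes from the first tree the object store
already has (sketch; helper names hypothetical):
```
def find_cached_prefix(object_store, stages):
    # Try the longest prefix first, then progressively shorter ones.
    for n in range(len(stages), 0, -1):
        tree_id = compute_tree_id(stages[:n])
        if object_store.contains(tree_id):
            return n, tree_id  # resume building after stage n
    return 0, None             # nothing cached, build from scratch
```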
Signed-off-by: Tom Gundersen <teg@jklm.no>