Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.
As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
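For illustration, an assembler entry with an explicit size might look roughly like this (the assembler name and option layout are assumptions, not the exact schema):
```
# illustrative only: assembler name and option layout are assumptions
assembler = {
    "name": "org.osbuild.qcow2",
    "options": {
        "size": 3221225472,  # bytes, a multiple of 512 (here 3 GiB)
    },
}
```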
Assemblers are always run in their own, clean environment and can be
sure that there's only one instance of themselves running. Remove the
extra layer of temporary directory and use static names.
Also drop some redundant tests; there is no need to run the same
tests several times. They were useful while getting Travis up and
running, as one test is a subset of the other, which helps track
down problems, but we don't need that for the common case.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The best practice for creating a pipeline should be to include at least
one level of build-pipelines. This makes sure that the tools used to
generate the target image are well-defined.
In principle one could add several layers, though in practice, one would
hope that the environment used to build the buildroot does not affect the
final image (and as we cannot recur indefinitely anyway, we fall back
to simply using the host system in this case).
This only makes sense if the contents of the host system truly do not
affect the generated image, and as such we do not include any information
about the host when computing the hash that identifies a pipeline.
In fact, any image could be used in its place, as long as the required
tools are present. This commit takes advantage of that fact. Rather than
run a pipeline with the host as the build root, take a second pipeline
to generate the buildroot, but do not include this when computing the
pipeline id (so it is different from simply editing the original JSON).
This is necessary so we can use the same pipelines on significantly
different host systems (run with different --build-pipeline arguments).
In particular, it allows our test pipelines that generate f30 images
to be run unmodified on Travis (which runs Ubuntu).
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make sure we test the version of osbuild in the current checkout,
rather than the system instance.
Also default to using in-place directories for the object store
and output images. Using a tmpfs does not scale, especially on
CI infrastructure with limited memory.
The behavior can still be overridden by the environment variable,
as before, only the default changes.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Let the image be responsible for running its own test, and simply
listen for the output from the testsuite.
Hook this up with a standard f30 image that contains a simple boot
test case, using systemctl to verify that all services started
correctly.
This replaces the old web-server test, giving similar functionality.
The reason for the change is twofold: this way the tests are fully
specified in the pipeline, making them easier to reproduce. Moreover,
this is less intrusive, as the test does not require network support
in the image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is achieved using "jobs" from Packit-as-a-Service, more
specifically the propose_downstream job. Furthermore, the
sync_from_downstream job is configured to keep the spec file
synchronized and prevent merge conflicts for new releases.
Also, a small change to the Makefile was necessary, as it did not
reflect the current state of the spec file in Fedora dist-git (the
tarball name is different). The spec file itself is not modified in
this commit, because it will be synchronized automatically using
Packit.
Add a new systemd unit to the image that will be pulled in by default,
run a given command, forward the output to a virtio serial port, and
shut down the machine.
We add a sample that uses this to verify that systemd considers the
machine successfully booted. A simple way to run this test from the
command line is to use
`$ socat UNIX-LISTEN:qemu.sock -`
to listen for either `running` for success or `degraded` or
`maintenance` for failure.
The image should then be booted using something like
`$ qemu-kvm -m 1024 -nographic -monitor none -serial none -chardev socket,path=qemu.sock,id=char0 -device virtio-serial -device virtserialport,chardev=char0,id=test0 -snapshot base.qcow2`
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives shell access into the image on a given tty. Useful for
testing and debugging, while minimally affecting the image.
Note that this must never be used in production, as it allows root
access without a password.
For instance this could be used to verify that an image was fully
booted:
```
[teg@teg-x270 osbuild]$ qemu-kvm -m 1024 -nographic -serial mon:stdio -snapshot base.qcow2
sh-5.0# systemctl is-system-running --wait
running
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
We used to let mkfs.ext4 initialize the filesystem for us, but it
turns out that the metadata attributes of the root directory were
not being initialized from the source tree. In particular, this
meant that the SELinux labels were left as unconfined_t, rather
than root_t, which would not allow us to boot in enforcing mode.
An alternative approach might be to fixup the root inode manually,
while still doing the rest using mkfs.ext4, but let's leave that
for the future if it turns out to be worth it.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.
Update the pipelines accordingly.
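As a sketch, the pipeline might carry the flag like this (the stage and option names are illustrative, not the actual schema):
```
# a sketch only: stage and option names are assumptions
stage = {
    "name": "org.osbuild.grub2",
    "options": {
        "root_fs_ro": True,  # mount the root filesystem read-only at boot
    },
}
```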
Signed-off-by: Tom Gundersen <teg@jklm.no>
Default to True, which is what dnf defaults to, but allow it to be
overridden in the pipeline. Whether to use this option should be a
distro policy, but for now we just enable it to get images compatible
with the official Fedora ones.
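For illustration, a stage entry might carry the option like this; the name mirrors dnf's own install_weak_deps setting, but the exact schema here is an assumption:
```
# sketch only: the option name mirrors dnf's install_weak_deps setting
stage = {
    "name": "org.osbuild.dnf",
    "options": {
        "install_weak_deps": True,  # dnf's default; False gives leaner images
    },
}
```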
Signed-off-by: Tom Gundersen <teg@jklm.no>
We don't want non-functional configuration in the pipeline; we want to
restrict ourselves to options that change the final image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when moved to a
separate file.
In __init__.py, only import what is meant to be the public API.
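A minimal sketch of the convention (module and class names are illustrative):
```
# osbuild/__init__.py -- only export the public API (names illustrative)
from .pipeline import Pipeline

__all__ = ["Pipeline"]

# osbuild/pipeline.py -- refer to siblings through the module name
from . import objectstore

store = objectstore.ObjectStore("/var/lib/osbuild/store")
```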
Signed-off-by: Tom Gundersen <teg@jklm.no>
grub2-mkrelpath uses /proc/self/mountinfo to find the source of the file
system it is installed to. This breaks in a container.
Add org.osbuild.fix-bls which goes through /boot/loader/entries and
fixes paths by removing anything before /boot.
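A hedged sketch of what the stage does; the entry layout follows the BLS spec, but the exact matching logic is an assumption:
```
# sketch: rewrite BLS entries so paths start at /boot
import glob
import re

for entry in glob.glob("/boot/loader/entries/*.conf"):
    with open(entry) as f:
        text = f.read()
    # drop any prefix before /boot on the linux and initrd lines
    fixed = re.sub(r"^(linux|initrd) .*/boot/", r"\1 /boot/", text, flags=re.M)
    with open(entry, "w") as f:
        f.write(fixed)
```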
Bring the order of arguments in line with how they are used (and
with how they appear, conceptually, in the pipeline JSON document).
This makes no practical difference, as the two arguments were both
just used for computing the hash.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Each pipeline is now self-contained without references to another.
However, as the final stage in a pipeline is saved to the content
store, we are able to reuse it if one pipeline is the prefix of
another, as described in the previous commit. This makes the
concept of a base redundant.
The ObjectStore must take a directory as argument, never None, so
the conditional assertion for this in Pipeline.run() is ok to
remove.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Don't do this only for the base, but for any prefix of the current
pipeline.
Note that if two pipelines share a prefix, but one is not the prefix
of another, no sharing is possible. Only a proper prefix can be
reused by another pipeline, as only the result of a pipeline's last
stage is saved to the object store (this restriction could be changed in
the future).
Signed-off-by: Tom Gundersen <teg@jklm.no>
Take this as an argument to __init__ in the same way that `base`
is.
This avoids us having to deal with the case of someone setting a
stage before the build, which does not work as the stage id will
be wrong.
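A minimal sketch of the resulting constructor shape (names follow the commit text; the exact signature is an assumption):
```
# sketch: build is fixed at construction, so stage ids stay correct
class Pipeline:
    def __init__(self, base=None, build=None):
        self.base = base
        self.build = build
```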
Signed-off-by: Tom Gundersen <teg@jklm.no>
The testing script is getting too big and not very well organized. In
this commit a new module `integration_tests` is introduced that contains
parts of the original testing script split into multiple files. The
content should be the same; the only difference is that now you can run
the tests by invoking `python3 -m test`.
This stage allows adding or modifying users. For now, this includes all
fields available in passwd, setting auxiliary groups, and setting an ssh
key.
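An illustrative stage entry (option names are assumptions, not the exact schema):
```
# illustrative only: option names are assumptions
stage = {
    "name": "org.osbuild.users",
    "options": {
        "users": {
            "tom": {
                "uid": 1000,
                "gid": 1000,
                "description": "Tom Gundersen",  # GECOS field from passwd
                "home": "/home/tom",
                "shell": "/usr/bin/bash",
                "groups": ["wheel"],             # auxiliary groups
                "key": "ssh-rsa AAAAB3NzaC1yc2E tom",  # ssh public key
            },
        },
    },
}
```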
Based on a patch by Martin Sehnoutka <msehnout@redhat.com>.
Renaming a directory over an existing one is only an error if the
existing one is not empty, in which case ENOTEMPTY is raised.
Tested with:
>>> os.mkdir("foo")
>>> os.mkdir("bar")
>>> os.rename("foo", "bar")
# no error
>>> open("foo/a", "w").write("a")
1
>>> try: os.rename("bar", "foo")
... except OSError as e: e.errno == errno.ENOTEMPTY
...
True
The build pipeline is a sub-pipeline used to generate the build
tree to use rather than the current root directory. This can be
nested arbitrarily deep, but ultimately we will fall back to the
current logic when no build property is found.
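For illustration, the nesting might look roughly like this (the exact shape of the `build` property and the stage names are assumptions):
```
# a sketch of the nesting; shapes and names are assumptions
pipeline = {
    "build": {
        # itself a pipeline, and may carry its own "build" key
        "stages": [
            {"name": "org.osbuild.dnf", "options": {"packages": ["rpm", "systemd"]}},
        ],
    },
    "stages": [
        {"name": "org.osbuild.dnf", "options": {"packages": ["kernel"]}},
    ],
}
```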
Just as the tree after the last stage of a regular pipeline ends
up in the object store, so currently does each build tree (as the
build sub-pipeline really is just a regular pipeline in its own
right). We may want to avoid both these instances of the implicit
storing semantics, and rather make it something the caller opts in
to. However, for now that is left as a future optimization.
Signed-off-by: Tom Gundersen <teg@jklm.no>
On some hosts, systemd-tmpfiles will generate an nsswitch.conf
configuring DNS to be done via systemd-resolved, but this will
require the container to be booted and resolved to be running.
Where a proper fall-back is configured this is not a problem,
but on some hosts it means DNS does not work.
Conversely, the default behavior with no nsswitch.conf at all
works just fine, always using nss-dns.
Let's simply delete the file if it is there, and rely on the
default.
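A minimal sketch of the cleanup step, per the description above:
```
# remove nsswitch.conf if present; the built-in default always uses nss-dns
import contextlib
import os

with contextlib.suppress(FileNotFoundError):
    os.unlink("/etc/nsswitch.conf")
```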
Signed-off-by: Tom Gundersen <teg@jklm.no>
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are content
addressable filesystem trees. Currently we only ever use their
content-hash internally, but the idea for this is basically Lars
Karlitski and Kay Sievers' `treesum()`. We may expose this in the
future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized with the
tree with the given id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
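A hedged usage sketch of the three methods; the exact signatures (and the `run_stages` helper) are assumptions based on the description above:
```
# sketch: the store path, signatures and run_stages() are assumptions
store = ObjectStore("/var/lib/osbuild/store")

if not store.has_tree(tree_id):
    with store.new_tree(tree_id, base_id=base_id) as path:
        run_stages(path)  # committed under tree_id only if this succeeds

with store.get_tree(tree_id) as path:
    pass  # read-only view; a tree_id of None yields an empty directory
```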
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be manually flushed.
Update the Travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than hard-coding this to /, let the caller provide the
directory path to use.
In the past, we needed to give special treatment to /, as it had
to be bind-mounted before being used by nspawn, to work around a
check they had, refusing to use the host root in the container.
We no longer pass the directory directly to nspawn, but rather
mount the subdirs we want ourselves, so that no longer applies.
The callers pass in /, so the behavior is unchanged.
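A minimal sketch of the change (class and parameter names are assumptions):
```
# sketch: the directory is now a parameter instead of a hard-coded "/"
class BuildRoot:
    def __init__(self, root):
        self.root = root

BuildRoot("/")  # existing callers pass "/", so behavior is unchanged
```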
Signed-off-by: Tom Gundersen <teg@jklm.no>
Packit is a service for continuous delivery into Fedora repositories.
It should help us keep the upstream repository on GitHub synchronized
with the downstream repository on src.fedoraproject.org.
Travis uses Ubuntu, which does not ship dnf, so introduce a yum
stage that allows us to test actual generation of trees on Travis.
We use this to generate a tree containing the tools necessary to
create arbitrary Fedora-based build images in the future. We base
this on Fedora 27, as that is the last version that is installable
using yum rather than dnf.
In the future, once we support pipelines with nested build-images,
rather than just using the host OS as the build image, this will
allow us to bootstrap arbitrary pipelines on Travis.
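An illustrative stage entry (option names are assumptions):
```
# illustrative only: option names are assumptions
stage = {
    "name": "org.osbuild.yum",
    "options": {
        "releasever": "27",  # the last Fedora installable with yum
        "packages": ["yum", "e2fsprogs", "qemu-img"],
    },
}
```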
Signed-off-by: Tom Gundersen <teg@jklm.no>
We want the same functionality, but we now implement it ourselves.
In addition to bind-mounting in /usr into the target container
(which is all nspawn does), we also add /bin, /sbin, /lib and
/lib64, if they exist and are not symlinks (presumably into
/usr).
This means we can work on distros that have not implemented the
usr-move, like Ubuntu Bionic (used by Travis).
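A hedged sketch of the mount logic described above (the helper name is an assumption):
```
# sketch: bind-mount host dirs into the container root at `dest`
import os
import subprocess

def mount_usr_compat(dest):
    for d in ("/usr", "/bin", "/sbin", "/lib", "/lib64"):
        # skip symlinks (presumably into /usr) and missing entries
        if os.path.isdir(d) and not os.path.islink(d):
            target = dest + d
            os.makedirs(target, exist_ok=True)
            subprocess.run(["mount", "--bind", d, target], check=True)
```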
Signed-off-by: Tom Gundersen <teg@jklm.no>