Adds a new systemd unit to the image that will be pulled in by default,
run a given command, forward the output to a virtio serial port, and
shut down the machine.
We add a sample that uses this to verify that systemd considers the
machine successfully booted. A simple way to run this test from the
command line is to use
`$ socat UNIX-LISTEN:qemu.sock -`
to listen for either `running` for success or `degraded` or
`maintenance` for failure.
The image should then be booted using something like
`$ qemu-kvm -m 1024 -nographic -monitor none -serial none -chardev socket,path=qemu.sock,id=char0 -device virtio-serial -device virtserialport,chardev=char0,id=test0 -snapshot base.qcow2`
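Instead of socat, a minimal Python listener could do the same; this is a
sketch, assuming the guest writes a single state string to the port:
```
import os
import socket

# qemu connects to this socket as a client, so listen on it.
if os.path.exists("qemu.sock"):
    os.remove("qemu.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind("qemu.sock")
srv.listen(1)
conn, _ = srv.accept()

state = conn.recv(4096).decode().strip()
print("success" if state == "running" else f"failure: {state}")
```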
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives shell access into the image on a given tty. Useful for
testing and debugging, while minimally affecting the image.
Note that this must never be used in production, as it allows root
access without a password.
For instance, this could be used to verify that an image was fully
booted:
```
[teg@teg-x270 osbuild]$ qemu-kvm -m 1024 -nographic -serial mon:stdio -snapshot base.qcow2
sh-5.0# systemctl is-system-running --wait
running
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
We used to let mkfs.ext4 initialize the filesystem for us, but it
turns out that the metadata attributes of the root directory were
not being initialized from the source tree. In particular, this
meant that the SELinux labels were left as unconfined_t, rather
than root_t, which would not allow us to boot in enforcing mode.
An alternative approach might be to fix up the root inode manually,
while still doing the rest using mkfs.ext4, but let's leave that
for the future if it turns out to be worth it.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.
Update the pipelines accordingly.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Default to True, which is what dnf defaults to, but allow it to be
overridden in the pipeline. Whether this option should be used should
be a distro policy, but for now we just want to get images compatible
with the official Fedora ones.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We don't want non-functional configuration in the pipeline, we want to
restrict ourselves to options that change the final image.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when moved to a
separate file.
In __init__.py only import what is meant to be public API.
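For instance (file and module names here are illustrative):
```
# osbuild/pipeline.py
from . import objectstore

def open_store(path):
    # formerly just ObjectStore(path), when everything lived in one file
    return objectstore.ObjectStore(path)
```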
Signed-off-by: Tom Gundersen <teg@jklm.no>
grub2-mkrelpath uses /proc/self/mountinfo to find the source of the file
system it is installed to. This breaks in a container.
Add org.osbuild.fix-bls which goes through /boot/loader/entries and
fixes paths by removing anything before /boot.
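A sketch of what the stage does (details may differ from the actual
implementation):
```
import glob
import re

def fix_bls(root):
    # Rewrite "linux /<anything>/boot/..." to "linux /boot/..." (and
    # likewise for initrd) in every BLS entry.
    for name in glob.glob(f"{root}/boot/loader/entries/*.conf"):
        with open(name) as f:
            entry = f.read()
        entry = re.sub(r"^(linux|initrd) .*(/boot/.+)$", r"\1 \2",
                       entry, flags=re.M)
        with open(name, "w") as f:
            f.write(entry)
```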
Make the order of the arguments line up with how they are used (and
also with how they conceptually appear in the pipeline json document).
This makes no practical difference as the two arguments were both
just used for computing the hash.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Each pipeline is now self-contained without references to another.
However, as the final stage in a pipeline is saved to the content
store, we are able to reuse it if one pipeline is the prefix of
another, as described in the previous commit. This makes the
concept of a base redundant.
The ObjectStore must take a directory as an argument, never None, so
the conditional assertion for this in Pipeline.run() is ok to
remove.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Don't do this only for the base, but for any prefix of the current
pipeline.
Note that if two pipelines share a prefix, but neither is a prefix
of the other, no sharing is possible. Only a proper prefix can be
reused by another pipeline, as only the result of a pipeline's last
stage is saved to the object store (this restriction could be
changed in the future). For example, the tree stored after running
[A, B] can be picked up by [A, B, C], but [A, B] and [A, C] only
share [A], whose tree is never stored.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Take this as an argument to __init__ in the same way that `base`
is.
This avoids us having to deal with the case of someone setting a
stage before the build, which does not work as the stage id will
be wrong.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The testing script is getting too big and not very well organized. In
this commit a new module `integration_tests` is introduced that contains
parts of the original testing script split into multiple files. The
content should be the same, the only difference is that now you can run
the tests by invoking `python3 -m test`.
This stage allows adding or modifying users. For now, this includes all
fields available in passwd, setting auxiliary groups, and setting an ssh
key.
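Illustratively, the options could look like this (the key names here are
hypothetical, not the actual schema):
```
{
    "users": {
        "alice": {
            "uid": 1000,
            "shell": "/bin/bash",
            "groups": ["wheel"],
            "key": "ssh-rsa AAAA... alice@example.com"
        }
    }
}
```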
Based on a patch by Martin Sehnoutka <msehnout@redhat.com>.
Renaming a directory over an existing one is only an error if the
existing one is not empty, in which case ENOTEMPTY is raised.
Tested with:
>>> import errno, os
>>> os.mkdir("foo")
>>> os.mkdir("bar")
>>> os.rename("foo", "bar")  # no error: "bar" is empty
>>> os.mkdir("foo")
>>> open("foo/a", "w").write("a")
1
>>> try: os.rename("bar", "foo")
... except OSError as e: e.errno == errno.ENOTEMPTY
...
True
The build pipeline is a sub-pipeline used to generate the build
tree to use rather than the current root directory. This can be
nested arbitrarily deep, but ultimately we will fall back to the
current logic when no build property is found.
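As a hedged sketch, the pipeline description might carry the
sub-pipeline under a `build` property like this (the stage names are
made up for illustration):
```
{
    "build": {
        "stages": [ { "name": "org.osbuild.example-bootstrap" } ]
    },
    "stages": [ { "name": "org.osbuild.example-stage" } ]
}
```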
Just like the tree after the last stage of a regular pipeline ends
up in the object store, so does currently each build tree (as the
build sub-pipeline really is just a regular pipeline in its own
right). We may want to avoid both these instances of the implicit
storing semantics, and rather make it something the caller opts in
to. However, for now that is left as a future optimization.
Signed-off-by: Tom Gundersen <teg@jklm.no>
On some hosts, systemd-tmpfiles will generate an nsswitch.conf
configuring DNS to be done via systemd-resolved, but this will
require the container to be booted and resolved to be running.
In other cases, a proper fall-back is configured, so this is not
a problem, but on some hosts this means DNS does not work.
Conversely, the default behavior with no nsswitch.conf at all
works just fine, always using nss-dns.
Let's simply delete the file if it is there, and rely on the
default.
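The stage itself then boils down to something like this (a sketch;
`tree` stands for the root of the tree being built):
```
import contextlib
import os

def remove_nsswitch(tree):
    # Fall back to the built-in default, which always uses nss-dns.
    with contextlib.suppress(FileNotFoundError):
        os.remove(os.path.join(tree, "etc/nsswitch.conf"))
```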
Signed-off-by: Tom Gundersen <teg@jklm.no>
This also changes the structure of the object store, though the
basic idea is the same.
The object store contains a directory of objects, which are content
addressable filesystem trees. Currently we only ever use their
content-hash internally, but the idea for this is basically Lars
Karlitski and Kay Sievers' `treesum()`. We may expose this in the
future.
Moreover, it contains a directory of refs, which are symlinks named
by the stage id they correspond to (as before), pointing to an object
generated from that stage-id.
The ObjectStore exposes three methods:
`has_tree()`: This checks if the content store contains the given tree.
If so, we can rely on the tree remaining there.
`get_tree()`: This is meant to be used with a `with` block and yields
the path to a read-only instance of the tree with the given id. If the
tree_id is passed in as None, an empty directory is given instead.
`new_tree()`: This is meant to be used with a `with` block and yields
the path to a directory in which the tree by the given id should be
created. If a base_id is passed in, the tree is initialized with the
tree with the given id. Only when the block is exited successfully
is the tree written to the content store, referenced by the id in
question.
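Taken together, usage looks roughly like this; the sketch is inferred
from the descriptions above, and the import path and helpers are
illustrative:
```
from osbuild.objectstore import ObjectStore  # import path illustrative

store = ObjectStore("/var/cache/osbuild/store")

if store.has_tree(tree_id):
    # Already built; the tree is guaranteed to stay in the store.
    with store.get_tree(tree_id) as path:
        inspect(path)  # hypothetical helper, path is read-only
else:
    # Build on top of base_id; the result is committed to the store
    # under tree_id only if the block exits successfully.
    with store.new_tree(tree_id, base_id=base_id) as path:
        run_stages(path)  # hypothetical helper
```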
Use this in Pipeline.run() to avoid regenerating trees unnecessarily.
In order to trigger a regeneration, the content store must currently
be manually flushed.
Update the Travis test to run the noop pipeline twice, verifying that
the stage is only run the first time.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than hard-coding this to /, let the caller provide the
directory path to use.
In the past, we needed to give special treatment to /, as it had
to be bind-mounted before being used by nspawn, to work around a
check they had, refusing to use the host root in the container.
We no longer pass the directory directly to nspawn, but rather
mount the subdirs we want ourselves, so that no longer applies.
The callers pass in /, so the behavior is unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
packit is a service for continuous delivery into Fedora repositories. It
should help us synchronize the upstream repository on GitHub with the
downstream repository on src.fedoraproject.org.
Travis uses Ubuntu, which does not ship dnf, so introduce a yum
stage that allows us to test actual generation of trees on Travis.
We use this to generate a tree containing the tools necessary to
create arbitrary Fedora-based build images in the future. We base
this on Fedora 27, as that is the last version that is installable
using yum rather than dnf.
In the future, once we support pipelines with nested build-images,
rather than just using the host OS as the build image, this will
allow us to bootstrap arbitrary pipelines on Travis.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We want the same functionality, but we now implement it ourselves.
In addition to bind-mounting /usr into the target container
(which is all nspawn does), we also add /bin, /sbin, /lib and
/lib64, if they exist and are not symlinks (presumably into
/usr).
This means we can work on distros that have not implemented the
usr-move, like Ubuntu Bionic (used by Travis).
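A sketch of the mounting logic (the helper and argument names are ours,
not the actual code):
```
import os
import subprocess

def mount_system_dirs(buildroot, container_root):
    # Bind-mount the usual system directories from the build tree,
    # skipping symlinks (which presumably point into /usr).
    for d in ("usr", "bin", "sbin", "lib", "lib64"):
        src = os.path.join(buildroot, d)
        if os.path.isdir(src) and not os.path.islink(src):
            dst = os.path.join(container_root, d)
            os.makedirs(dst, exist_ok=True)
            subprocess.run(["mount", "--bind", src, dst], check=True)
```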
Signed-off-by: Tom Gundersen <teg@jklm.no>
Call update-ca-certificates if the binary is found, generating SSL
certificates in /etc on Debian-based systems in a similar way to
what is done on Red Hat-based ones.
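That is, roughly (a sketch):
```
import shutil
import subprocess

# Only regenerate the certificates if the tool is present.
if shutil.which("update-ca-certificates"):
    subprocess.run(["update-ca-certificates"], check=True)
```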
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is an RHism that is not available on Debian-based systems.
Do not make it a hard requirement, as pipelines may be able to
function just fine without it.
In a follow-up commit we will also check for the Debian-based
equivalent.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Let's always use the latest available Ubuntu release for our CI. We
are interested in potentially building old images, and in using old
images as build images, but having an old distro as the host is not
necessarily an aim. If we want to test with a greater diversity of
distros (which we do), we should do that in VMs; this should just
be for the simple/quick case.
Also restructure a bit to allow for more (named) tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>