This disables buffering for the standard output stream for python
executables spawned within the build root. This should help with
the ordering of text output in stages: when stdout is buffered,
debug messages via `print` will end up in that buffer. When
executables are run in the stage via `subprocess.run`, their
stdout has its own buffering, which will be flushed at the end
of the run. If stdout was not manually flushed before invoking
the executable, the output of the tool will be emitted before
anything in the buffer. For example:
print("stage")
subprocess.run(["echo", "tool"])
Will lead have the following ordering:
"tool"
"stage"
To avoid this, without having to manually flush the stdout
buffer before every `subprocess.run`, disable buffering for
python binaries run inside the build root.
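As an illustration, setting PYTHONUNBUFFERED in the environment of the
spawned interpreter achieves this; the surrounding code is a sketch,
not the actual BuildRoot implementation:

    import os
    import subprocess

    env = os.environ.copy()
    # PYTHONUNBUFFERED=1 disables stdout/stderr buffering for any
    # python interpreter started with this environment.
    env["PYTHONUNBUFFERED"] = "1"

    subprocess.run(["python3", "-c", "print('stage')"], env=env, check=True)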
Since low-level primitives (os.read) are used to read from the stdout
pipe, manual text decoding was necessary there anyway. The `encoding`
argument meant that we could forgo the manual decoding for the call
to `communicate`. But this meant that text handling was not uniform.
Therefore, remove the `encoding` argument from the `Popen` call and
manually decode all the text.
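A sketch of the now-uniform handling, reading raw bytes and decoding
manually on both paths:

    import os
    import subprocess

    proc = subprocess.Popen(["echo", "tool"], stdout=subprocess.PIPE)

    # Low-level read path: raw bytes, decoded by hand.
    data = os.read(proc.stdout.fileno(), 4096)
    print(data.decode("utf-8"), end="")

    # Without the `encoding` argument, communicate() also returns
    # bytes, which are decoded the very same way.
    out, _ = proc.communicate()
    text = out.decode("utf-8") if out else ""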
There was a bug in mke2fs (fixed in version 1.45.7, with commit
6fa8edd0) where mkfs.ext4 would fail because the default config,
created on the fly, would contain a syntax error. The program
would abort with:
    Syntax error in mke2fs config file (<default>, line #22)
    Unknown code prof 17
To avoid this error, we try to bind mount the config from the build
root.
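A sketch of the workaround; the helper and the config path inside the
build root are illustrative assumptions:

    import os

    def mke2fs_conf_args(root: str) -> list:
        """Return bind-mount arguments for the build root's mke2fs
        config, if it has one (hypothetical helper)."""
        conf = os.path.join(root, "etc/mke2fs.conf")
        if not os.path.exists(conf):
            return []
        # A read-only bind is enough; mkfs.ext4 only parses the file,
        # and with it present it never generates the broken default.
        return ["--ro-bind", conf, "/etc/mke2fs.conf"]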
Commit d028ea5b16 introduced a bug when adding the `store`
argument to `Stage.run`: instead of passing `var=var`, i.e.
passing `var` as a keyword argument, it is now being passed
as a positional one. Since the `path=/run/osbuild` keyword
argument comes before the `var=/var/tmp` argument, `var` is
now being passed as `path` instead of `var`.
Since `var` is always being passed in throughout the entire
codebase, make it a positional argument, and move it before
`path`.
Adapt the tests to pass `var` as positional argument.
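A reduced sketch of the mix-up; the signature is simplified, not the
exact `Stage.run` one:

    # Before: both path and var were keyword arguments with defaults,
    # so a stray positional argument landed in `path`.
    def run_before(args, store, path="/run/osbuild", var="/var/tmp"):
        return path, var

    assert run_before([], "store", "/custom/var") == ("/custom/var", "/var/tmp")

    # After: `var` is positional and ordered before `path`.
    def run_after(args, store, var, path="/run/osbuild"):
        return path, var

    assert run_after([], "store", "/custom/var") == ("/run/osbuild", "/custom/var")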
All runners stopped calling `api.setup_stdio` (commit c40b414), and
thus all output of runners and also modules is now redirected to a
pipe (created via Popen and subprocess.PIPE for stdout).
Text was read from that pipe via `stdout.read(4096)`, which means
that it is now buffered in chunks of 4096 bytes, whereas it was
previously line buffered in the case that osbuild was run in the
terminal and --json was not specified. This is very annoying for
anyone wanting to follow osbuild's output in real-time.
Restore the previous behavior by using `os.read`, which is a thin
wrapper around read(2) that does not block until all the requested
data is available but returns early (short reads). This means new
text will be forwarded as soon as it is available in the pipe.
Increase the read buffer to 32768 while at it, which is what Popen
is using in Python 3.9.
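A sketch of the forwarding loop:

    import os
    import subprocess

    proc = subprocess.Popen(["sh", "-c", "echo one; sleep 1; echo two"],
                            stdout=subprocess.PIPE)
    fd = proc.stdout.fileno()

    while True:
        # os.read returns as soon as some data is available (short
        # read), so "one" is forwarded a full second before "two".
        data = os.read(fd, 32768)
        if not data:
            break
        print(data.decode("utf-8"), end="", flush=True)

    proc.wait()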
Create a new CompletedBuild object that wraps the process and is
very similar to subprocess.CompletedProcess, i.e. it has a process
member but also a shortcut for returncode. Additionally, the output
of the process is not only forwarded to the monitor, but also
captured and then handed to CompletedBuild, so its output member
will actually contain the full build output. To be compatible
with the previously returned CompletedProcess, `stderr` and `stdout`
members exist on CompletedBuild that also return `output`.
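A minimal sketch of such a wrapper, assuming the captured output is
handed in as one string (the constructor shape is illustrative):

    import subprocess

    class CompletedBuild:
        """Wraps the finished process together with its captured output."""

        def __init__(self, process: subprocess.Popen, output: str):
            self.process = process
            self.output = output

        @property
        def returncode(self) -> int:
            # Shortcut mirroring subprocess.CompletedProcess.
            return self.process.returncode

        @property
        def stdout(self) -> str:
            # Compatibility with CompletedProcess consumers.
            return self.output

        @property
        def stderr(self) -> str:
            # Both map to the single captured stream.
            return self.output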
In case bubblewrap itself fails, e.g. because it cannot execute
the runner, it will print an error message to stderr. Currently,
this output is not captured and thus not logged. To fix that, the
`BuildRoot.run` method now takes a monitor object and streams
stdout/stderr to the log via the monitor.
Make sure "/sys/fs/selinux" is read-only, otherwise libselinux and
tools will assume that SELinux is available and active and in turn
use /sys/fs/selinux to e.g. verify the file system labels; this
will then prevent setting unknown labels via `setfiles`.
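For illustration, with bubblewrap this is a single read-only bind
(sketch of the argument list):

    # Keep /sys/fs/selinux read-only so libselinux and tools do not
    # assume SELinux is active and verify labels against the host
    # policy, which would block setfiles from setting unknown labels.
    args = ["--ro-bind", "/sys/fs/selinux", "/sys/fs/selinux"]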
Change the default of libdir to /usr/lib/osbuild and
remove redundant logic. Additionally, change how the
python package is detected.
Instead of checking if libdir is None, check whether
/usr/lib/osbuild is empty, i.e. whether the user has specified
a directory other than the default.
Run the container in a new network namespace, to isolate the host's
network from that of the container. Stages, assemblers and the tools
they execute are not supposed to assume network access is available
and this isolation will make sure of that.
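With bubblewrap this maps to one extra flag (illustrative invocation):

    import subprocess

    # --unshare-net gives the sandbox its own network namespace, so
    # stages and assemblers cannot reach the host's network.
    subprocess.run(["bwrap", "--unshare-net", "--ro-bind", "/", "/",
                    "true"], check=True)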
The `BuildRoot` wants to create temporary directories in two
locations, `rundir` (supplied as `path`) and `vardir`. Make
sure these directories exist before trying to create temporary
directories in them.
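A sketch of the guard:

    import os
    import tempfile

    for directory in ("/run/osbuild", "/var/tmp"):  # rundir and vardir
        # Create the parent first; TemporaryDirectory fails otherwise.
        os.makedirs(directory, exist_ok=True)

    tmp = tempfile.TemporaryDirectory(dir="/run/osbuild")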
Currently, API end points, i.e. sockets for API providers, are
provided to the sandbox via a temporary directory that is created
by `BuildRoot` and later bind-mounted to a well-known path,
/run/osbuild/api, inside the sandbox. API providers are expected
to create their socket in that temporary directory.
Now that `BuildRoot` has a `register_api` method and each API has
an `endpoint` property, the socket of each API provider, no matter
where it is located, will get bind-mounted individually inside
the sandbox at /run/osbuild/api using the `endpoint` identifier.
For backwards compatibility reasons the temporary api directory
will still be created by `BuildRoot`, but it is no longer bind
mounted inside the container. This paves the way to remove that
directory completely once all API providers are converted to not
use that directory anymore.
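A sketch of how the per-endpoint mounts could be assembled;
`socket_path` is a hypothetical accessor for where the provider's
socket actually lives:

    def api_mount_args(apis) -> list:
        """Bind each provider's socket to its well-known place."""
        args = []
        for api in apis:
            # The socket may live anywhere on the host; only the
            # `endpoint` name inside the sandbox is fixed.
            dest = f"/run/osbuild/api/{api.endpoint}"
            args += ["--bind", api.socket_path, dest]
        return args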
Add a new `register_api` method that is meant to be used by clients
to register API end point providers, i.e. instances of `api.BaseAPI`.
When the context of the `BuildRoot` is entered, all providers are
activated, i.e. their context is entered. In case `register_api` is
called with an already active context, the provider will immediately
be activated. In both cases their lifetime is thus bound to the
context of the `BuildRoot`. This also means that they are cleaned up
with the `BuildRoot`, i.e. when its context is exited.
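A sketch of the registration and lifetime handling described above;
the `ExitStack` usage is an assumption about the implementation:

    import contextlib

    class BuildRoot(contextlib.AbstractContextManager):
        def __init__(self):
            self._apis = []
            self._stack = None  # non-None while the context is active

        def register_api(self, api):
            self._apis.append(api)
            if self._stack is not None:
                # Already active: activate the provider immediately.
                self._stack.enter_context(api)

        def __enter__(self):
            self._stack = contextlib.ExitStack()
            for api in self._apis:
                self._stack.enter_context(api)
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Exits all provider contexts registered above.
            self._stack.close()
            self._stack = None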
This swaps the `systemd-nspawn` implementation for `bubblewrap` to
contain sub-processes. It also adjusts the `BuildRoot` implementation
to reduce the number of mounts required to keep locally.
This has the following advantages:
* We know exactly what the build-root looks like. Only the bits and
pieces we select will end up in the build-root. We can let RPM
authors know what environment their post-install scripts need to
run in, and we can reliably test this.
* We no longer need any D-Bus access or access to other PID1
facilities. Bubblewrap allows us to execute from any environment,
including containers and sandboxes.
* Bubblewrap setup is significantly faster than nspawn. This is a
minor point though, since nspawn is still fast enough compared to
the operations we perform in the container.
* Bubblewrap does not require root.
At the same time, we have a bunch of downsides which might increase the
workload in the future:
* We now control the build-root, which also means we have to make sure
it works on all our supported architectures, all quirks are
included, and all required resources are accessible from within the
build-root.
The good thing here is that we have lots of prior art we can
follow, and all the other projects just play whack-a-mole, so we
can join that fun.
The `bubblewrap` project is used by podman and flatpak, is packaged
for all major distributions, and looks like a stable dependency.
When applying labels inside the container that are unknown to the
host, the process needs to have the CAP_MAC_ADMIN capability in order
to do so, otherwise the kernel will prevent setting those unknown
labels. See the previous commit for more details.
Drop the `kwargs` forwarding from buildroot.run() to subprocess.run().
We do not use it other than for `stdin=subprocess.DEVNULL`. Set that
option directly instead.
Doing the kwargs forwarding mixes the argument namespaces and is very
hard to read. It is not clear from the call-site which argument goes to
buildroot.run() and which to subprocess.run().
Lastly, it requires us to manually fetch `check` just to make pylint
happy. Let's just drop this dance and make the API explicit.
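An illustrative before/after, with the signatures reduced to the
relevant parts:

    import subprocess

    # Before: kwargs forwarded wholesale; call-sites mixed the two
    # argument namespaces and `check` had to be fetched manually.
    def run_before(argv, **kwargs):
        check = kwargs.pop("check", True)
        return subprocess.run(argv, check=check, **kwargs)

    # After: explicit API, stdin always /dev/null.
    def run_after(argv, check=True):
        return subprocess.run(argv, stdin=subprocess.DEVNULL, check=check)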
This reverts commit 33844711cd.
There are systems where our runners have no standard python3 location
available. They will fix the environment before invoking any further
utilities. Therefore, we cannot rely on `python3 foo.py` to work in our
ad-hoc containers.
This simply reverts the behavior back to using the shebang.
Prepare the PYTHONPATH of the build-root container to include the path
to the osbuild library. This way, we no longer need any symlinks or
bind-mounts for the individual modules.
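A sketch of the environment preparation; the mount point of the
library inside the container is an assumption:

    import os

    env = os.environ.copy()
    # With the library on PYTHONPATH, modules in the container can
    # `import osbuild` without symlinks or per-module bind-mounts.
    env["PYTHONPATH"] = "/run/osbuild/lib"  # hypothetical location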
Use the python-interpreter explicitly to invoke the runners. This works
around inconsistencies between scripts imported from the host, and the
interpreter taken from a build-root.
With `--keep-unit` we now run with the privileges and resources of the
caller. We no longer require external services to extend our privileges.
This also means we no longer have to configure our unit sandbox
manually, but simply rely on kernel sandboxing to do the right thing.
This adds one more flag to `systemd-nspawn`:
    --keep-unit
        This prevents nspawn from creating its own scope unit and
        instead uses the scope of the caller. Since we want nspawn to
        run with the privileges of the caller, this is fitting for our
        use case.
        Furthermore, this makes nspawn work without a running system
        bus, since it no longer needs to talk to systemd pid1.
        (introduced with systemd-v209)
With this in place, osbuild can be run from within docker containers (or
other containers without systemd as pid1). This still requires some
extra setup, but this can all be done in the container manager.
Two cleanups for the context-managers we use:
* Use `contextlib.AbstractContextManager` if possible. This class
simply provides a default `__enter__` implementation which just
returns `self`. So use it where applicable.
Additionally, it provides an abstract `__exit__` method and thus
allows static checks for the existence of `__exit__` in the derived
class. We might use that everywhere, but this is a separate
decision, so it is not included here.
* Explicitly return `None` from `__exit__`. The python docs state:
    If an exception is supplied, and the method wishes to suppress
    the exception (i.e., prevent it from being propagated), it
    should return a true value. Otherwise, the exception will be
    processed normally upon exit from this method.
That is, unless we want exceptions to be suppressed, we should
never return a `truthy` value. The python contextlib suggests using
`None` as the default return value, so let's just do that.
In particular, the explicit `return exc_type is None` that we use
has no effect at all, since it only returns `True` if no exception
was raised.
This commit cleans this up and just follows what the `contextlib`
module does and returns None everywhere (well, it returns nothing,
which is the same as returning `None` in python). It is unlikely
that we ever want to suppress any exceptions, anyway.
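A minimal sketch of both points; the class is illustrative:

    import contextlib

    class Resource(contextlib.AbstractContextManager):
        # __enter__ is inherited and simply returns self.

        def __exit__(self, exc_type, exc_value, traceback):
            self.cleanup()
            # No return statement: implicitly returns None, so any
            # exception propagates normally.

        def cleanup(self):
            pass

    with Resource() as res:
        pass  # res is the Resource instance itself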
We want to run stages and other scripts inside of the nspawn containers
we use to build pipelines. Since our pipelines are meant to be
self-contained, this should imply that the build-root must have osbuild
installed. However, this has not been the case so far for several
reasons including:
1. OSBuild is not packaged for all the build-roots we want to support
and thus we have the chicken-and-egg problem.
2. During testing and development, we want to support using a local
`libdir`.
3. We already provide an API to the container. Importing scripts from
the outside just makes this API bigger, but does not change the
fact that build-roots are not self-contained. The same is true for
the running kernel, and probably much more.
With all this in mind, our strategy probably still is to eventually
package osbuild for the build-root. This would significantly reduce our
API exposure, points-of-failure, and host-reliance. However, this switch
might still be some weeks out.
With this in mind, though, we can expect the ideal setup to have a full
osbuild available in the build-root. Hence, any script we import so far
should be able to access the entire `libdir`. This commit unifies the
libdir handling by installing the symlinks into `libdir` and providing
a single bind-mount of the module-path into `libdir`.
We can always decide to scratch that in the future when we scratch the
libdir-import from the host-root. Until then, I believe this commit
nicely unifies the way we import the module both in a local checkout as
well as in the container.
Add /boot to be mounted from the build tree into the build root,
because the EFI binaries for grub are stored there and, for
ostree grub2 support, those need to be copied too.
So far, /var was always backed by /var/tmp, but we may want to
control exactly what it is backed by. The default is the same, so
this is not a behavioral change.
When creating the bootmap in bootmap_create (src/zipl/bootmap.c),
the z Initial Program Loader (zipl) wants to create a device node
via misc_temp_dev (bootmap_create:1141) for the device that it
is installing the bootloader to[1]. Currently access to loopback
devices is allowed from within the container (it is used to mount
the image), but only read/write access. On s390x also allow the
creation of device nodes, so zipl can do its work and install
the bootloader stages on the "disk".
[1] zipl source at commit dcce14923c3e9615df53773d1d8a3a22cbb23b96
This might (hopefully) fix a race in destroying the asyncio.EventLoop
that's used in all API classes, which leads to warnings about unhandled
exceptions on CI.
This also puts their creation closer to where the client-side sockets
are created.
The workaround of manually linking /lib64 -> /usr/lib64 inside the
container that is needed on s390 is also required on ppc64, because
there the dynamic linker is set to /lib64/ld64.so.2 and the /lib64
link is not created.
Work around a combination of systemd not creating the link from
/lib64 -> /usr/lib64 (see systemd issue #14311) and the dynamic
linker being set to /lib/ld64.so.1 (-> /lib64/ld64.so.1).
Therefore, we manually create the link before calling nspawn.
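A sketch of the manual link creation; `root` stands for the
build-root path:

    import os

    def ensure_lib64_link(root: str) -> None:
        link = os.path.join(root, "lib64")
        if not os.path.lexists(link):
            # Relative target, same as systemd would have created.
            os.symlink("usr/lib64", link)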
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.
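For illustration, a build object then looks roughly like this (runner
name and pipeline body are placeholders):

    build = {
        "runner": "org.osbuild.rhel82",  # a program in the `runners` subdirectory
        "pipeline": {
            # the build pipeline, exactly as before
            "stages": [],
        },
    }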
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
`osbuild-run` sets up the build root so that programs can be run
correctly in it. It should be run for all programs, not just stages and
assemblers (even though they're the only consumers right now).
Also, conceptually, `osbuild-run` belongs to the build root. We'll
change its implementation based on the build root in a future commit.
The buildroot already sets up `/run/osbuild/api`. It makes sense to have
it manage libdir as well.
A nice side benefit of this is a simplification of the Stage and
Assembler classes, which grew quite complex and contained duplicate
code.
In BuildRoot, a new mount, /var, pointing to a temporary directory
in the host's /var/tmp is created. This enables us to have temporary
storage inside the container which is not hosted on tmpfs. Thanks to
that, we can move larger files out of the part of the filesystem that
is hosted on tmpfs and save memory on machines with low memory
capacity.
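A sketch of the mechanism; the nspawn bind argument is illustrative:

    import tempfile

    # Backing /var with a directory under /var/tmp keeps large files
    # off tmpfs and therefore out of memory.
    var = tempfile.TemporaryDirectory(prefix="osbuild-var-", dir="/var/tmp")
    bind_arg = f"--bind={var.name}:/var"  # handed to systemd-nspawn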
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when moved to a
separate file.
In __init__.py only import what is meant to be public API.
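For example, with placeholder module names:

    # osbuild/pipeline.py (placeholder module): import the sibling
    # module itself, not the class.
    from . import buildroot

    def build():
        # What used to be plain `BuildRoot` is now `buildroot.BuildRoot`.
        with buildroot.BuildRoot() as root:
            pass

    # osbuild/__init__.py re-exports only the public API, e.g.:
    # from .pipeline import build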
Signed-off-by: Tom Gundersen <teg@jklm.no>