Until now, /var was always backed by /var/tmp, but we may want to
control exactly what backs it. The default stays the same, so
this is not a behavioral change.
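A minimal sketch of what such a knob could look like, assuming a `var`
parameter on BuildRoot (the name and default are illustrative):

```python
import tempfile

class BuildRoot:
    # `var` selects the host directory that backs the container's /var;
    # the default keeps the previous behavior of using /var/tmp.
    def __init__(self, var="/var/tmp"):
        self.var = tempfile.mkdtemp(prefix="osbuild-buildroot-var-", dir=var)
```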
The z Initial Program Loader (zipl), when creating the bootmap in
bootmap_create (src/zipl/bootmap.c), wants to create a device node
via misc_temp_dev (bootmap_create:1141) for the device that it
is installing the bootloader to[1]. Currently, access to loopback
devices is allowed from within the container (they are used to mount
the image), but only for reading and writing. On s390x, also allow
the creation of device nodes, so zipl can do its work and install
the bootloader stages on the "disk".
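A sketch of how the extra permission could be granted, assuming the
container is entered via systemd-nspawn and device access is controlled
through the `DeviceAllow` property (both assumptions; the point is only
the added "m" for mknod):

```python
import platform

# Base access to loopback devices from within the container: read/write only.
device_allow = "block-loop rw"

if platform.machine() == "s390x":
    # zipl needs to create a temporary device node for the target disk,
    # so additionally grant mknod ("m") on s390x.
    device_allow = "block-loop rwm"

nspawn_args = [
    "systemd-nspawn",
    f"--property=DeviceAllow={device_allow}",
]
```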
[1] zipl source at commit dcce14923c3e9615df53773d1d8a3a22cbb23b96
This might (hopefully) fix a race when destroying the asyncio event loop
that's used in all API classes, which leads to warnings about unhandled
exceptions on CI.
This also puts the creation of the event loops closer to where the
client-side sockets are created.
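A minimal sketch of the intended pattern, with assumed class and attribute
names: the loop is created next to the client-side socket and closed
explicitly rather than left to its destructor:

```python
import asyncio
import socket

class API:
    def __init__(self, socket_address):
        # Create the event loop right next to the client-side socket.
        self.event_loop = asyncio.new_event_loop()
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        self.sock.bind(socket_address)

    def close(self):
        self.sock.close()
        # Close explicitly instead of relying on the destructor, so teardown
        # cannot race with interpreter shutdown.
        self.event_loop.close()
```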
The workaround of manually linking /lib64 -> /usr/lib64 inside the
container that is needed on s390 is also required on ppc64, because
there the dynamic linker is set to /lib64/ld64.so.2 and the /lib64
link is not created.
Work around a combination of systemd not creating the link from
/lib64 -> /usr/lib64 (see systemd issue #14311) and the dynamic
linker being set to /lib/ld64.so.1 (a symlink to /lib64/ld64.so.1).
Therefore, we manually create the link before calling nspawn.
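A minimal sketch of the workaround, assuming the build-root path is
available as `root` (the helper name is illustrative):

```python
import os

def ensure_lib64_link(root):
    # Create /lib64 -> usr/lib64 inside the build root before invoking
    # nspawn, so the dynamic linker under /lib64 can be resolved.
    lib64 = os.path.join(root, "lib64")
    if not os.path.lexists(lib64):
        os.symlink("usr/lib64", lib64)
```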
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.
This patch changes the `build` key in a pipeline so that it is no longer
a pipeline itself, but an object with `runner` and `pipeline` keys.
`pipeline` is the build pipeline, as before. `runner` is the name of the
runner to use. Runners are programs in the `runners` subdirectory.
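For illustration, a build object might look like this; the runner name
(`org.osbuild.fedora30`) and the dnf stage are assumptions, not taken
from this patch:

```python
pipeline = {
    "build": {
        # name of a program in the `runners` subdirectory
        "runner": "org.osbuild.fedora30",
        # the build pipeline itself, same shape as any other pipeline
        "pipeline": {
            "stages": [
                {
                    "name": "org.osbuild.dnf",
                    "options": {"packages": ["@buildsys-build"]},
                },
            ],
        },
    },
    "stages": [],
}
```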
Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.
Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.
Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
`osbuild-run` sets up the build root so that programs can be run
correctly in it. It should be run for all programs, not just stages and
assemblers (even though they're the only consumers right now).
Also, conceptually, `osbuild-run` belongs to the build root. We'll make
its implementation depend on the build root in a future commit.
The buildroot already sets up `/run/osbuild/api`. It makes sense to have
it manage libdir as well.
A nice side benefit of this is a simplification of the Stage and
Assembler classes, which had grown quite complex and contained duplicate
code.
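A rough sketch of the intent, with assumed names: the build root owns both
bind mounts, so Stage and Assembler no longer set them up themselves. The
in-container target `/run/osbuild/lib` and the method signature are
assumptions:

```python
import subprocess

class BuildRoot:
    def __init__(self, root, api, libdir):
        self.root = root        # the build-root filesystem tree
        self.api = api          # host directory backing /run/osbuild/api
        self.libdir = libdir    # osbuild's library directory on the host

    def run(self, argv):
        # The build root manages both mounts, instead of Stage and
        # Assembler each passing them in separately.
        return subprocess.run([
            "systemd-nspawn",
            f"--directory={self.root}",
            f"--bind={self.api}:/run/osbuild/api",
            f"--bind-ro={self.libdir}:/run/osbuild/lib",
            "--", *argv,
        ], check=False)
```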
In BuildRoot, a new /var mount pointing to a temporary directory in the
host's /var/tmp is created. This gives us temporary storage inside the
container that is not hosted on tmpfs. Thanks to that, we can move
larger files out of the tmpfs-backed part of the filesystem and save
memory on machines with low memory capacity.
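A minimal sketch of the idea, assuming the container is entered via
systemd-nspawn (paths and names are illustrative):

```python
import subprocess
import tempfile

# Back the container's /var with a directory under the host's /var/tmp,
# which is disk-backed, instead of letting it live on tmpfs.
with tempfile.TemporaryDirectory(dir="/var/tmp", prefix="osbuild-var-") as var:
    subprocess.run([
        "systemd-nspawn",
        "--directory=/path/to/buildroot",   # illustrative build-root path
        f"--bind={var}:/var",
        "--", "/usr/bin/true",
    ], check=True)
```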
Import modules between files using the syntax `from . import foobar`,
renaming what used to be `FooBar` to `foobar.FooBar` when it is moved to
a separate file.
In __init__.py, only import what is meant to be the public API.
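For illustration, with made-up module and class names:

```python
# osbuild/pipeline.py: import the sibling module as a module
from . import buildroot

def build_stage(stage):
    # what used to be a bare `BuildRoot` is now referenced via the module
    root = buildroot.BuildRoot()
    root.run(stage)


# osbuild/__init__.py: only import what is meant to be the public API
from .pipeline import Pipeline, load

__all__ = ["Pipeline", "load"]
```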
Signed-off-by: Tom Gundersen <teg@jklm.no>