This really only makes sense if we are running systemd as PID1
inside the container, but we are not booting a system, just using
it as a glorified chroot.
This means entering the namespaces from the outside will be a bit
more cumbersome, but that was not used much and was never reliable
to begin with.
Signed-off-by: Tom Gundersen <teg@jklm.no>
loop.py is a simple wrapper around the kernel loop API. remoteloop.py
uses this to create a server/client pair that communicates over an
AF_UNIX/SOCK_DGRAM socket to allow the server to create loop devices
for the client.
The client passes a fd that should be bound to the resulting loop
device, and a dir-fd where the loop device node should be created.
The server returns the name of the device node to the client.
The idea is that the client is run from within a container without
access to devtmpfs (and hence /dev/loop-control), and the server
runs on the host. The client would typically pass its (fake) /dev
as the output directory.
For the client this will be similar to `losetup -f foo.img --show`.
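The heart of the protocol is passing file descriptors over the AF_UNIX/SOCK_DGRAM socket. A minimal sketch of that mechanism (not the real remoteloop protocol, and using a socketpair instead of a named socket; requires Python 3.9+ for send_fds/recv_fds):

```python
import os
import socket

# server/client pair over AF_UNIX/SOCK_DGRAM
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

# client side: open the "image" (a pipe stands in for it here) and
# send its fd along with a request, via SCM_RIGHTS
r, w = os.pipe()
os.write(w, b"image-data")
os.close(w)
socket.send_fds(client, [b"bind-loop"], [r])

# server side: receive the request and the duplicated fd
msg, fds, _, _ = socket.recv_fds(server, 1024, 1)
data = os.read(fds[0], 16)
print(msg, data)  # → b'bind-loop' b'image-data'
```

In remoteloop the client sends two such fds (the image fd and a dir-fd for the device node) and the server replies with the device name; this sketch only shows the fd-passing primitive.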
[@larskarlitski: pylint: ignore the new LoopInfo class, because it
only has dynamic attributes. Also disable attribute-defined-outside-init,
which (among other problems) is not ignored for that class.]
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add a directory to each BuildRoot potentially containing a set of
sockets. Also add a helper to create a named bound socket in a given
BuildRoot.
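A sketch of what such a helper might look like (the actual BuildRoot method name and signature are assumptions): binding an AF_UNIX socket to a name inside the sockets directory creates the socket node on disk, where a process inside the container can later connect to it.

```python
import os
import socket
import tempfile

def bind_socket(sockets_dir, name):
    # Hypothetical helper: create a named, bound AF_UNIX/SOCK_DGRAM
    # socket inside the given sockets directory.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(os.path.join(sockets_dir, name))
    return sock

with tempfile.TemporaryDirectory() as d:
    s = bind_socket(d, "remoteloop")
    created = os.path.exists(os.path.join(d, "remoteloop"))
    s.close()
```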
Signed-off-by: Tom Gundersen <teg@jklm.no>
Compute a hash based on the content of a stage, together with the
hash of its parent stage.
The output of a pipeline is saved by the id of the last stage.
This is largely equivalent to the current logic, where it is the
pipeline that contains the id, but this means that the ids are
independent of how pipelines are split: the only thing that matters
is the sequence of stages, not whether they are in one or several
interdependent pipelines.
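The chaining described above can be sketched as follows (hash algorithm, canonicalization, and helper name are assumptions, not the actual implementation):

```python
import hashlib
import json

def stage_id(stage, parent_id):
    # Hypothetical helper: a stage's id is the hash of its canonical
    # JSON content combined with the id of its parent stage.
    m = hashlib.sha256()
    m.update((parent_id or "").encode())
    m.update(json.dumps(stage, sort_keys=True).encode())
    return m.hexdigest()

# The pipeline output is saved under the id of the last stage:
stages = [{"name": "org.osbuild.dnf"}, {"name": "org.osbuild.systemd"}]
last = None
for stage in stages:
    last = stage_id(stage, last)
```

Because each id depends only on the stage content and its parent's id, splitting the same stage sequence across several interdependent pipelines yields the same final id.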
Signed-off-by: Tom Gundersen <teg@jklm.no>
This removes the possibility of passing in arbitrary input data. We
now restrict ourselves to explicitly specified files/directories or
a base tree given by its pipeline id.
This drops the tar/tree stages/assemblers, as the tree/untree ones
are implicit in osbuild, and if we wish to also support compressed
trees, then we should add that to osbuild core as an option.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Whenever an assembler is not specified, the output tree is instead
saved to the content store, in a directory named after the pipeline
id.
This should render the io.weldr.tree assembler redundant.
In order to build the samples as before, specify the content store
as the input directory to build any pipeline that uses the
io.weldr.untree stage.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This uniquely identifies a pipeline based on its content. Pipelines
are considered equal modulo whitespace and the order of object
elements.
The intention is that two runs of a pipeline with the same id
produce functionally equivalent output. It is up to the writers
of stages and pipelines to ensure this property holds.
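Equality modulo whitespace and object-element order can be had by hashing a canonical serialization. A sketch under assumed details (hash algorithm and canonical form are not specified by the commit):

```python
import hashlib
import json

def pipeline_id(pipeline):
    # Canonical JSON: sorted keys, fixed separators, so whitespace
    # and object-key order do not affect the id.
    canonical = json.dumps(pipeline, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

a = json.loads('{"stages": [{"name": "x"}], "version": 1}')
b = json.loads('{ "version": 1,\n  "stages": [ {"name": "x"} ] }')
assert pipeline_id(a) == pipeline_id(b)
```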
Signed-off-by: Tom Gundersen <teg@jklm.no>
We wanted to force an empty output dir to avoid assembly stages using
previous output when creating their new one, and hence creating
dependencies between osbuild runs. We may still do that, but for now
let's remove the restriction as it seems rather arbitrary to protect
people from themselves to this extent.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It's not really useful because it's at the wrong place, after a stage
has torn down all mounts. It also makes the code more complex for too
little benefit.
Don't print systemd-nspawn's messages about starting and stopping
containers.
Also suppress an ldconfig warning and only show output from
systemd-sysusers when it fails.
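"Only show output when it fails" can be sketched as capturing everything and replaying it only on a non-zero exit status (a sketch, not the actual osbuild code):

```python
import subprocess
import sys

def run_quiet(cmd):
    # Capture stdout and stderr; replay them only if the command fails.
    r = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    if r.returncode != 0:
        sys.stdout.write(r.stdout.decode())
    return r.returncode

ok = run_quiet([sys.executable, "-c", "print('noise')"])           # silent
bad = run_quiet([sys.executable, "-c", "import sys; sys.exit(1)"])  # replayed
```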
All stages must be able to handle an input_dir argument, as we now
either pass it to all stages or to none for a given run. Simply set
it to 'None' if it is not provided.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The new arguments are passed to the first and last stage,
respectively, and are both directories. --input is read-only and can
be used to initialize the first stage. --output is read/write and is
where the final stage should place the produced image.
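A sketch of the resulting interface (only the two option names come from this commit; the rest of the CLI is assumed):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input", metavar="DIR",
                    help="read-only directory used to initialize the first stage")
parser.add_argument("--output", metavar="DIR",
                    help="writable directory where the final stage places the image")

args = parser.parse_args(["--input", "seed/", "--output", "image/"])
```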
Signed-off-by: Tom Gundersen <teg@jklm.no>
Stages should be as stateless as possible. Don't provide an easy way out
of that.
Only the dnf stage used this to save the dnf cache. That's only useful
during development and can be solved by pointing to a local repo mirror.
The output makes it hard to see which stage is currently processed and
how to enter the build container. Also, it doesn't include all relevant
logs.
Instead, stream log output into /tmp/output in the build container. Keep
outputting it to stdout, so that osbuild can collect it in the future.
Introduce `run-stage` script, which sets up the build environment before
running the stage. Run `ldconfig`, `systemd-sysusers`, and
`systemd-tmpfiles` in it.
This is useful for debugging, and serves as a very lightweight ssh
session, but one that only inspects the environment without hooking
into anything.
Use systemd-nspawn's "volatile" mode, which creates a tmpfs for the root
directory. This ensures that we're not accidentally using configuration
from the host.
The only remaining hole is `/etc/pki`.
Anaconda cannot run without its configuration in `/etc`. Recreate the
defaults.
Rather than using unshare, we use nspawn as it gives us more isolation
for free. We are not sure whether we will keep this, but for the time
being let's see how well it works for us.
We have to work around the fact that nspawn refuses to spawn with the
current root as its directory, even in read-only mode: we bind-mount
it first and use the bind mount to trick nspawn.
Rather than treating the dnf-cache specially, give each stage its
own state directory that they can reuse. This should obviously be
used with care by the stages in order to make the builds
reproducible.