This removes the possibility of passing in arbitrary input data. We
now restrict ourselves to explicitly specified files/directories or
a base tree given by its pipeline id.
This drops the tar/tree stages and assemblers, as tree/untree handling
is implicit in osbuild. If we also want to support compressed trees,
that should be added to osbuild core as an option.
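Purely for illustration, a restricted description might then look
something like this; the key names are made up and do not reflect the
actual osbuild format:

    # Hypothetical sketch only; key names are illustrative, not the real schema.
    pipeline = {
        "base": "8e0d3a9c",          # id of a previously built pipeline tree
        "files": ["example.repo"],   # explicitly specified input files
        "stages": [],
    }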
Signed-off-by: Tom Gundersen <teg@jklm.no>
Whenever an assembler is not specified, the output tree is instead
saved to the content store, in a directory named after the pipeline
id.
This should render the io.weldr.tree assembler redundant.
To build the samples as before, pass the content store as the input
directory to any pipeline that uses the io.weldr.untree stage.
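Roughly, with an illustrative content-store path and pipeline id (both
are assumptions, not taken from the code), the tree of an assembler-less
run would end up here:

    import os

    content_store = "/var/lib/osbuild/store"   # illustrative location
    pipeline_id = "8e0d3a9c"                   # illustrative id
    output_tree = os.path.join(content_store, pipeline_id)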
Signed-off-by: Tom Gundersen <teg@jklm.no>
This uniquely identifies a pipeline based on its content. Pipelines
are considered equal modulo whitespace and the order of object
elements.
The intention is that two runs of pipelines with the same id produce
functionally equivalent output. It is up to the writers of stages and
pipelines to ensure this property holds.
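A minimal sketch of how such an id could be computed, assuming the
pipeline description is JSON-like; the function name and hash choice
are assumptions, not necessarily what osbuild does:

    import hashlib
    import json

    def pipeline_id(description):
        # Canonicalize: sorted keys and no insignificant whitespace, so that
        # two descriptions differing only in formatting or key order hash the same.
        canonical = json.dumps(description, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()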
Signed-off-by: Tom Gundersen <teg@jklm.no>
We wanted to force an empty output dir to keep assembly stages from
reusing previous output when creating new output, and hence from
creating dependencies between osbuild runs. We may still do that, but
for now let's remove the restriction, as it seems rather arbitrary to
protect people from themselves to this extent.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It's not really useful because it happens at the wrong place, after a
stage has already torn down all its mounts. It also makes the code more
complex for too little benefit.
Don't print systemd-nspawn's messages about starting and stopping
containers.
Also suppress an ldconfig warning, and only show output from
systemd-sysusers when it fails.
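A sketch of the "only show output on failure" pattern, assuming the
tool is invoked via subprocess; the actual call sites in osbuild may
differ:

    import subprocess
    import sys

    def run_quietly(*cmd):
        # Capture everything and only show it if the command failed.
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            sys.stderr.write(result.stdout + result.stderr)
            result.check_returncode()

    run_quietly("systemd-sysusers")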
All stages must be able to handle an input_dir argument, as we now
either pass it to all stages or to none for a given run. Simply set it
to None if it is not provided.
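In stage terms this is just an optional argument; a sketch, assuming
the stage sees its arguments as a dict named args:

    args = {}                                # illustrative stand-in for the stage arguments
    input_dir = args.get("input_dir", None)  # None when not provided for this run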
Signed-off-by: Tom Gundersen <teg@jklm.no>
The new arguments are passed to the first and last stage, respectively,
and both are directories. --input is read-only and can be used to
initialize the first stage. --output is read-write and is where the
final stage should place the produced image.
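A minimal sketch of the two options; the option names come from this
commit, the parser wiring is illustrative:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--input", metavar="DIR",
                        help="read-only directory handed to the first stage")
    parser.add_argument("--output", metavar="DIR",
                        help="read-write directory where the last stage places the image")
    args = parser.parse_args()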
Signed-off-by: Tom Gundersen <teg@jklm.no>
Stages should be as stateless as possible. Don't provide an easy way out
of that.
Only the dnf stage made use of this, to save the dnf cache. That's only
useful during development and can be solved by pointing to a local repo
mirror.
The output makes it hard to see which stage is currently being
processed and how to enter the build container. It also doesn't include
all relevant logs.
Instead, stream log output into /tmp/output in the build container. Keep
outputting it to stdout, so that osbuild can collect it in the future.
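Conceptually this is a tee; a sketch, with the log path taken from this
commit and everything else illustrative:

    import sys

    def log(line, path="/tmp/output"):
        # Append to the log inside the build container and echo to stdout
        # so osbuild can still collect the output later.
        with open(path, "a") as f:
            f.write(line)
        sys.stdout.write(line)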
Introduce `run-stage` script, which sets up the build environment before
running the stage. Run `ldconfig`, `systemd-sysusers`, and
`systemd-tmpfiles` in it.
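A sketch of what such a wrapper could look like; the three setup
commands come from this commit, while the `--create` flag and the way
the stage itself is invoked are assumptions:

    import subprocess
    import sys

    # Prepare the build environment, then hand control to the stage itself.
    for cmd in (["ldconfig"], ["systemd-sysusers"], ["systemd-tmpfiles", "--create"]):
        subprocess.run(cmd, check=True)

    subprocess.run(sys.argv[1:], check=True)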
This is useful for debugging, and would serve as a very lightweight ssh
session, but one that only inspects the environment without hooking
into anything.
Use systemd-nspawn's "volatile" mode, which creates a tmpfs for the root
directory. This ensures that we're not accidentally using configuration
from the host.
The only remaining hole is `/etc/pki`.
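A hedged sketch of the relevant invocation; only `--volatile` is from
this commit, the remaining arguments are illustrative:

    import subprocess

    subprocess.run([
        "systemd-nspawn",
        "--volatile=yes",             # tmpfs root: host /etc and /var stay hidden
        "--directory", "/buildroot",  # illustrative path
        "/run-stage",                 # illustrative command
    ], check=True)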
Anaconda cannot run without its configuration in `/etc`. Recreate the
defaults.
Rather than using unshare, we use nspawn, as it gives us more isolation
for free. We are not sure we will stick with this in the end, but for
the time being let's see how well it works for us.
We have to work around the fact that nspawn refuses to spawn with the
current root as its directory, even in read-only mode: we bind-mount
the root first and point nspawn at the bind mount to trick it.
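A sketch of the work-around; the bind-mount target path is illustrative:

    import os
    import subprocess

    # nspawn refuses to use "/" directly, so bind-mount the current root
    # elsewhere and hand that path to nspawn instead.
    root = "/run/osbuild/root"
    os.makedirs(root, exist_ok=True)
    subprocess.run(["mount", "--bind", "/", root], check=True)
    subprocess.run(["systemd-nspawn", "--directory", root, "/run-stage"], check=True)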
Rather than treating the dnf cache specially, give each stage its own
state directory that it can reuse. This should obviously be used with
care by the stages in order to keep the builds reproducible.
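Sketch of the idea, with all paths and names illustrative:

    import os

    def state_dir(stage_name, root="/var/cache/osbuild"):
        # Each stage gets its own reusable state directory, e.g. for caches
        # such as the dnf cache; reuse must not break reproducibility.
        path = os.path.join(root, stage_name)
        os.makedirs(path, exist_ok=True)
        return path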
Some stages will be chrooting into the target to run things there,
and they will require the standard API VFS to be mounted. Some
tools do that themselves, others do not. In all cases, we would like
to discourage running things in the target tree.
For these reasons, do not pre-mount the API VFS, but require the
stages that need it to do the mounting themselves. This is a partial
revert of f6023ed78b.
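Stages that do chroot would then mount the API filesystems themselves,
roughly like this; the filesystem types and the helper name are not
from this commit:

    import subprocess

    def mount_api_vfs(tree):
        # Mount the standard API filesystems inside the target tree before
        # chrooting into it; the stage is responsible for unmounting again.
        for fstype, path in (("proc", "proc"), ("sysfs", "sys"), ("devtmpfs", "dev")):
            subprocess.run(["mount", "-t", fstype, fstype, f"{tree}/{path}"], check=True)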