Remove the constraint that either checkpoints or the output directory
has to be supplied on the command line. Now that we have `--export`
and on-demand building, it is perfectly fine to supply neither an
output directory nor checkpoints and implicitly no `--export`, which
corresponds to a download-only mode.
Use the new Manifest.depsolve function to only build the pipelines that
were explicitly requested and their dependencies, taking into account
what is already present in the store.
Since not all pipelines will be built now, there won't be a result
entry for every pipeline; thus the format version 2 result formatting
was changed to not require a pipeline to be present in the result
set.
New function that takes a list of pipelines and returns the list of
pipelines that need to be built, i.e. the pipelines and all their
dependencies that are not already present in the store.
Add corresponding test.
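A minimal sketch of what such a function could look like (the
`store.contains` helper and the pipeline attributes are assumptions
for illustration, not the actual osbuild API):

    def depsolve(store, pipelines):
        """Return the pipelines that need to be built, i.e. the
        requested pipelines plus all of their dependencies that
        are not already present in the store."""
        needed = []

        def visit(pipeline):
            if pipeline is None or pipeline in needed:
                return
            if store.contains(pipeline.id):
                return  # output already in the store, deps included
            visit(pipeline.build)  # visit the build pipeline first
            needed.append(pipeline)

        for pipeline in pipelines:
            visit(pipeline)

        return needed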
When formatting the result, switch the default to success,
but then properly propagate the status of the build pipeline.
This should ensure that if there are no pipeline results but
a failed build pipeline, the overall status will be 'failed'.
On the other hand, if no pipelines were built, including tree
or build, the overall status will be 'success'.
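A minimal sketch of that logic (the result layout used here is
hypothetical):

    def overall_success(result):
        # Default to success: if no pipelines were built at all,
        # the overall status is 'success'
        success = True

        # Propagate the status of the build pipeline: a failed build
        # must fail the result even without other pipeline entries
        build = result.get("build")
        if build is not None:
            success = build.get("success", True)

        # Fold in whatever per-pipeline results do exist
        for res in result.get("pipelines", []):
            success = success and res.get("success", True)

        return success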
When building a version 1 manifest, the assembler would always be
exported, even when not requested via the `--export` command line
option. This was done for backwards compatibility, so as not to
break tools relying on that behavior. The problem is that support
for this uses a completely different code path and the behavior
might by now also be confusing. Thus remove the implicit export and
really only ever export what was explicitly requested by the caller.
Instead of building everything and then failing if an export is
invalid and could not be found, resolve the exports early and
also ensure that if `--export` is given, `--output-directory`
is present too.
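A minimal sketch of the early check (argparse usage is illustrative;
only the flag names come from the text):

    import argparse

    def parse_arguments(argv):
        parser = argparse.ArgumentParser()
        parser.add_argument("--export", metavar="ID", action="append",
                            default=[], help="pipeline to export")
        parser.add_argument("--output-directory", metavar="DIR")
        args = parser.parse_args(argv)

        # Resolve problems before anything gets built: exporting
        # needs a place to export to
        if args.export and not args.output_directory:
            parser.error("--export requires --output-directory")

        return args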
Instead of checkpointing the tree and then accessing the generated
tree inside the store via the `map_object` function, we now just
export the tree and use that. This better hides the internals of
the store and also allows us to activate on-demand building, which
does not rely on checkpoints being implicitly built the way exports
are.
Require all the tests that compile a manifest to either specify
checkpoints or exports. Convert all the tests that were relying
on implicit exports with v1 manifests to use explicit exports.
The current implementation of `rmtree` will try to fix permissions
when it encounters permission errors during its operation. This is
done by opening the target via `os.open` and then adjusting the
immutable flag and the permission bits. This is a problem when the
target is a broken symlink since open will fail with `ENOENT`. A
simple reproducer of this scenario is:
$ mkdir subdir
$ ln -s foo subdir/broken
$ chmod a-w subdir/
$ python3 -c 'import osbuild; osbuild.util.rmrf.rmtree("subdir")'
Since subdir is not writable, removing `subdir/broken` will fail
with `EPERM` and the `on_error` callback will try to fix it by
invoking `fixperms` on `subdir/broken`, which will fail in `open`
since the target does not exist (broken symlink).
This is fixed by using `O_NOFOLLOW` in `open` so we will never open
the target. Instead `open` will fail with `ELOOP`; we ignore that
error, and in fact we now ignore all errors from `open`, since it
does not matter: if fixing the permissions didn't work, `unlink`
will just fail (again) with `EPERM`, and for symlinks it actually
doesn't matter since "on Linux, the permissions of an ordinary
symbolic link are not used in any operations"; see symlink(7).
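A minimal sketch of the fixed helper (the immutable-flag ioctl is
elided; the real `osbuild.util.rmrf` code differs in detail):

    import os

    def fixperms(path):
        """Best-effort permission fix so a later unlink can succeed."""
        try:
            # O_NOFOLLOW: never open the target of a symlink; for a
            # broken symlink this fails with ELOOP instead of ENOENT
            fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
        except OSError:
            # Ignore all errors: if the permissions could not be
            # fixed, unlink will just fail again with EPERM, and
            # symlink permissions are never consulted anyway
            return
        try:
            os.fchmod(fd, 0o700)  # immutable-flag handling elided
        except OSError:
            pass
        finally:
            os.close(fd)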
Since we bind `/proc` inside the container, we leak certain
information that comes with it. One of these is the kernel command
line. None of the decisions made by software running inside the
container should depend on the kernel command line of the host, so
overwrite the kernel command line by creating a temporary directory
and mapping it inside the build root. For now we default to a simple
`root=/dev/osbuild` fake kernel command line.
Add a simple check for it as well.
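A minimal sketch of the idea (paths and the bind-mount plumbing are
illustrative, not the exact buildroot code):

    import os
    import subprocess
    import tempfile

    def fake_kernel_cmdline(buildroot, cmdline="root=/dev/osbuild"):
        # Write the fake command line into a temporary directory ...
        tmp = tempfile.mkdtemp(prefix="osbuild-cmdline-")
        path = os.path.join(tmp, "cmdline")
        with open(path, "w", encoding="utf8") as f:
            f.write(cmdline + "\n")

        # ... and bind-mount it over the /proc/cmdline leaked in
        # from the host
        target = os.path.join(buildroot, "proc", "cmdline")
        subprocess.run(["mount", "--bind", path, target], check=True)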
Instead of having a temporary directory on the host for `/var`
inside the container, create a generic temporary directory that can
be used for other things and create `var` inside it.
The idea behind the stage is to provision `var` of the stateroot,
i.e. the `var` that is shared amongst all deployments for a given
os (indicated by `osname`, e.g. `fedora`, `centos`, ...).
For `systemd-tmpfiles` to infer the correct paths, it needs to be
run on the deployment. The `var` of the latter needs to be bind-
mounted to the `var` of the stateroot, because it is shared. This
was always the intention but not what the code did. Fix this by
getting the `var` of the stateroot and binding it to the `var` of
the deployment.
NB: In reality this never mattered since systemd-tmpfiles is also
run during system startup.
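A minimal sketch of the fixed flow (the stateroot layout follows
ostree conventions; the actual stage code differs):

    import os
    import subprocess

    def provision_var(tree, osname, deployment):
        # The `var` of the stateroot, shared amongst all deployments
        # of the given os
        stateroot_var = os.path.join(tree, "ostree", "deploy",
                                     osname, "var")
        deployment_var = os.path.join(deployment, "var")

        # Bind the stateroot `var` to the `var` of the deployment,
        # then run systemd-tmpfiles on the deployment so that it
        # infers the correct paths
        subprocess.run(["mount", "--bind", stateroot_var,
                        deployment_var], check=True)
        subprocess.run(["systemd-tmpfiles", "--create",
                        "--root=" + deployment, "--prefix=/var"],
                       check=True)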
This moves the handling of includes to the manifest loader, thus
supporting nested includes. `search_dirs` is moved to a property of
the `Manifest` so that it can be tracked during loads.
In addition we need to fix `Manifest.path` to be the actual path
that was loaded instead of whatever the parent include said, so that
relative includes are resolved from the proper location of the
loaded manifest.
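A minimal sketch of the recursive loading (the include key and the
merge step are assumptions for illustration, not the real loader):

    import json
    import os

    def load_manifest(path, search_dirs):
        path = os.path.abspath(path)
        with open(path, encoding="utf8") as f:
            manifest = json.load(f)

        # Relative includes resolve from the location of the
        # manifest that was actually loaded, not from whatever
        # the parent include said
        base = os.path.dirname(path)

        for include in manifest.get("includes", []):
            for d in (base, *search_dirs):
                candidate = os.path.join(d, include)
                if os.path.exists(candidate):
                    # Nested includes are handled by the recursion
                    child = load_manifest(candidate, search_dirs)
                    manifest.setdefault("pipelines", []).extend(
                        child.get("pipelines", []))
                    break

        return manifest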
`process_embed_files()` assumed that a stage had a "type" field,
which breaks if a stage is e.g. an `mpp-if` node, so use `.get()`
instead of raw dict lookups.
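Schematically (the surrounding stage handling is elided):

    def process_embed_files(pipeline):
        for stage in pipeline.get("stages", []):
            # A node might be e.g. an `mpp-if` and carry no "type"
            # at all, so stage["type"] would raise KeyError
            if stage.get("type") is None:
                continue  # not a regular stage, nothing to embed
            ...  # process the embedded files of the stage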
YAML is essentially a superset of JSON, and it has some advantages,
like allowing comments and being easier for humans to read and
write.
This changes the reading of the file to use a YAML parser instead of
a JSON parser, but still produces JSON at the end. I tried manually
converting a JSON file to YAML and running osbuild-mpp, and it
produced an identical file.
By default the YAML parser doesn't preserve key order, so I had to
tweak the loader a bit to use OrderedDict.
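A minimal sketch of that tweak using PyYAML (this is the standard
order-preserving loader recipe; the actual osbuild-mpp code may
differ):

    import collections
    import json
    import yaml

    def ordered_load(stream):
        class OrderedLoader(yaml.SafeLoader):
            pass

        def construct_mapping(loader, node):
            loader.flatten_mapping(node)
            return collections.OrderedDict(loader.construct_pairs(node))

        OrderedLoader.add_constructor(
            yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
            construct_mapping)
        return yaml.load(stream, OrderedLoader)

    # YAML is (essentially) a superset of JSON, so the same loader
    # also reads existing .json inputs; the output stays JSON:
    with open("manifest.yaml", encoding="utf8") as f:
        data = ordered_load(f)
    print(json.dumps(data, indent=2))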
For now disable some new warnings from pylint:
- `consider-using-from-import`
- `consider-using-with`
It probably makes sense to re-enable these in the future, but
for now let's keep the code as is.
This stage takes the same args and formats as org.osbuild.untar (and
as such much of the code is just copied from that stage), except it
runs gunzip instead. I need this to uncompress the aarch64 kernel
when directly UEFI-booting it.
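A minimal sketch of the core of such a stage (the input plumbing is
an assumption modelled on the untar stage, not the exact stage code):

    import os
    import subprocess

    def main(inputs, output_dir, options):
        # Single input file, same shape as for org.osbuild.untar
        # (assumed)
        image = inputs["file"]
        files = image["data"]["files"]
        assert len(files) == 1
        filename, _ = files.popitem()
        source = os.path.join(image["path"], filename)

        # Decompress with gunzip, writing the result into the tree
        dest = os.path.join(output_dir, options["path"].lstrip("/"))
        with open(dest, "wb") as f:
            subprocess.run(["gunzip", "--stdout", source],
                           stdout=f, check=True)
        return 0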
This is to showcase it as much as to test its functionality. For this
the tar and xz stage tests have been converted. NB: only the mpp file
for each test is changed but the corresponding manifest is not.
The `sources/org.osbuild.inline` section has been kept; otherwise
the ordering in the result manifest would change.
Extract the code that finds a file and opens it from the existing
method that finds manifests and opens them, so that the former code
can be re-used.
Instead of having custom code that basically duplicates the
functionality of `LoopControl.loop_for_fd`, use that function.
Additionally, the version used in the test had a bug where it did
not re-create the Loop device in the main loop when it was closed
due to an error, leading to errors in subsequent usages of the
device that would often manifest in CI runs:
fcntl.ioctl(self.fd, self.LOOP_SET_FD, fd)
ValueError: file descriptor cannot be a negative integer (-1)
This applies the default authconfig settings to the tree.
Note that the `/backups` directory is removed. The tool creates
this, and by default it should not exist, so this should be a
no-op. However, if you run this on a tree with existing backups,
they would be lost.
Commit 5b1cd2b made `source` and `target` for mounts optional, but
the corresponding code in `describe` still assumes that the device
will always be present. Fix this so that `source` is only used
if it is set.
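A minimal sketch of the fix (the mount object shape is illustrative):

    def describe_mount(mount):
        desc = {"name": mount.name, "type": mount.type}

        # `source` and `target` are optional since commit 5b1cd2b,
        # so only include them when they are actually set
        if mount.source:
            desc["source"] = mount.source
        if mount.target:
            desc["target"] = mount.target

        return desc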