Require all tests that compile a manifest to specify either
checkpoints or exports. Convert all the tests that were relying
on implicit exports with v1 manifests to use explicit exports.
The current implementation of `rmtree` will try to fix permissions
when it encounters permission errors during its operation. This is
done by opening the target via `os.open` and then adjusting the
immutable flag and the permission bits. This is a problem when the
target is a broken symlink since open will fail with `ENOENT`. A
simple reproducer of this scenario is:
$ mkdir subdir
$ ln -s foo subdir/broken
$ chmod a-w subdir/
$ python3 -c 'import osbuild; osbuild.util.rmrf.rmtree("subdir")'
Since subdir is not writable, removing `subdir/broken` will fail
with `EPERM` and the `on_error` callback will try to fix it by
invoking `fixperms` on `subdir/broken`, which will fail in `open`
since the target does not exist (broken symlink).
This is fixed by passing `O_NOFOLLOW` to `open` so we never open
the target. Instead `open` will fail with `ELOOP`; we ignore that
error, and in fact we now ignore all errors from `open`, since they
do not matter: if fixing the permissions didn't work, `unlink` will
just fail (again) with `EPERM`, and for symlinks it doesn't matter
anyway since "on Linux, the permissions of an ordinary symbolic
link are not used in any operations", see symlink(7).
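A minimal sketch of the adjusted `fixperms` error handling, assuming
a helper roughly along these lines (names and the omitted
immutable-flag handling are illustrative, not the verbatim osbuild
code):

    import os

    def fixperms(path):
        # O_NOFOLLOW ensures a symlink is never followed: open fails
        # with ELOOP for symlinks and with ENOENT for missing targets
        try:
            fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
        except OSError:
            # ignore all open errors: if the permissions could not be
            # fixed, the subsequent unlink will fail with EPERM anyway
            return
        try:
            # clearing the immutable flag via ioctl() omitted for brevity
            os.fchmod(fd, 0o700)
        except OSError:
            pass
        finally:
            os.close(fd)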
Since we bind `/proc` inside the container, we leak certain information
that comes with it. One of these is the kernel command line. None of the
decisions made by software running inside the container should depend
on the kernel command line of the host, so overwrite the kernel command
line by creating a temporary directory and mapping it inside the
buildroot. For now we default to a simple `root=/dev/osbuild` fake
kernel command line.
Add a simple check for it as well.
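A rough sketch of the idea, assuming the buildroot setup bind-mounts
a prepared file over the container's `/proc/cmdline` (paths and the
helper name are illustrative):

    import os
    import subprocess
    import tempfile

    FAKE_CMDLINE = "root=/dev/osbuild\n"

    def setup_fake_cmdline(buildroot_proc):
        # write the fake kernel command line into a temporary directory
        tmp = tempfile.mkdtemp(prefix="osbuild-cmdline-")
        path = os.path.join(tmp, "cmdline")
        with open(path, "w", encoding="utf-8") as f:
            f.write(FAKE_CMDLINE)
        # bind-mount it over /proc/cmdline inside the buildroot so the
        # host's kernel command line never leaks into the container
        subprocess.run(
            ["mount", "--bind", path, os.path.join(buildroot_proc, "cmdline")],
            check=True)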
Instead of having a temporary directory on the host dedicated to `/var`
inside the container, create a generic temporary directory that can be
used for other things as well and create `var` inside it.
The idea behind the stage is to provision the `var` of the stateroot,
i.e. the `var` that is shared amongst all deployments for a given
OS (indicated by `osname`, e.g. `fedora`, `centos`, ...).
For `systemd-tmpfiles` to infer the correct paths, it needs to be
run on the deployment. The `var` of the latter needs to be bind-
mounted to the `var` of the stateroot, because it is shared. This
was always the intention but not what the code did. Fix this by
getting the `var` of the stateroot and binding it to the `var` of
the deployment.
NB: In reality this never mattered since `systemd-tmpfiles` is also
run during system startup.
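Conceptually the fix amounts to something like the following sketch;
the stateroot/deployment paths and the exact `systemd-tmpfiles` flags
are assumptions, not the stage's actual code:

    import os
    import subprocess

    def provision_var(tree, osname, deployment_root):
        # var shared by all deployments of this stateroot (osname)
        stateroot_var = os.path.join(tree, "ostree/deploy", osname, "var")
        deployment_var = os.path.join(deployment_root, "var")

        # bind the stateroot's var onto the deployment's var so that
        # systemd-tmpfiles sees the shared directory at the right path
        subprocess.run(["mount", "--bind", stateroot_var, deployment_var],
                       check=True)
        try:
            subprocess.run(["systemd-tmpfiles", "--create",
                            "--root", deployment_root, "--prefix", "/var"],
                           check=True)
        finally:
            subprocess.run(["umount", deployment_var], check=True)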
This moves the handling of includes to the manifest loader, thus
supporting nested includes. `search_dirs` is moved to a property of
the `Manifest` so that it can be tracked during loads.
In addition we need to set `Manifest.path` to the actual path that was
loaded instead of whatever the parent include said, so that relative
includes are resolved from the proper location of the loaded manifest.
`process_embed_files()` assumed that a stage had a "type" field,
which breaks if a stage is e.g. an `mpp-if` node, so use `.get()`
instead of raw dict lookups.
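For illustration, the difference boils down to the following; the
`mpp-if` node shown is a hypothetical example:

    stage = {"mpp-if": "arch == 'aarch64'", "then": []}

    # stage["type"] would raise KeyError for this node, whereas .get()
    # returns None so the node can simply be skipped
    if stage.get("type") is not None:
        print("regular stage:", stage["type"])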
YAML files are essentially compatible with JSON files (YAML is more
or less a superset of JSON), and they have some advantages, like
allowing comments and being easier for humans to read and write.
This changes the reading of the file to use a YAML parser instead of
a JSON parser, but still produces JSON at the end. I tried manually
converting a JSON file to YAML and running osbuild-mpp, and it produced
an identical file.
By default the YAML parser doesn't preserve key order, so I had to
tweak the loader a bit to use OrderedDict.
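A minimal sketch of that tweak, using PyYAML's `SafeLoader` with a
custom mapping constructor; this mirrors the common recipe rather
than necessarily the exact code in osbuild-mpp:

    import json
    from collections import OrderedDict

    import yaml

    class OrderedLoader(yaml.SafeLoader):
        pass

    def _ordered_mapping(loader, node):
        loader.flatten_mapping(node)
        return OrderedDict(loader.construct_pairs(node))

    OrderedLoader.add_constructor(
        yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _ordered_mapping)

    with open("manifest.mpp.yaml", encoding="utf-8") as f:
        data = yaml.load(f, Loader=OrderedLoader)

    # still emit JSON at the end, with the original key order preserved
    print(json.dumps(data, indent=2))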
For now disable some new warnings from pylint:
- `consider-using-from-import`
- `consider-using-with`
It probably makes sense to re-enable these in the future but
for now let's keep the code as is.
This stage takes the same args and formats as `org.osbuild.untar` (and
as such much of the code is just copied from that stage), except it
runs `gunzip` instead. I need this to uncompress the aarch64 kernel
when directly UEFI-booting it.
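The core of the stage is essentially the following sketch; the option
names (`filename`, `path`) are assumptions modeled on the untar stage
rather than the real schema:

    import os
    import subprocess

    def main(tree, options):
        filename = options["filename"]        # compressed input file
        path = options["path"].lstrip("/")    # destination inside the tree

        target = os.path.join(tree, path)
        os.makedirs(os.path.dirname(target), exist_ok=True)

        # `gunzip --stdout` leaves the input untouched and streams the
        # decompressed data into the destination file
        with open(target, "wb") as f:
            subprocess.run(["gunzip", "--stdout", filename],
                           stdout=f, check=True)

        return 0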
This is to showcase it as much as to test its functionality. For this
the tar and xz stage tests have been converted. NB: only the mpp file
for each test is changed but the corresponding manifest is not.
The `sources/org.osbuild.inline` section has been kept, since otherwise
the ordering in the resulting manifest would change.
Extract the code that finds a file and opens it from the existing
method that finds manifests and opens them. This is so that the
former code can be re-used.
Instead of having custom code that basically duplicates the
functionality of `LoopControl.loop_for_fd`, use the latter directly.
Additionally, the version used in the test had a bug where
it did not re-create the `Loop` device in the main loop when
it was closed due to an error, leading to errors in subsequent
usages of the device that would often manifest in CI runs:
fcntl.ioctl(self.fd, self.LOOP_SET_FD, fd)
ValueError: file descriptor cannot be a negative integer (-1)
This applies the default authconfig settings to the tree.
Note that the `/backups` directory is removed. The tool creates
this, and by default it should not exist, so this should be a
no-op. However, if you run this on a tree with existing backups,
they would be lost.
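In essence the stage boils down to a sketch like this; the exact
authconfig invocation and the backup path are assumptions about how
the tool is typically driven, not a verbatim copy of the stage:

    import os
    import shutil
    import subprocess

    def main(tree):
        # write the default authconfig configuration into the tree
        subprocess.run(["chroot", tree, "authconfig",
                        "--nostart", "--updateall"],
                       check=True)

        # authconfig creates a backups directory; in a fresh tree it
        # should not exist, so removing it is normally a no-op
        shutil.rmtree(os.path.join(tree, "backups"), ignore_errors=True)

        return 0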
Commit 5b1cd2b made `source` and `target` for mounts optional, but
the corresponding code in `describe` still assumes that the device
will always be present. Fix this so that `source` is only used
if it is set.
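The fix is essentially a guarded lookup along these lines (field
names are illustrative, not the exact `describe` code):

    def describe_mount(mount):
        description = {"type": mount["type"]}
        # only include source/target when they are actually set
        if mount.get("source"):
            description["source"] = mount["source"]
        if mount.get("target"):
            description["target"] = mount["target"]
        return description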
This should normally not be needed, but it can be a sledgehammer
for situations where we cannot properly label a file; it turns
out one such scenario is when a label, let's call it `a1`, is an
alias for another label, let's call it `l1`. Setting `a1` will
lead to `l1` being read back, and thus copying the label `a1`
will result in the label `l1` being copied instead. Now if the
target distribution does not have `l1` but only has `a1`, we
cannot set it and thus will end up with an unlabeled file.
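As a sketch, such an escape hatch could look like explicitly setting
a requested label on selected paths, e.g. via `chcon`; the `labels`
mapping shown here is an assumed option, not the stage's real schema:

    import os
    import subprocess

    def set_labels(tree, labels):
        # labels: path inside the tree -> full SELinux label, e.g.
        # {"/usr/bin/cp": "system_u:object_r:install_exec_t:s0"}
        for path, label in labels.items():
            target = os.path.join(tree, path.lstrip("/"))
            # -h sets the label on a symlink itself, not its target
            subprocess.run(["chcon", "-h", label, target], check=True)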
Adds support to configure `yum-plugins`, which currently is a full
alias for `dnf-plugins`, although this might change in the future
should dnf options diverge from yum. It allows both yum and dnf
plugins to be configured at the same time, since on RHEL 7 both
files will be present.
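A sketch of what writing such plugin configuration could look like;
the option layout and the destination directories are assumptions:

    import configparser
    import os

    def write_plugin_conf(tree, confdir, plugin, settings):
        # settings is a simple mapping, e.g. {"enabled": "1"}
        parser = configparser.ConfigParser()
        parser["main"] = settings

        path = os.path.join(tree, confdir, f"{plugin}.conf")
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w", encoding="utf-8") as f:
            parser.write(f)

    def main(tree, options):
        # dnf plugins live under /etc/dnf/plugins, yum plugins under
        # /etc/yum/pluginconf.d; on RHEL 7 both may be wanted
        for plugin, settings in options.get("dnf-plugins", {}).items():
            write_plugin_conf(tree, "etc/dnf/plugins", plugin, settings)
        for plugin, settings in options.get("yum-plugins", {}).items():
            write_plugin_conf(tree, "etc/yum/pluginconf.d", plugin, settings)
        return 0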
Add a new stage for modifying YUM global configuration.
Add a unit test case for the newly added stage.
Because we test stages on Fedora, where there is no YUM, and this stage
is mostly intended to be used with RHEL-7 images, the stage does not
produce an error in case the `/etc/yum.conf` file does not exist.
Instead it produces a warning and creates the file. Ideally the stage
would produce an error in case the configuration file does not exist,
but that would be impossible to test on recent Fedora.
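A condensed sketch of that behaviour, assuming a simple `config`
mapping for the `[main]` section (option names are illustrative):

    import configparser
    import os
    import sys

    def main(tree, options):
        path = os.path.join(tree, "etc/yum.conf")
        parser = configparser.ConfigParser()

        if os.path.exists(path):
            parser.read(path)
        else:
            # ideally this would be an error, but that cannot be tested
            # on Fedora where /etc/yum.conf does not exist
            print(f"warning: {path} does not exist, creating it",
                  file=sys.stderr)

        if not parser.has_section("main"):
            parser.add_section("main")
        for key, value in options.get("config", {}).items():
            parser.set("main", key, str(value))

        with open(path, "w", encoding="utf-8") as f:
            parser.write(f)

        return 0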
Signed-off-by: Tomas Hozza <thozza@redhat.com>
It's possible that the keys "logging" and "telemetry" can actually be
arbitrary names. If that turns out to be the case, we can still relax
the schema later without breaking backwards compatibility, so defining
only the known keys now is the safer option.