Add a new class `SubIdsDB` as a database of subordinate IDs, like the
ones in `/etc/subuid` and `/etc/subgid`. Methods are provided to read
and write these two files.
Add corresponding unit tests.
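For illustration, a minimal sketch of such a database; the method and
attribute names here are illustrative, not necessarily the actual
`SubIdsDB` API:

    class SubIdsDB:
        """Subordinate ID database, one "name:start:count" entry per line."""

        def __init__(self):
            self.db = {}  # name -> (start, count)

        def read(self, path):
            with open(path, "r", encoding="utf8") as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    name, start, count = line.split(":")
                    self.db[name] = (int(start), int(count))

        def write(self, path):
            with open(path, "w", encoding="utf8") as f:
                for name, (start, count) in self.db.items():
                    f.write(f"{name}:{start}:{count}\n")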
Add a new member variable `caps` that, if not `None`, indicates the
capabilities to retain; all capabilities not specified will be dropped
via `bubblewrap` (`--cap-drop`).
Add corresponding tests.
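Roughly, the retained set would be translated into bubblewrap arguments
along these lines (a sketch; only the `--cap-drop` flag is taken from
bubblewrap itself, the helper and its parameters are assumptions):

    def caps_args(caps, all_caps):
        # `caps`: capabilities to retain (the new member variable);
        # `all_caps`: the full set known to the build root -- an
        # illustrative parameter, not part of the real API.
        args = []
        if caps is not None:
            for cap in sorted(all_caps - caps):
                args += ["--cap-drop", cap]
        return args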
This extends the possible ways of passing references to inputs. The
current ways possible are:
1) "plain references", an array of strings:
["ref1", "ref2", ...]
2) "object references", a mapping of keys to objects:
{"ref1": { <options> }, "ref2": { <options> }, ...}
This patch adds a new way:
3) "array of object references":
[{"id": "ref1", "options": { ... }}, {"id": ... }, ]
While osbuild promises to preserve the order for "object references",
not all JSON serialization libraries preserve it, since the JSON
specification leaves this up to the implementation.
The new "array of object references" thus allows specifying the
references together with reference-specific options in a well-defined
order.
Additionally, this paves the way for specifying the same input twice,
e.g. in the case of the `org.osbuild.files` input, where a pipeline
could then be specified twice with different files. This needs core
rework though, since internally we use dictionaries right now.
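A sketch of how the three forms could be normalized into one ordered
internal representation (illustrative only, not the actual parser
code):

    def normalize_refs(refs):
        """Return a list of (ref, options) pairs, preserving order."""
        if isinstance(refs, list):
            result = []
            for ref in refs:
                if isinstance(ref, str):              # 1) plain reference
                    result.append((ref, {}))
                else:                                 # 3) object with "id"
                    result.append((ref["id"], ref.get("options", {})))
            return result
        # 2) mapping of references to options
        return [(ref, opts) for ref, opts in refs.items()]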
Specifically, this test checks that the order given in the manifest is
preserved when loaded, i.e. the internal dict has its keys ordered in
the same way, regardless of whether the references were specified as a
list or an object.
The unit test consists of a manifest creating an empty file, which is
then converted to various formats using the `org.osbuild.qemu` stage
in separate pipelines.
The unit test then builds and exports each of these pipelines, inspects
the resulting image file using the `qemu-img info` command, and checks
that the test data specified in `checks.json` is a subset of the data
returned by the command.
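The subset comparison could look roughly like this (a sketch; the test
helpers are assumptions, only the `qemu-img info --output json`
invocation is taken from the tool itself):

    import json
    import subprocess

    def image_info(path):
        # `qemu-img info --output json` prints machine-readable image data
        res = subprocess.run(["qemu-img", "info", "--output", "json", path],
                             check=True, capture_output=True, encoding="utf8")
        return json.loads(res.stdout)

    def is_subset(expected, actual):
        # every key/value pair from checks.json must appear in the output
        return all(actual.get(k) == v for k, v in expected.items())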
This is basically a re-implementation of `setfilecon(3)` minus the
translation of a human-readable context to a raw context. Add a test
for the new function.
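In essence this boils down to writing the raw context to the
`security.selinux` extended attribute, roughly (a sketch, not the exact
new function):

    import os

    def setfilecon(path: str, context: str) -> None:
        # Write the raw context directly to the "security.selinux"
        # extended attribute; unlike setfilecon(3), no translation from
        # a human-readable context is performed.
        os.setxattr(path, "security.selinux", context.encode("utf8"))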
Add a new attribute `config.default` that, when set, will be written
to `GRUB_DEFAULT`. It should be set to `saved` when a `saved_entry` is
specified so that the functionality is preserved if the grub cfg gets
regenerated (which it really should not be, but we cannot prohibit
it).
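A sketch of the intended effect (the target file, assumed here to be
`/etc/default/grub` inside the tree, and the option handling are
illustrative):

    def write_grub_default(tree, config):
        default = config.get("default")
        if default:
            # Assumed target: the GRUB_DEFAULT variable in /etc/default/grub
            with open(f"{tree}/etc/default/grub", "a", encoding="utf8") as f:
                f.write(f"GRUB_DEFAULT={default}\n")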
When the firewall stage is provided with stage options that set only
the default firewall zone, the `firewall-offline-cmd` command is
executed unconditionally without any parameters, because in this case
`ports`, `enabled_services` and `disabled_services` are all empty
lists. This results in a failure with the following error message:
`Opening of '/etc/sysconfig/system-config-firewall' failed, exiting.`
Make sure that the second invocation of `firewall-offline-cmd` happens
conditionally, only when at least one of the `ports`, `enabled_services`
or `disabled_services` is a non-empty list.
Adjust the stage test to cover this scenario.
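The fix amounts to guarding the second invocation, roughly as in this
sketch (the exact flag mapping follows the existing stage code and is
only shown schematically):

    import subprocess

    def configure_firewall(ports, enabled_services, disabled_services):
        extra = ([f"--port={p}" for p in ports]
                 + [f"--service={s}" for s in enabled_services]
                 + [f"--remove-service={s}" for s in disabled_services])
        # Only run the second firewall-offline-cmd invocation when there
        # is actually something to change; with no arguments it fails
        # because /etc/sysconfig/system-config-firewall does not exist.
        if extra:
            subprocess.run(["firewall-offline-cmd"] + extra, check=True)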
Signed-off-by: Tomas Hozza <thozza@redhat.com>
In all invocations of `subprocess.run`, stderr and stdout were
combined in a shared pipe, but lvm sometimes emits notices and
informational messages on stderr, potentially interfering with the
data we are interested in on stdout. Separate the two.
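In terms of `subprocess.run` this is simply (sketch):

    import subprocess

    def run_lvm(cmd):
        # Keep stdout (the data we parse) separate from stderr, where
        # lvm may print notices and informational messages.
        res = subprocess.run(cmd, check=True, encoding="utf8",
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        return res.stdout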
Extend the firewall stage to allow setting the default firewall zone.
Modify the stage unit test accordingly.
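Setting the default zone maps to a single additional invocation,
roughly as below (a sketch; the `--set-default-zone` flag is the
firewalld option for this, while the option name and surrounding
handling are assumptions):

    import subprocess

    def set_default_zone(zone):
        if zone:  # only when the option was given
            subprocess.run(["firewall-offline-cmd",
                            f"--set-default-zone={zone}"], check=True)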
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This stage is needed to write the FDO DIUN public key root certs
required to talk to the manufacturer server to grab the device
credentials for provisioning and later onboarding.
Co-Authored-By: Antonio Murdaca <runcom@linux.com>
Add support for emitting signals to `host.Service`, which can be used
to transmit data back to the client during an ongoing method call.
This allows services to send information to their client counterpart
while running. A signal can take file descriptors as extra parameters
to send data on separate files.
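Under the hood this relies on the usual UNIX mechanism of passing file
descriptors as ancillary data over the transport socket; a
stripped-down illustration (not the actual `host.Service` code):

    import array
    import socket

    def send_with_fds(sock: socket.socket, payload: bytes, fds):
        # Send `payload` plus the given file descriptors as SCM_RIGHTS
        # ancillary data; the receiver gets duplicated descriptors.
        ancdata = [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                    array.array("i", fds).tobytes())]
        sock.sendmsg([payload], ancdata)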
During the rework done in commit "use and require explicit exports"
with commit id 7ae4a7e78, this test got overlooked. Add an empty
list of checkpoints to the `obs.compile` invocation so as to actually
trigger the osbuild invocation.
Reported-By: Thomas Lavocat <tlavocat@redhat.com>
There is a source test that installs a pre-built, embedded image file
and ensures all the right files are installed. This uses the vfs driver
because then it works everywhere, including the CI (which doesn't do
overlayfs).
Then there is a source test that downloads a minimal image from
a faked registry on localhost.
For the registry API to work, the "/v2" entry point in the webserver
has to be at the root, so there is a symlink in test/data:
v2 -> sources/org.osbuild.skopeo/data/v2
But otherwise the data is localized to sources/org.osbuild.skopeo.
The treesum of a filesystem tree is the content hash of all its
files, its directory structure and file metadata.
By storing trees by their treesum we avoid storing duplicates of
identical trees, at the cost of computing the hashes for every
commit to the store.
This has limited benefit as the likelihood of two trees being
identical is slim, in particular when we already have the ability
to cache based on pipeline/stage ID (i.e., we can avoid rebuilding
trees if the pipelines that built them were the same).
Drop the concept of a treesum entirely, even though I very much
liked the idea in theory...
Signed-off-by: Tom Gundersen <teg@jklm.no>
Pacman is the default package manager for Arch Linux and its
derivatives; the pacman.conf stage generates a valid pacman.conf
configuration file.
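A sketch of what the stage boils down to (the option names are
assumptions, not the actual stage schema):

    def write_pacman_conf(tree, options):
        with open(f"{tree}/etc/pacman.conf", "w", encoding="utf8") as f:
            f.write("[options]\n")
            f.write(f"Architecture = {options.get('architecture', 'auto')}\n")
            for repo in options.get("repositories", []):
                f.write(f"\n[{repo['name']}]\n")
                f.write(f"Server = {repo['server']}\n")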
Co-Authored-By: Jelle van der Waa <jvanderwaa@redhat.com>
Add a new stage `org.osbuild.dnf-automatic.config` for configuring DNF
Automatic.
The stage changes persistent DNF Automatic configuration. Currently, only
a subset of options can be set:
- 'commands' section
- apply_updates
- upgrade_type
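A sketch of how the stage could apply these settings using
`configparser` (the config path inside the tree is assumed to be the
default /etc/dnf/automatic.conf):

    import configparser

    def configure_dnf_automatic(tree, commands):
        path = f"{tree}/etc/dnf/automatic.conf"
        config = configparser.ConfigParser()
        config.read(path)
        if not config.has_section("commands"):
            config.add_section("commands")
        for key, value in commands.items():   # e.g. apply_updates, upgrade_type
            config.set("commands", key, str(value))
        with open(path, "w", encoding="utf8") as f:
            config.write(f)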
Fix #908
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add a new stage `org.osbuild.yum.repos` for creating YUM / DNF `.repo`
files in `/etc/yum.repos.d`. All repo-specific options are supported,
but only a subset of the options that can be set both for a repo and
in the `[main]` section is supported.
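For illustration, a sketch of writing such a `.repo` file (the repo id
and values are made up; the exact option mapping follows the stage
schema):

    import configparser

    def write_repo_file(tree, filename, repos):
        # `repos`: mapping of repo id to its options, e.g.
        # {"baseos": {"name": "BaseOS", "baseurl": "https://example.com/baseos"}}
        parser = configparser.ConfigParser()
        for repo_id, options in repos.items():
            parser[repo_id] = {k: str(v) for k, v in options.items()}
        with open(f"{tree}/etc/yum.repos.d/{filename}.repo", "w",
                  encoding="utf8") as f:
            parser.write(f)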
Add unit test for the new stage.
Fix #907
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Set the container environment variable to indicate to programs
inside the build root that they are indeed running inside a
container (see also https://systemd.io/CONTAINER_INTERFACE/).
Create a well-defined environment and use that for the build root. It
is not desirable to have the host's environment leak into the
container. Add a test to ensure that this works.
NB: This was probably an oversight when we switched from systemd-
nspawn to bubblewrap.
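A sketch of the idea; the exact variables and the `container` value
are assumptions for illustration, and the bubblewrap command line is
only a demo, not the actual build-root invocation:

    import os
    import subprocess

    # Construct a well-defined environment instead of inheriting the host's.
    env = {
        "container": "bwrap",
        "PATH": "/usr/sbin:/usr/bin",
        "LC_CTYPE": "C.UTF-8",
        "TERM": os.getenv("TERM", "dumb"),
    }
    cmd = ["bwrap", "--ro-bind", "/", "/", "--dev", "/dev", "--proc", "/proc",
           "/usr/bin/env"]
    subprocess.run(cmd, env=env, check=True)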
Wrap the LVM volume group in a LUKS container with the passphrase
`osbuild` (yes, really, super secure). NB: the kernel command line
is changed to include `luks.uuid`, which is needed so that dracut
will attempt to open the LUKS container. This corresponds to a
crypttab entry `luks-uuid UUID`. We cannot use /etc/crypttab
for ostree-based images because the initrd is created at commit
time but the LUKS volume is created at deployment time, so we have
to use the kernel command line instead. See the
systemd-cryptsetup-generator(8) man page for more information.
The `cryptsetup` package is included in the build root since it is
needed by the `org.osbuild.luks2.format` stage. All manifests that
are using the `f34-build-v2` build root change as a result.
OSTree tests, especially the fedora-ostree-image one, will soon
need tight integration with the host for LVM2/LUKS support.
Thus we cannot run them in GitHub Actions containers. Move them
to Schutzbot.
Explicitly install the new sub-package until composer gains the
needed requirement.
Add a new signal-like callback to the `Loop` class which will be
invoked before the actual loop device is closed, i.e. while the loop
device still has an open file descriptor to the device node, just
before it is closed. It can be used to perform custom cleanup tasks.
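A usage sketch; the registration attribute name is an assumption, not
necessarily the actual `Loop` API:

    def on_close(lo):
        # Invoked right before the loop device is closed; the file
        # descriptor to the device node is still open here, so custom
        # cleanup (flushing, waiting for the backing file, logging) is
        # still possible.
        print("about to close loop device", lo)

    # hypothetical registration on an existing Loop instance `loop`:
    # loop.on_close = on_close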
Certain udev rules for block devices are problematic for osbuild.
One prominent example is LVM2-related rules that trigger a scan and
auto-activation of logical volumes. These rules are triggered for new
block devices or when the backing file of a loop device changes. They
lead to an `lvm pvscan --cache --activate ay` via the
`lvm2-pvscan@.service` systemd service. This auto-activates all LVM2
logical volumes and thus interferes with our own device handling in
`devices/org.osbuild.lvm2.lv`, where we only want to activate a single
logical volume.
Also, if the lvm2 devices get activated after the manual metadata
change done in `org.osbuild.lvm2.metadata`, the volume group names
might conflict, which results in all lvm2-based tooling being very,
very sad, and also causes said stage to hang, since the loopback
device cannot be detached while the activated logical volumes keep it
open.
To work around this, we implement a udev rule inhibition mechanism:
on the osbuild side, a lock file is created via the new
`UdevInhibitor` class in `utils/udev.py`. A custom set of udev rules
in `10-osbuild-inhibitor.rules` then acts on the existence of that
lock file and, if it is present, opts out of certain further
processing. See the udev rules file for more details.
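A sketch of the Python half of the mechanism; the lock directory and
naming scheme are assumptions, not the actual `UdevInhibitor`
implementation:

    import os

    class UdevInhibitor:
        LOCKDIR = "/run/osbuild/locks/udev"   # assumed location

        def __init__(self, device_node):
            os.makedirs(self.LOCKDIR, exist_ok=True)
            # one lock file per device, checked by 10-osbuild-inhibitor.rules
            self.path = os.path.join(self.LOCKDIR,
                                     os.path.basename(device_node))
            open(self.path, "w", encoding="utf8").close()

        def release(self):
            os.unlink(self.path)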
In fact, we want this custom inhibition mechanism for all block
devices that are under osbuild's control, since these rules exist to
provide automation and integration with the host, something we never
want.
NB: this should not affect the detection of devices, since lvm2
does do a scan of devices when we call `lvdisplay` in `lvm2.lv`.
The call chain as of lvm2 git rev f773040:
  _lvdisplay_single   [tools/lvdisplay.c]
  process_each_lv     [tools/toollib.c]
  lvmcache_label_scan [lib/cache/lvmcache.c]
  label_scan          [ibidem, here is the device detection!]
  lvdisplay_full      [lib/display/display.c]
Add support for `PermitRootLogin` option in the
`org.osbuild.sshd.config` stage.
I kept the "yes" and "no" values for consistency with other stage
options. While it will make the implementation in osbuild-composer
harder, it won't be impossible as we already have a precedence for doing
it this way (e.g. in the `org.osbuild.pam.limits.conf`).
Modify the stage unit tests to check the new option.
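The value handling boils down to mapping booleans back to the
`yes`/`no` strings sshd expects, roughly as in this sketch (assuming
the option mirrors the other boolean sshd options; the helper name is
made up):

    def permit_root_login_line(value):
        # Booleans map to the "yes"/"no" strings sshd expects; other
        # string values (if the schema allows them) pass through unchanged.
        if isinstance(value, bool):
            value = "yes" if value else "no"
        return f"PermitRootLogin {value}\n"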
Remove the empty `org.osbuild.sshd.config` stage from `a.mpp.json`
since it does not add any value and it actually made the `tree-diff`
tool produce weird tree-diff results.
Fix #910
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Use the new Manifest.depsolve function to only build the pipelines that
were explicitly requested and their dependencies, taking into account
what is already present in the store.
Since now not all pipelines will be built, there won't be a result
entry for every pipeline; thus the format version 2 result formatting
was changed to not require each pipeline to be present in the result
set.