'_hawkey.Reldep' object has no attribute 'name' in the version shipped
on RHEL-8. Add code to handle this situation in case it happens.
Default to using the named attributes if they are available.
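For illustration, the fallback boils down to something like this
(function name illustrative; `str()` on a Reldep is assumed to yield the
dependency string):
```python
def reldep_to_str(reldep):
    # newer libdnf versions expose named attributes on Reldep;
    # the version shipped on RHEL-8 may not, so fall back to the
    # string representation of the whole dependency
    if hasattr(reldep, "name"):
        return reldep.name
    return str(reldep)
```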
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Since the `with_sbom` variable was used in only a single place, we can
simplify the code (and remove one extra line) by using the condition
directly in the if statement.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Instead of hard-coding the Python version that the installed
python3-dnf has been built against on the latest Fedora, read the
value from the osbuild-ci container. The container now has the version
written in /osb/libdnf-python-version.
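Consuming the file is then a one-liner, along these lines:
```python
from pathlib import Path

# the osbuild-ci container writes the Python version that
# python3-dnf was built against into this file
libdnf_python_version = Path("/osb/libdnf-python-version").read_text().strip()
```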
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a new stage, which allows analyzing the installed packages in a
given filesystem tree using DNF4 API and generating an SPDX v2.3 SBOM
document for it.
One can provide the filesystem tree to be analyzed as a stage input. If
no input is provided, the stage will analyze the filesystem tree of the
current pipeline.
Add test cases for both usage variants of the stage, as well as a
unit test for the stage schema validation.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend osbuild-depsolve-dnf to return JSON with an SPDX SBOM that
corresponds to the depsolved package set, if it has been requested.
For now, only DNF4 is supported.
Cover the new functionality with a unit test.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This will allow validating request arguments in the solver method
differently for dnf4 and dnf5, and raising an exception if needed.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add an implementation of a standard-agnostic model for SBOMs, and a
simple SPDX v2.3 model. Also add convenience functions for converting a
DNF4 package set to the standard-agnostic model and for converting that
model to the SPDX model.
Cover the functionality with unit tests.
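The rough shape of that conversion layer, with hypothetical class and
function names (the real module's API may differ):
```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Package:
    # standard-agnostic representation of a package
    name: str
    version: str
    checksums: Dict[str, str] = field(default_factory=dict)


def dnf_pkgset_to_sbom_pkgset(dnf_pkgset) -> List[Package]:
    # convert DNF4 package objects into the standard-agnostic model
    return [Package(name=p.name, version=p.evr) for p in dnf_pkgset]


def sbom_pkgset_to_spdx2_doc(pkgset: List[Package]):
    # map the standard-agnostic model onto the SPDX v2.3 document model
    raise NotImplementedError
```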
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The 'dnf' Python package can't be installed using pip in the tox
environment. In order to test the code which uses it, we need to use the
system version. Our testing environment runs on Fedora, therefore we can
reasonably use the system version of 'dnf' only with the Python version
shipped on Fedora.
Enable site packages in tox for Python 3.12 when testing osbuild
internals.
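Illustrative tox configuration (the exact option name depends on the tox
version; it is `sitepackages` in tox 3 and `system_site_packages` in
tox 4):
```ini
[testenv:py312]
# let the py312 environment import the system 'dnf' module
sitepackages = true
```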
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
It's sometimes useful to set up a loop device for an already formatted
disk/filesystem image to derive new artifacts from it. In that case, we
want to make sure it's impossible to modify its contents in any way in
that process, both for our own purposes and for other stages operating
on it.
Notably, mounting some filesystems (like XFS) read-only still seems to
touch the disk.
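Outside of osbuild, the same effect can be sketched with losetup's
read-only mode:
```python
import subprocess

# attach the image read-only; the kernel will then reject writes
# through the loop device, even from filesystems like XFS that
# touch the disk on a read-only mount
loopdev = subprocess.run(
    ["losetup", "--find", "--show", "--read-only", "disk.img"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print(loopdev)  # e.g. /dev/loop0
```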
The existing jsoncomm is a work of beauty. For very big arguments,
however, the `SOCK_SEQPACKET` socket type hits the limits of the
kernel network buffer size (see also [0]). This led to various
workarounds in #824, #1331, #1836 where parts of the request are
encoded as part of the JSON method call and parts are passed via an
fd-passing side-channel.
This commit changes the code so that the fd channel is automatically
and transparently created and the workarounds are removed. A test
is added that ensures that very big messages can be passed.
[0] https://github.com/osbuild/osbuild/pull/1833
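A minimal sketch of the transparent large-message path, assuming the
payload is spilled into a memfd and passed via SCM_RIGHTS (Python 3.9+
`socket.send_fds`; names are illustrative, not the actual jsoncomm API):
```python
import os
import socket


def send_big(sock: socket.socket, payload: bytes) -> None:
    # spill the payload into an anonymous in-memory file and pass
    # only its fd over the SOCK_SEQPACKET channel
    fd = os.memfd_create("payload")
    os.write(fd, payload)
    os.lseek(fd, 0, os.SEEK_SET)
    socket.send_fds(sock, [b"big-message-follows"], [fd])
    os.close(fd)


def recv_big(sock: socket.socket) -> bytes:
    _msg, fds, _flags, _addr = socket.recv_fds(sock, 1024, 1)
    data = os.read(fds[0], os.fstat(fds[0]).st_size)
    os.close(fds[0])
    return data
```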
The code currently does not support btrfs subvolumes that are not
directly under the root directory. This commit fixes this by adding
`-p` to `btrfs subvolume create` and adding an integration test.
Closes: https://github.com/osbuild/osbuild/issues/1882
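In spirit, the fix amounts to (path illustrative):
```python
import subprocess

# --parents (-p) creates any missing parent components, so a
# subvolume nested below the root directory works in one call
subprocess.run(
    ["btrfs", "subvolume", "create", "-p", "/mnt/tree/var/lib/machines"],
    check=True,
)
```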
We need to pass loopback devices for these properties, but the schema
says that there will be a `path` property, so osbuild complains.
osbuild is right of course, but this definitely *did* work in an earlier
version, so something changed. Ideally, we'd narrow down what exactly
happened here, but at the same time this approach of just making the
property more generic matches what's done in e.g. the `zipl.inst` stage
where we also use a loopback device.
For reference, this is where we use this stage:
ba45b296ec/src/osbuild-manifests/platform.qemu.ipp.yaml (L100-L119)
The current error message when an export is not found could be
improved by printing which exports are actually available, to make
it easier for the user to e.g. spot typos.
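Something along these lines (illustrative, not the exact osbuild code):
```python
def check_export(export, available_exports):
    # list the available exports instead of just failing,
    # so typos are easy to spot
    if export not in available_exports:
        names = ", ".join(sorted(available_exports))
        raise ValueError(
            f"Export '{export}' not found. Available exports: {names}")
```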
Adds a new stage that calls the update-ca-trust tool with the extract
argument to extract CA certificates. It is expected that one or more CAs
are placed in the /etc/pki/ca-trust/source/anchors directory in PEM
format. Filenames do not matter but must be unique. See the
update-ca-trust man page for more details on what it does.
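The heart of such a stage is essentially a single call run against the
tree (hedged sketch; the real stage's plumbing differs):
```python
import subprocess


def main(tree):
    # anchors are expected under
    # <tree>/etc/pki/ca-trust/source/anchors in PEM format;
    # `update-ca-trust extract` regenerates the consolidated stores
    subprocess.run(["chroot", tree, "update-ca-trust", "extract"],
                   check=True)
    return 0
```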
Currently osbuild will always print some non-JSON output even
when run with `--monitor=JSONSeqMonitor` because of the
unconditional `print()`/`sys.stdout.write()` calls in `main_cli.py`.
This commit adds a new `-q` option to silence this so that something
like osbuild-composer can run `osbuild -q --monitor=JSONSeqMonitor`
to get pure json-seq output during the build.
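Conceptually the change is just a guard around those writes
(illustrative sketch, not the actual diff):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "-q", "--quiet", action="store_true",
    help="suppress status messages so the monitor output stays pure")
args = parser.parse_args()

if not args.quiet:
    print("manifest built successfully")  # previously unconditional
```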
The use-case is to run `osbuild --monitor-fd` from e.g. bib and
osbuild-composer so that we get pure json from the monitor-fd
and anything that goes on std{out,err} can be logged as it is
most likely error output.
Quick check to see if checkpointing "build" helps with the
runtime. Note that the cache size is already 20GB; I doubled
it for good measure, but we can probably go back to 20, I just
want to make sure this is not the bottleneck.
Closes: https://github.com/osbuild/osbuild/issues/1874
- Add an extra call to `/bin/false` and explicitly set the `check`
argument for both `run()` calls.
- Compare the full `call_args_list`. This checks that all the options are
as expected, that the `check` argument is set properly, and that the full
order of all the calls is as expected, including the chroot path.
Co-authored-by: Michael Vogt <michael.vogt@gmail.com>
For consistency, use subprocess.run() with check=True for the calls that
were previously using subprocess.check_call().
Update the affected tests to match.
Add a test for the chroot context that mocks subprocess.run() and
subprocess.check_call(). The test verifies that the functions are
called the expected number of times with the expected command (first
arg).
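A self-contained sketch of the pattern (the real test targets the chroot
context; names here are stand-ins):
```python
import subprocess
from unittest import mock


def do_mounts(tree):
    # stand-in for the chroot context's mount setup
    subprocess.run(["mount", "-t", "proc", "proc", f"{tree}/proc"], check=True)
    subprocess.run(["mount", "-t", "sysfs", "sys", f"{tree}/sys"], check=True)


@mock.patch("subprocess.run")
def test_mounts(mocked_run):
    do_mounts("/tree")
    expected = [
        mock.call(["mount", "-t", "proc", "proc", "/tree/proc"], check=True),
        mock.call(["mount", "-t", "sysfs", "sys", "/tree/sys"], check=True),
    ]
    assert mocked_run.call_args_list == expected


test_mounts()
```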
If one of the chroot mounts fails to unmount, keep iterating so that we
continue to unmount the rest instead of stopping.
Print an error message with the failed mounts, but don't fail the build.
Since a failed unmount doesn't fail the exiting of the context, and
the context itself doesn't know what will be running in the chroot,
do a lazy unmount.
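In outline (hedged sketch):
```python
import subprocess


def umount_all(mountpoints):
    failed = []
    for mountpoint in mountpoints:
        # --lazy detaches the mount even if it is still busy
        res = subprocess.run(["umount", "--lazy", mountpoint], check=False)
        if res.returncode != 0:
            failed.append(mountpoint)  # keep going, unmount the rest
    if failed:
        # report, but don't fail the build
        print(f"error: failed to unmount {failed}")
```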
This commit keeps track of individual errors, as curl will only report
the last download operation's success/failure via its exit code.
There is `--fail-early`, but the downside of that is that it aborts all
in-progress downloads too.
curl keeps a global parser state. This means that if there are
multiple "cacert =" values, they are just overridden and the last
one wins. This is why the `test_curl_download_many_mixed_certs`
test did not work - the second `cacert =` overwrites the previous
one.
To fix this we need to use `--next` when we need to change options
on a per-URL basis (like `cacert`). With `--next` curl starts a
new parser state for the next URL (but keeps the options set for
the previous ones). This commit does that in a slightly naive
way by just repeating our options for each URL. Technically
we could sort the sources so that we have less repetition, but
other than slightly smaller auto-generated files it has no
advantage.
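The generated config then repeats the per-URL options, separated by
`next`, roughly like this (URLs and paths illustrative):
```
url = "https://example.com/repo-a/pkg1.rpm"
cacert = "/etc/pki/ca1.pem"
output = "pkg1.rpm"
next
url = "https://example.com/repo-b/pkg2.rpm"
cacert = "/etc/pki/ca2.pem"
output = "pkg2.rpm"
```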
With this commit the `test_curl_download_many_mixed_certs` test
works.
When investigating https://github.com/osbuild/osbuild-composer/pull/4247
we found that it would fail when a download required two sets of
`--cacert` keys. This commit adds a test for this that fails with
the curl 7.76.1 version on CentOS 9.
The device node we create inside the buildroot has so far been
very minimal - just `/dev/{vg}-{lv}` with the appropriate major/minor.
However, when mount runs it will create a mapper device with the
same major/minor under `/dev/mapper/{escaped(vg)}-{escaped(lv)}`
and use that to mount the actual filesystem. Without this additional
device, findmnt will not be able to detect the udev attributes of
the source (as the source is just missing from /dev).
This commit creates the right mapper device node in the same way
that we create the non-mapper device node.
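Creating the extra node is essentially a second mknod with the
device-mapper naming, sketched here (device-mapper escapes '-' in
VG/LV names as '--'):
```python
import os
import stat


def make_lv_nodes(devroot, vg, lv, major, minor):
    dev = os.makedev(major, minor)
    mode = 0o600 | stat.S_IFBLK
    # the minimal node created so far
    os.mknod(os.path.join(devroot, f"{vg}-{lv}"), mode, dev)
    # the mapper alias that mount creates and findmnt expects
    escaped = f"{vg.replace('-', '--')}-{lv.replace('-', '--')}"
    os.makedirs(os.path.join(devroot, "mapper"), exist_ok=True)
    os.mknod(os.path.join(devroot, "mapper", escaped), mode, dev)
```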
When generating the original test certs, no `-days` parameter was
passed, which resulted in a too low `notAfter` value.
This commit fixes this by using 100 years, and also updates the README:
```
$ openssl x509 -enddate -noout -in test/data/certs/cert1.pem
notAfter=Aug 2 10:42:40 2124 GMT
$ openssl x509 -enddate -noout -in test/data/certs/cert2.pem
notAfter=Aug 2 10:42:45 2124 GMT
```
This fixes a test failure in https://github.com/osbuild/osbuild/pull/1819
for the `test_curl_download_many_mixed_certs` test.
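For reference, a regeneration command along these lines produces such
certs (key size and subject assumed):
```
$ openssl req -x509 -newkey rsa:2048 -nodes -days 36500 \
    -keyout key1.pem -out cert1.pem -subj "/CN=cert1"
```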
The libdir is passed down for sources but it is never used in
any of our sources. As this is confusing and we want to eventually
support multiple libdirs, remove this code.
It looks like the libdir for sources was added a long time ago in 8423da3,
but there is no indication if/how it is/was supposed to get used, and
AFAICT from going over the git history it was never used.
SourceService:dispatch() never sends "libdir" to the actual sources,
so it is not even technically an API break.
This commit extracts a helper `get_parent_path()` that is unit
tested, and also uses the generated parent_path for the call
to manage_devices_file, to be consistent with the existing behavior
of only including the device that actually contains the VG.
This is needed for bootc where all mounts need to be from the same
physical disk/loop so that bootupd works. The idea is that in the
manifest the new option `vg_partnum` is added and the parent VG
is found via the partition number of the full image similar to
the `partnum` from https://github.com/osbuild/osbuild/pull/1501
A manifest using this feature looks like this:
```json
"devices": {
"disk": {
"type": "org.osbuild.loopback",
"options": {
"filename": "disk.raw",
"partscan": true
}
},
"rootlv": {
"type": "org.osbuild.lvm2.lv",
"parent": "disk",
"options": {
"volume": "rootlv",
"vg_partnum": 4
}
}
}
```
Co-authored-by: Michael Vogt <mvogt@redhat.com>
storage_conf was always None, so the loading was called every time.
This never crashed, because conf was always being set, but the caching
wasn't working properly regardless.
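The pattern being fixed is the classic lazy-load guard; sketched here
with assumed structure:
```python
storage_conf = None


def get_storage_conf(path):
    global storage_conf
    # before the fix, the module-level variable was never assigned,
    # so it stayed None and the file was re-read on every call
    if storage_conf is None:
        with open(path, encoding="utf-8") as f:
            storage_conf = f.read()
    return storage_conf
```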
Add two unit tests for our toml util module.
- Write an object with util.toml, read it back with util.toml, and
compare the written and read objects.
- Write an object directly as a string, read it with util.toml, and
compare with an expected object.
A test that writes with util.toml, reads the result back as a string,
and verifies that string is difficult to do in a general way, because
each toml module we support writes files in a slightly different way.
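The round-trip variant looks roughly like this (assuming
`dump_to_file`/`load_from_file` style helpers on the util module;
hypothetical names):
```python
import tempfile

from osbuild.util import toml


def test_toml_roundtrip():
    obj = {"section": {"key": "value", "num": 42}}
    with tempfile.NamedTemporaryFile(suffix=".toml") as tmp:
        toml.dump_to_file(obj, tmp.name)  # hypothetical helper
        assert toml.load_from_file(tmp.name) == obj
```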