This change allows using the more advanced features of bash, like array operations (e.g. `${@:2}` to drop the first argument) or similar. On Fedora/RHEL this is a no-op as `sh` already points to `bash` (afaik).
We currently use the absolute path of these binaries in the helper. This has some advantages, but given that we control the contents of PATH in general it seems unnecessary.
We are also slightly inconsistent about this in the codebase but favor the non-absolute path version. A quick count:
```
$ git grep '"chroot"'|wc -l
13
$ git grep '"/usr/sbin/chroot"'|grep -v test_|wc -l
8
```
For `mount` and `umount` it seems this is the only place that uses the absolute path.
It's not an important change, but it has the nice property that it allows us to use e.g. `testutil.mock_command()` in our tests, and it would be nice to be consistent.
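Purely as an illustration of the testing angle (this is not the actual helper code, and `testutil.mock_command()` may work differently): with a bare binary name, a test can shadow the real command simply by putting a fake executable first on PATH:
```
import os
import stat
import subprocess
import tempfile

def run_with_fake_chroot():
    """Shadow a bare "chroot" call by prepending a fake binary to PATH."""
    with tempfile.TemporaryDirectory() as fakebin:
        fake = os.path.join(fakebin, "chroot")
        with open(fake, "w", encoding="utf-8") as f:
            f.write('#!/bin/sh\necho "fake chroot: $*"\n')
        os.chmod(fake, stat.S_IRWXU)
        env = os.environ.copy()
        env["PATH"] = fakebin + os.pathsep + env.get("PATH", "")
        # "chroot" resolves to the fake first; "/usr/sbin/chroot" would
        # bypass PATH entirely and could not be mocked this way.
        return subprocess.run(["chroot", "/some/tree", "true"], env=env,
                              check=True, capture_output=True, text=True)

print(run_with_fake_chroot().stdout)
```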
This commit moves the joining of path fragments from f-strings to pathlib and simplifies some of the map/filter/lambda expressions into more standard list comprehensions.
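A small, made-up before/after to show the kind of change meant here:
```
from pathlib import Path

libdir = "/usr/lib/osbuild"
names = ["org.osbuild.curl", None, "org.osbuild.skopeo"]

# before: f-string joining plus map/filter/lambda
old = list(map(lambda n: f"{libdir}/sources/{n}", filter(lambda n: n, names)))

# after: pathlib joining inside a plain list comprehension
new = [Path(libdir) / "sources" / n for n in names if n]

assert old == [str(p) for p in new]
```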
The '_hawkey.Reldep' object has no 'name' attribute in the version shipped on RHEL-8. Add code to handle this situation in case it happens, defaulting to the named attributes when they are available.
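The fallback amounts to something like this (a minimal sketch; the actual attribute handling in the commit may differ):
```
def reldep_name(reldep):
    """Prefer the named attribute; fall back to str() on old hawkey (RHEL-8)."""
    name = getattr(reldep, "name", None)
    if name is not None:
        return name
    # e.g. "python3-dnf >= 4.0" -> "python3-dnf"
    return str(reldep).split(" ", 1)[0]
```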
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Since the `with_sbom` variable was used only in a single place, we can simplify the code (and remove one extra line) by using the condition directly in the `if` statement.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend osbuild-depsolve-dnf to return JSON with an SPDX SBOM that corresponds to the depsolved package set, if it has been requested. For now, only DNF4 is supported.
Cover the new functionality with a unit test.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This will allow validating request arguments in the solver method in a
different way for dnf4 and dnf5 and raising an exception if needed.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add an implementation of a standard-agnostic model for SBOMs and a simple SPDX v2.3 model. Also add convenience functions for converting a DNF4 package set to the standard-agnostic model and for converting that to the SPDX model.
Cover the functionality with unit tests.
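To give an idea of the shape of such models (all class and field names below are hypothetical, not the actual module API):
```
from dataclasses import dataclass, field
from typing import List

@dataclass
class Package:
    """Standard-agnostic representation of one depsolved package."""
    name: str
    version: str
    depends_on: List[str] = field(default_factory=list)

def to_spdx(packages: List[Package]) -> dict:
    """Convert the agnostic model into a minimal SPDX v2.3-style document."""
    return {
        "spdxVersion": "SPDX-2.3",
        "packages": [
            {"SPDXID": f"SPDXRef-{p.name}",
             "name": p.name,
             "versionInfo": p.version}
            for p in packages
        ],
        "relationships": [
            {"spdxElementId": f"SPDXRef-{p.name}",
             "relationshipType": "DEPENDS_ON",
             "relatedSpdxElement": f"SPDXRef-{dep}"}
            for p in packages for dep in p.depends_on
        ],
    }
```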
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
It's sometimes useful to set up a loop device for an already formatted
disk/filesystem image to derive new artifacts from it. In that case, we
want to make sure it's impossible to modify its contents in any way in
that process, both for our own purposes and for other stages operating
on it.
Notably, mounting some filesystems (like XFS) read-only still seems to touch the disk.
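For context, the protection is enforced at the loop-device level rather than at mount time; with util-linux this is roughly (the real device code does more than this):
```
import subprocess

def attach_readonly(image: str) -> str:
    """Attach a formatted image read-only and return the loop device path."""
    # --read-only makes the loop device itself reject writes, so even
    # filesystems that touch the disk on an "ro" mount (e.g. XFS log
    # replay) cannot modify the image.
    out = subprocess.run(
        ["losetup", "--find", "--show", "--read-only", image],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()
```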
The existing jsoncomm is a thing of beauty. For very big arguments, however, the `SOCK_SEQPACKET` it uses hits the limits of the kernel network buffer size (see also [0]). This led to various workarounds in #824, #1331 and #1836, where parts of the request are encoded as part of the JSON method call and parts are passed via a side channel using fd-passing.
This commit changes the code so that the fd channel is automatically
and transparently created and the workarounds are removed. A test
is added that ensures that very big messages can be passed.
[0] https://github.com/osbuild/osbuild/pull/1833
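The rough idea (a sketch, not the actual jsoncomm implementation) is to spill oversized payloads into an anonymous file and pass only its descriptor over the SEQPACKET socket:
```
import json
import os

MAX_INLINE = 64 * 1024  # assumed inline limit; the real threshold differs

def encode_message(payload):
    """Return (small_json_bytes, fds) suitable for sendmsg() with SCM_RIGHTS."""
    data = json.dumps(payload).encode("utf-8")
    if len(data) <= MAX_INLINE:
        return json.dumps({"inline": payload}).encode("utf-8"), []
    # too big for the kernel socket buffer: stash it in a memfd, pass the fd
    fd = os.memfd_create("jsoncomm-payload")
    os.write(fd, data)
    os.lseek(fd, 0, os.SEEK_SET)
    return json.dumps({"fd-payload": 0}).encode("utf-8"), [fd]

def decode_message(data, fds):
    msg = json.loads(data)
    if "inline" in msg:
        return msg["inline"]
    # read the whole payload back from the passed file descriptor
    with os.fdopen(fds[msg["fd-payload"]], "rb") as f:
        return json.loads(f.read())
```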
The current error message when an export is not found could be improved by printing which exports are actually available, making it easier for the user to e.g. spot typos.
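Something along these lines (exact wording and exception type are illustrative):
```
def find_export(available_exports, requested):
    if requested not in available_exports:
        listed = ", ".join(sorted(available_exports)) or "(none)"
        raise ValueError(
            f"export {requested!r} not found in manifest, "
            f"available exports: {listed}")
    return requested
```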
osbuild currently always prints some non-JSON output even when run with `--monitor=JSONSeqMonitor`, because of the unconditional `print()`/`sys.stdout.write()` calls in `main_cli.py`.
This commit adds a new `-q` option to silence this so that something like osbuild-composer can run `osbuild -q --monitor=JSONSeqMonitor` to get pure json-seq output during the build.
The use case is to run `osbuild --monitor-fd` from e.g. bib and osbuild-composer so that we get pure JSON from the monitor fd, and anything that goes to std{out,err} can be logged as it is most likely error output.
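A minimal sketch of the intended behaviour (only `-q` comes from this commit, the rest is scaffolding):
```
import argparse
import sys

parser = argparse.ArgumentParser(prog="osbuild")
parser.add_argument("--monitor", default="NullMonitor")
parser.add_argument("-q", dest="quiet", action="store_true",
                    help="suppress non-monitor output on stdout")
args = parser.parse_args(["-q", "--monitor=JSONSeqMonitor"])

if not args.quiet:
    # the previously unconditional chatter now only happens without -q,
    # so `osbuild -q --monitor=JSONSeqMonitor` emits pure json-seq
    sys.stdout.write("starting build\n")
```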
For consistency, use subprocess.run() with check=True for the calls that
were previously using subprocess.check_call().
Update the affected tests to match.
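I.e. the mechanical change, with a placeholder command:
```
import subprocess

cmd = ["true"]  # placeholder, not one of the actual call sites

# before
subprocess.check_call(cmd)
# after: same semantics, raises CalledProcessError on non-zero exit
subprocess.run(cmd, check=True)
```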
If one of the chroot mounts fails to unmount, keep iterating so that we still unmount the rest instead of stopping.
Print an error message with the failed mounts, but don't fail the build.
Since failing to unmount doesn't fail the exiting of the context, and
the context itself doesn't know what will be running in the chroot,
do a lazy unmount.
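The teardown loop then looks roughly like this (mount bookkeeping simplified):
```
import subprocess

def teardown_mounts(mountpoints):
    """Lazily unmount everything, collect failures, never raise."""
    failed = []
    for mnt in reversed(mountpoints):  # unmount in reverse mount order
        # --lazy detaches the mount even if something still holds it busy;
        # the detach completes once the mount is no longer in use
        res = subprocess.run(["umount", "--lazy", mnt], check=False)
        if res.returncode != 0:
            failed.append(mnt)  # keep going, don't stop at the first failure
    if failed:
        print(f"error unmounting paths from chroot: {failed}")
```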
The libdir is passed down for sources but it is never used in any of our sources. As this is confusing and we want to eventually support multiple libdirs, remove this code.
It looks like the libdir for sources was added a long time ago in 8423da3, but there is no indication of if/how it is or was supposed to be used, and AFAICT from going over the git history it was never used.
SourceService.dispatch() never sends "libdir" to the actual sources, so it is not even technically an API break.
Add a new util module called host which is used for functions that are
meant for interactions with the host. These functions should not be
used in stages.
The containers.get_host_storage() function is renamed to
host.get_container_storage() for clarity, since it is no longer
namespaced under containers.
The containers.storage.conf stage writes a header explaining what the configuration is doing and where it comes from. It also supports adding extra comments via stage options, which we need to keep supporting. Add support for writing comments at the top of the file in the toml.dump_to_file() function.
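A sketch of what that could look like (the `header` keyword and the use of tomli_w here are illustrative, not necessarily the real signature):
```
import tomli_w  # assumes a write-capable toml module is available

def dump_to_file(data, path, header=None):
    """Write TOML to `path`, optionally preceded by '#' comment lines."""
    with open(path, "w", encoding="utf-8") as f:
        for line in header or []:
            f.write(f"# {line}\n")
        if header:
            f.write("\n")
        f.write(tomli_w.dumps(data))
```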
The toml module situation in Python is a bit of a mess. Different
distro versions have different modules packaged or built-in, sometimes
with different capabilities (no writing). Since we need to support
reading and writing toml files both on the host (osbuild internals,
sources, inputs) and in the build root (stages), let's centralise the
import decision making in an internal utility module that covers all
cases.
Two of the modules we might import (tomli and tomllib) don't support
writing, so we need to either import a separate module (tomli_w) or
raise an exception when dump() is called without a write-capable module.
The tomli and tomllib modules require files be opened in binary mode
(not text) while the others require text mode. So we can't wrap the
toml.load() and toml.dump() functions directly; the caller doesn't know
which module it will be using. Let's keep track of the mode based on
which import succeeded and have our functions open the files as needed.
The wrapper functions are named load_from_file() and dump_to_file() to
avoid confusion with the load() and dump() functions that take a file
object.
See also #1847
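A condensed sketch of that import and mode bookkeeping (simplified relative to the real utility module):
```
# try the known toml modules in order and remember how the winner
# wants its files opened and whether it can write at all
try:
    import tomllib as _toml          # Python 3.11+, read-only, binary mode
    _mode, _writer = "rb", None
except ImportError:
    try:
        import tomli as _toml        # read-only, binary mode
        _mode, _writer = "rb", None
    except ImportError:
        import toml as _toml         # read/write, text mode
        _mode, _writer = "r", _toml

if _writer is None:
    try:
        import tomli_w as _writer    # optional writer to pair with tomli/tomllib
    except ImportError:
        pass

def load_from_file(path):
    with open(path, _mode) as f:
        return _toml.load(f)

def dump_to_file(data, path):
    if _writer is None:
        raise NotImplementedError("no write-capable toml module available")
    write_mode = "w" if _writer is _toml else "wb"  # toml wants text, tomli_w bytes
    with open(path, write_mode) as f:
        _writer.dump(data, f)
```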
New chroot utility module that sets up a tree with the virtual filesystems needed for running commands in the root tree in an environment similar to the build root.
This is needed for some stages, but may also be used for all chroot
calls to unify the setup and teardown of the root environment.
The Chroot context class was previously part of the org.osbuild.dracut
stage, which was the first stage to need this setup.
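In rough outline (which virtual filesystems get mounted, and the exact method names, are illustrative here):
```
import subprocess

class Chroot:
    """Bind the usual virtual filesystems into a tree and run commands in it."""

    def __init__(self, root):
        self.root = root
        self._mounts = []

    def __enter__(self):
        for fstype, target in [("proc", "proc"), ("sysfs", "sys"), ("devtmpfs", "dev")]:
            dest = f"{self.root}/{target}"
            subprocess.run(["mount", "-t", fstype, fstype, dest], check=True)
            self._mounts.append(dest)
        return self

    def run(self, cmd, **kwargs):
        return subprocess.run(["chroot", self.root, *cmd], **kwargs)

    def __exit__(self, *exc):
        for dest in reversed(self._mounts):
            # lazy unmount, see the commit about chroot teardown above
            subprocess.run(["umount", "--lazy", dest], check=False)
```
Usage would then be something like `with Chroot(tree) as chroot: chroot.run(["rpm", "-qa"], check=True)`.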
We recently hit the issue that `osbuild` crashed with:
```
Unable to decode response body "Traceback (most recent call last):
File \"/usr/bin/osbuild\", line 33, in <module>
sys.exit(load_entry_point('osbuild==124', 'console_scripts', 'osbuild')())
File \"/usr/lib/python3.9/site-packages/osbuild/main_cli.py\", line 181, in osbuild_cli
r = manifest.build(
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 477, in build
res = pl.run(store, monitor, libdir, debug_break, stage_timeout)
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 376, in run
results = self.build_stages(store,
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 348, in build_stages
r = stage.run(tree,
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 213, in run
data = ipmgr.map(ip, store)
File \"/usr/lib/python3.9/site-packages/osbuild/inputs.py\", line 94, in map
reply, _ = client.call_with_fds(\"map\", {}, fds)
File \"/usr/lib/python3.9/site-packages/osbuild/host.py\", line 373, in call_with_fds
kind, data = self.protocol.decode_message(ret)
File \"/usr/lib/python3.9/site-packages/osbuild/host.py\", line 83, in decode_message
raise ProtocolError(\"message empty\")
osbuild.host.ProtocolError: message empty
cannot run osbuild: exit status 1" into osbuild result: invalid character 'T' looking for beginning of value
...
input/packages (org.osbuild.files): Traceback (most recent call last):
input/packages (org.osbuild.files): File "/usr/lib/osbuild/inputs/org.osbuild.files", line 226, in <module>
input/packages (org.osbuild.files): main()
input/packages (org.osbuild.files): File "/usr/lib/osbuild/inputs/org.osbuild.files", line 222, in main
input/packages (org.osbuild.files): service.main()
input/packages (org.osbuild.files): File "/usr/lib/python3.11/site-packages/osbuild/host.py", line 250, in main
input/packages (org.osbuild.files): self.serve()
input/packages (org.osbuild.files): File "/usr/lib/python3.11/site-packages/osbuild/host.py", line 284, in serve
input/packages (org.osbuild.files): self.sock.send(reply, fds=reply_fds)
input/packages (org.osbuild.files): File "/usr/lib/python3.11/site-packages/osbuild/util/jsoncomm.py", line 407, in send
input/packages (org.osbuild.files): n = self._socket.sendmsg([serialized], cmsg, 0)
input/packages (org.osbuild.files): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
input/packages (org.osbuild.files): OSError: [Errno 90] Message too long
```
The underlying issue is that the reply of the `map()` call is too
big for the buffer that `jsoncomm` uses. This problem existed before
for the args of map and was fixed by introducing a temporary file
in https://github.com/osbuild/osbuild/pull/1331 (and similarly
before in https://github.com/osbuild/osbuild/pull/824).
This commit writes the return values into a file as well. This should fix the crash above and also makes the function more symmetrical.
Alternative/complementary version of
https://github.com/osbuild/osbuild/pull/1833
Closes: HMS-4537
When `jsoncomm` fails because the message is too big it currently
does not indicate just how big the message was. This commit adds
this information so that it's easier for us to determine what to
do about it.
We could also include a pointer to `/proc/sys/net/core/wmem_default`, but it seems we don't want to require fiddling with that, so let's not do it for now.
See also https://github.com/osbuild/osbuild/pull/1838
A wrong exception type was returned for the same kind of issue compared to the DNF4 version. Specifically, the DNF4 version returned `MarkingErrors`, while the DNF5 version returned `DepsolveError`, when a non-existent package was specified in the depsolve request. Make the behavior consistent and return `MarkingErrors` from the DNF5 version as well.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The returned error reason didn't contain any details after the merge
with DNF4 version. The reason is that previously, the actual exception
returned by the DNF library was appended to the error reason. However,
now it is wrapped by a custom `MarkingErrors` exception, which didn't
have any details set. The wrapped exception in the `__cause__`
property was not taken into account. Revert to the original behavior
by reusing the wrapped exception message as the message for the
wrapper exception.
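In code, the fix boils down to something like this (exception and call names are stand-ins for the real ones):
```
class MarkingErrors(Exception):
    """Stand-in for the shared depsolver error type."""

def depsolve(names):
    try:
        raise RuntimeError(f"no package matches {names[0]!r}")  # fake dnf5 error
    except RuntimeError as e:
        # reuse the wrapped exception's message so the reported reason is
        # not empty; the original stays reachable via __cause__
        raise MarkingErrors(str(e)) from e

try:
    depsolve(["definitely-not-a-package"])
except MarkingErrors as e:
    print("reason:", e, "| cause:", type(e.__cause__).__name__)
```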
Extend the unit test to allow testing of depsolving failures.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This commit includes the used solver in the dnfjson reply. This is mostly informational (e.g. in service logs) but also useful in tests to ensure that the expected solver was really run.
Note that this needs https://github.com/osbuild/images/pull/723
first.
This moves the dnf and dnf5 code into a new osbuild module called solver. The dnf-specific code is in dnf.py and the dnf5 code is in dnf5.py.
At runtime the osbuild-depsolve-dnf script reads a config file from
/usr/lib/osbuild/solver.json and imports the selected solver. This
currently just contains a 'use_dnf5' bool but can be extended to support
other configuration options or depsolvers.
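Roughly (the module path under osbuild is assumed here):
```
import importlib
import json

SOLVER_CONFIG = "/usr/lib/osbuild/solver.json"

def load_solver():
    """Pick the depsolver implementation based on the installed config."""
    try:
        with open(SOLVER_CONFIG, encoding="utf-8") as f:
            cfg = json.load(f)
    except FileNotFoundError:
        cfg = {}
    name = "dnf5" if cfg.get("use_dnf5") else "dnf"
    # e.g. osbuild.solver.dnf or osbuild.solver.dnf5
    return importlib.import_module(f"osbuild.solver.{name}")
```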
At build time a config file is selected from tools/solver-dnf.json or tools/solver-dnf5.json and installed. Currently dnf5 is not installed; it will be added when dnf5 5.2.1.0 becomes available in rawhide (Fedora 41).
The error messages have been normalized since the top level functions in
osbuild-depsolve-dnf do not know which version of dnf is being used.
The existing code to record progress was a bit too naive. Instead of just counting the number of pipelines in a manifest to get the total steps, we need to look at the resolved pipelines.
With this fix `bib` will report the correct number of steps left when doing e.g. a qcow2 image build. Right now the number of steps is incorrect because the osbuild manifest contains pipelines for qcow2, vmdk, raw and ami, and all of them are currently considered steps that need to be completed. With this commit this is fixed.
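Conceptually the counting changes from "everything in the manifest" to "everything the requested export actually needs"; as a toy model (the real code works on the resolved pipeline objects):
```
def total_steps(pipelines, exports):
    """pipelines maps a pipeline name to the names it depends on."""
    needed = set()
    todo = list(exports)
    while todo:
        name = todo.pop()
        if name in needed:
            continue
        needed.add(name)
        todo.extend(pipelines.get(name, []))
    return len(needed)

# a manifest that can produce qcow2, vmdk, raw and ami from one image
pipelines = {"image": [], "qcow2": ["image"], "vmdk": ["image"],
             "raw": ["image"], "ami": ["raw"]}
print(total_steps(pipelines, ["qcow2"]))  # 2, not 5
```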
This commit adds a new `https_serve_directory()` test helper and some custom self-signed and worthless certs that are used during testing. They are not dynamically generated, to avoid the extra computation time during tests (but they could be).
Generated via:
```
$ openssl req -new -newkey rsa:2048 -nodes -x509 \
-subj "/C=DE/ST=Berlin/L=Berlin/O=Org/CN=localhost" \
-keyout "key1.pem" -out "cert1.pem"
```
This will allow us to test `https` download URLs as well in e.g.
the curl source.
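A self-contained sketch of such a helper using only the standard library (the real helper and its signature may differ):
```
import functools
import http.server
import ssl
import threading

def https_serve_directory(directory, certfile, keyfile):
    """Serve `directory` over HTTPS on localhost; return (server, port)."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=directory)
    httpd = http.server.HTTPServer(("localhost", 0), handler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # e.g. cert1.pem/key1.pem
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd, httpd.server_address[1]
```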
When using a modern curl we can download multiple URLs in parallel, which avoids connection setup overhead and is generally more efficient. Use this when it's detected.
TODO: ensure both old and new curl are tested automatically via
the testsuite.
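Detection plus invocation could look roughly like this (the real source may detect and batch things differently; `--parallel` needs curl >= 7.66):
```
import subprocess

def curl_supports_parallel():
    out = subprocess.run(["curl", "--version"], check=True,
                         capture_output=True, text=True).stdout
    version = out.split()[1]                  # "curl 8.6.0 (...)" -> "8.6.0"
    major, minor = (int(x) for x in version.split(".")[:2])
    return (major, minor) >= (7, 66)

def download(urls, destdir):
    if curl_supports_parallel():
        # a single curl process fetches all URLs, reusing connections
        subprocess.run(["curl", "--parallel", "--remote-name-all", *urls],
                       cwd=destdir, check=True)
    else:
        for url in urls:                      # old curl: one process per URL
            subprocess.run(["curl", "--remote-name", url], cwd=destdir, check=True)
```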