Extract testing of SBOM support into a dedicated test case. There's no
added value in running all SBOM test cases for all types of depsolve
transactions.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Testing all repo config combos for each search test case does not
really increase the test coverage for repo config combos. It just
increases the run time of the test.
Move the repo config combos testing to a dedicated test case, which will
test search for two packages from two different repositories.
For the original `test_search()`, always use repo configs in the
request.
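For illustration, the dedicated test case could take roughly this shape (a sketch; `search()`, the fixture, the result structure and the package names are hypothetical, not the actual test suite API):
```
# Hypothetical sketch; helper, fixture and package names are illustrative.
def test_search_repo_config_combos(repo_servers):
    for repo_configs, root_dir in config_combos(repo_servers):
        res = search(
            ["pkg-from-repo-one", "pkg-from-repo-two"],
            repo_configs=repo_configs,
            root_dir=root_dir,
        )
        names = {pkg["name"] for pkg in res["packages"]}
        assert names == {"pkg-from-repo-one", "pkg-from-repo-two"}
```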
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Testing all repo config combos for each depsolve test case does not
really increase the test coverage for repo config combos. It just
increases the run time of the test.
Move the repo config combos testing to a dedicated test case, which will
test depsolving two packages from two different repositories.
For the original `test_depsolve()`, always use repo configs in the
request.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract the code that filters and composes repo servers for a test case
into a separate function. This enables reusing it in all places that did
the same thing. The duplication would only become more prominent as we
separate more test scenarios into dedicated test cases.
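The extracted helper could look roughly like this (a sketch; the server attributes and names are illustrative):
```
def repo_servers_for_test_case(all_servers, wanted_repos):
    """Filter the running repo servers down to the ones a test case needs."""
    servers = [srv for srv in all_servers if srv.name in wanted_repos]
    assert len(servers) == len(wanted_repos), "a requested repo server is missing"
    return servers
```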
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The `test_depsolve_result_api()` test case was parametrized based on
`dnf_config`, but in reality, the `depsolve()` call always used an
empty dict as `dnf_config`. Effectively, it was being tested three
times with DNF4.
In addition, don't pass optional arguments to `depsolve()`.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Modify `config_combos()` to return `repo_configs` and `root_dir` only if
they should really be used, and return `None` otherwise. Modify all
helper functions for dnf-depsolve API calls to add the relevant fields
to the request JSON only if the corresponding values are set. This makes
the test cleaner, since previously `root_dir` was always set.
The same applies to `dnf_config`, which could already be set to `None`,
so let's make it optional.
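A minimal sketch of the "only set fields that have values" pattern the helpers now follow (the field names are illustrative, not the actual request schema):
```
def make_request(command, repo_configs=None, root_dir=None, dnf_config=None):
    # Optional fields are added to the request JSON only when they are
    # actually set, instead of always sending e.g. root_dir.
    request = {"command": command}
    if repo_configs is not None:
        request["repos"] = repo_configs
    if root_dir is not None:
        request["root_dir"] = root_dir
    if dnf_config is not None:
        request["dnf_config"] = dnf_config
    return request
```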
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This commit limits the output in the json pipeline to a "reasonable"
length. We ran into issues (e.g. [0]) from a combination of a stage
that produces tons of output (dracut, ~256 kB, see issue#1976) and
the consumer ("images" osbuild/monitor.go) that uses a golang scanner
with a default max buffer of 64 kB before erroring. So limit it
here.
The stage result sent via JSON is mostly for information, and any error
will most likely be at the end. Plus, consumers can collect the
individual log lines on their own if desired via the "log()" messages
that are streamed in "real time", with the added benefit that e.g.
timestamps can be added to the logs.
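A minimal sketch of such a cap (the limit constant is hypothetical; the tail is kept because errors tend to be at the end):
```
# Hypothetical limit, comfortably below the consumer's 64 kB scanner buffer.
MAX_OUTPUT_SIZE = 32 * 1024

def truncate_output(output, limit=MAX_OUTPUT_SIZE):
    """Keep only the tail of overly long stage output."""
    if len(output) <= limit:
        return output
    return "[... output truncated ...]\n" + output[-limit:]
```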
[0] https://issues.redhat.com/browse/RHEL-77988
This commit replaces the `/usr/bin/logger` binary in the dracut
chroot with a bind mount to `/usr/bin/true` to silence the spam
that we get from dracut during initramfs generation:
```
logger: socket /dev/log: No such file or directory
```
Unfortunately I could not find a nicer way; it seems it is
not possible to simply pass `sysloglvl=0` via the command line
or an environment variable.
The extra complication here is that the dracut stage mounts
`devtmpfs` which will likely include:
```
/dev/log -> /run/systemd/journal/dev-log
```
but of course inside this chroot there is no `/run` which
leads to these messages.
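For illustration, the workaround boils down to something like this (a sketch, not the actual stage code; the bind mount would also need to be undone on teardown):
```
import subprocess

def silence_dracut_logger(root):
    # Bind-mount /usr/bin/true over the chroot's logger binary so that
    # dracut's logger invocations become no-ops.
    subprocess.run(
        ["mount", "--bind", "/usr/bin/true", f"{root}/usr/bin/logger"],
        check=True,
    )
```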
Closes: https://github.com/osbuild/osbuild/issues/1976
Modify the function to handle messages about skipped binary fcontext
files and skip them. These started to appear on c10s. Extend the unit
test to cover this new scenario.
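A sketch of the skip logic, with a made-up message pattern (the exact wording on c10s may differ):
```
import re

# Made-up pattern; the real message text may differ.
SKIPPED_BINARY_FCONTEXT_RE = re.compile(r"Skipping.*file_contexts\.bin")

def filter_fcontext_messages(lines):
    for line in lines:
        if SKIPPED_BINARY_FCONTEXT_RE.search(line):
            continue  # ignore notices about skipped binary fcontext files
        yield line
```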
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add an additional option called `kernel_line_size` to allow setting a
maximum cmdline size check value for custom kernels or other
restrictions. This overrides the arch defaults. If it is not set, the
size map is checked, and if the current architecture is not in the map,
the check falls back to 4096, which is the maximum value allowed for
COMMAND_LINE_SIZE.
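The resolution order, as a sketch (the map contents are illustrative):
```
COMMAND_LINE_SIZE_MAX = 4096  # max value allowed for COMMAND_LINE_SIZE

# Illustrative per-arch defaults; the real map may differ.
CMDLINE_SIZE_BY_ARCH = {"x86_64": 2048, "aarch64": 2048}

def resolve_cmdline_limit(options, arch):
    # explicit option > per-arch default > 4096 fallback
    explicit = options.get("kernel_line_size")
    if explicit is not None:
        return explicit
    return CMDLINE_SIZE_BY_ARCH.get(arch, COMMAND_LINE_SIZE_MAX)
```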
Signed-off-by: Albert Esteve <aesteve@redhat.com>
Add a check to ensure that the size of the parameters does not exceed
the maximum kernel cmdline size. Otherwise, the parameters would be
truncated and the command line would fail.
The size is arch-dependent. In order not to over-complicate looking up
the value in the kernel files (which will probably not be installed in
most cases), use a map with values for common architectures. If the
architecture is not found in the map, default to 4096, which is the
maximum possible size for COMMAND_LINE_SIZE.
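The check itself then reduces to comparing the rendered parameters against the resolved limit; a sketch:
```
def check_cmdline_size(cmdline, limit):
    # Fail early instead of letting the kernel silently truncate.
    if len(cmdline) > limit:
        raise ValueError(
            f"kernel cmdline is {len(cmdline)} bytes, "
            f"exceeding the {limit}-byte limit"
        )
```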
Signed-off-by: Albert Esteve <aesteve@redhat.com>
Rework the function to actually fail in case it can't analyze the
provided ISO. Previously, the tool would silently fail to analyze the
ISO and generate an empty report. Fix this.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a unit test covering failures in `analyse_iso()`. The function
should fail if it can't analyze the provided ISO.
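The test could look roughly like this (the import path and the expected exception type are assumptions):
```
import pytest

from image_info import analyse_iso  # hypothetical import path

def test_analyse_iso_failure(tmp_path):
    # Not a valid ISO 9660 image, so analysis must fail loudly.
    bogus_iso = tmp_path / "bogus.iso"
    bogus_iso.write_bytes(b"definitely not an ISO")
    with pytest.raises(RuntimeError):  # assumed exception type
        analyse_iso(bogus_iso)
```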
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
For the purpose of using this tool in tests (specifically for manifest
tests where we diff image-info reports), it is important that the tool
exits with a non-zero value if the final report is empty.
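A minimal sketch of the behavior (the helper name is illustrative):
```
import json
import sys

def emit_report(report):
    # An empty report is now an error instead of printing "{}".
    if not report:
        print("image-info: empty report", file=sys.stderr)
        sys.exit(1)
    print(json.dumps(report, indent=2))
```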
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Make sure that the shell does not interpret the text within the
back-quotes as a command to execute in a sub-shell.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The exception was caught just to print it and exit with a non-zero
return code. Let's not catch it at all.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Explicitly specify the workdir when running manifest_tests, to make
potential debugging of the test case on a CI runner easier (because
otherwise the workdir would get removed after a failing test).
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
In case the workdir is not provided to the script explicitly as an
argument, the script will use a temporary directory under /var/tmp as
its workdir. In that case, the workdir will be deleted on exit. This
should mitigate potentially confusing behavior when executing the script
multiple times with different arguments while never specifying the
workdir.
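A sketch of the fallback, assuming a hypothetical helper name:
```
import tempfile

def resolve_workdir(args):
    if args.workdir:
        return args.workdir, None
    # Self-cleaning temporary directory under /var/tmp; keeping the
    # handle alive defers the cleanup until the script exits.
    tmpdir = tempfile.TemporaryDirectory(dir="/var/tmp", prefix="manifest-tests-")
    return tmpdir.name, tmpdir
```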
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract the opening of LVM LV devices from `discover_lvm()` into the
`OSBuildDeviceManager` class as the `open_lvm_lv()` method.
`open_lvm_lv()` returns the path to the opened device in the devpath set
in the underlying `DeviceManager`. The `org.osbuild.lvm2.lv`
implementation takes responsibility for creating and managing device
nodes. This means that we don't need to create any device nodes directly
in `osbuild-image-info`, especially not in the current working
directory. This was previously causing issues when inspecting two images
with different LVM layouts in sequence.
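A rough skeleton of the resulting shape (a sketch only; the real osbuild device API is not reproduced here):
```
class OSBuildDeviceManager:
    """Skeleton; the real class wraps osbuild's devices.DeviceManager."""

    def __init__(self, manager):
        self.manager = manager  # the underlying devices.DeviceManager

    def loop_open(self, image_path):
        raise NotImplementedError  # moved into the class by the commit below

    def open_lvm_lv(self, parent, vg_name, lv_name):
        # Delegates device-node creation to org.osbuild.lvm2.lv and
        # returns the path to the opened LV in the manager's devpath.
        raise NotImplementedError
```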
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a new class `OSBuildDeviceManager`, which wraps
`devices.DeviceManager`, so that we can consolidate in it all code that
opens devices using osbuild. As the first step, move the `loop_open()`
function into the class.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Rename the function for naming consistency and always include the actual
error from `pvdisplay` when raising RuntimeError.
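For illustration (the function name here is hypothetical):
```
import subprocess

def lvm_vg_for_pv(device):
    # pvdisplay's stderr now ends up in the RuntimeError message.
    res = subprocess.run(
        ["pvdisplay", "-C", "--noheadings", "-o", "vg_name", device],
        capture_output=True, text=True, check=False,
    )
    if res.returncode != 0:
        raise RuntimeError(f"pvdisplay failed for {device}: {res.stderr.strip()}")
    return res.stdout.strip()
```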
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract the code that discovers LVM LV names for a given VG from
`discover_lvm()` into a separate function, `lvm_lvs_for_vg()`. This
improves the readability of the code. In addition, some values returned
by the `lvdisplay` invocation were never used; don't request them and
simplify the code. Rename the variables that hold LV names to clearly
express that.
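A sketch of the extracted helper (the output-parsing details are illustrative):
```
import subprocess

def lvm_lvs_for_vg(vg_name):
    # Request only the LV names; the other lvdisplay columns were
    # never used.
    res = subprocess.run(
        ["lvdisplay", "-C", "--noheadings", "-o", "lv_name", vg_name],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in res.stdout.splitlines() if line.strip()]
```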
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Per discussion in the team, we see little value in rebuilding RHEL-8.10
images on RHEL-8.10 for the purpose of manifest testing in osbuild. So
let's not do that anymore.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>