Add a new script that parses the Schutzfile for repository snapshot
dates and updates any vars.ipp file found in test/data/manifests/ to
match the snapshot date for the same distro.
After modifying the vars.ipp files, it runs `sudo make test-data` to
regenerate all test manifests and then, for each modified manifest,
generates the new diff.json for that stage test.
A few things to note:
- The distro detection for each vars.ipp file is partially
heuristic-based. It assumes that the first component of the filename
is the distribution name. This is true for our current files, but
it's not a hard rule. The script will fail with an error if the first
component of a filename is not a valid distro name.
- The script uses ruamel.yaml instead of the standard pyyaml.
ruamel.yaml is much better at preserving the structure of the original
yaml file during a load-modify-dump and provides more ways of
controlling indentation and wrapping. The package will need to be
installed in any runner that calls this script (a minimal usage
sketch follows these notes).
- This script will eventually become part of a GitHub workflow that is
dispatched from the rpmrepo snapshot creation job. When that happens,
it might be changed to take snapshot dates as arguments rather than
reading them from the Schutzfile.
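To make the notes above more concrete, here is a minimal sketch of the
per-file update step. The known-distro set, the dash-separated filename
convention and the `snapshot-date` key are assumptions made for
illustration, not the script's actual interface:

```python
import sys
from pathlib import Path

from ruamel.yaml import YAML

KNOWN_DISTROS = {"fedora", "centos", "rhel"}  # assumption: taken from the Schutzfile

def distro_for_vars_file(path: Path) -> str:
    # heuristic: the first dash-separated component of the filename is the distro
    distro = path.name.split("-")[0]
    if distro not in KNOWN_DISTROS:
        sys.exit(f"cannot determine distro for {path}: {distro!r} is not a known distro")
    return distro

def update_vars_file(path: Path, snapshot_date: str) -> None:
    yaml = YAML()  # round-trip mode preserves comments, ordering and quoting
    yaml.indent(mapping=2, sequence=4, offset=2)
    with path.open(encoding="utf-8") as f:
        data = yaml.load(f)
    data["snapshot-date"] = snapshot_date  # key name is an assumption
    with path.open("w", encoding="utf-8") as f:
        yaml.dump(data, f)
```

The round-trip `YAML()` loader is what keeps the structure of the
vars.ipp files intact across the load-modify-dump.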
Extend the script to support specifying the data encoding. Keep
'base64' as the default encoding.
Add support for 'lzma+base64' encoding.
Also use the 'base64' module instead of the 'binascii' module for base64
encoding. This is consistent with what the actual source implementation
uses.
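A minimal sketch of what the encoding support amounts to; the function
name and signature are illustrative, not the script's actual interface:

```python
import base64
import lzma

def encode_data(data: bytes, encoding: str = "base64") -> str:
    """Encode raw bytes for embedding in test data, defaulting to plain base64."""
    if encoding == "base64":
        return base64.b64encode(data).decode("ascii")
    if encoding == "lzma+base64":
        # compress first, then base64-encode the compressed stream
        return base64.b64encode(lzma.compress(data)).decode("ascii")
    raise ValueError(f"unsupported encoding: {encoding}")
```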
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This commit adds an error message if no repositories are
defined in the dnfjson query. We had the issue in
https://github.com/osbuild/bootc-image-builder/issues/922
that in a RHEL bootc container no repositories are defined.
The resulting error is quite confusing, as it complains about
an error marking packages, which is technically correct but
hides the root of the problem.
With this check we can construct a more useful error
message in the higher layers.
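A rough sketch of the detection; the request layout and the error type
are assumptions about the dnf-json interface, not the actual code:

```python
class NoReposError(Exception):
    """Raised when a depsolve request arrives without any repositories."""

def check_repositories(request: dict) -> None:
    # assumption: repositories live under arguments/repos in the request JSON
    repos = request.get("arguments", {}).get("repos", [])
    if not repos:
        raise NoReposError(
            "no repositories defined in the depsolve request; "
            "cannot mark packages for installation")
```

Higher layers can then map this specific error to a clear user-facing
message instead of the generic packaging error.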
Drop `module_platform_id` as it is now optional and none of
our tests uses it (i.e. there is no observable difference when
it is missing).
Once we start using it, we will need to add it back (and maybe a
"with_platform_id" parameter on top, so that both the with and
without platform_id cases are tested).
PLATFORM_ID was retired in fedora-43 [0] and it
seems like it was always somewhat optional. So let's make
it optional for real to avoid failing to build fedora-43
images.
[0] https://fedoraproject.org/wiki/Changes/Drop_PLATFORM_ID
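Conceptually, the change boils down to only applying the option when a
value is present. `module_platform_id` is a real dnf configuration
option, but the helper below is just an illustrative sketch:

```python
def apply_module_platform_id(base_conf, module_platform_id=None):
    # PLATFORM_ID is gone on fedora-43, so treat the value as optional
    # and leave dnf's own default alone when nothing was configured
    if module_platform_id:
        base_conf.module_platform_id = module_platform_id
```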
Allow passing a custom license index db file for SBOM generation by
specifying it in the solver configuration.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Instead of completely overriding the default solver configuration with
the one loaded from a file, just extend the default config. This
allows specifying only the desired config options while keeping the
defaults for the rest.
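In other words, the file only overrides the keys it mentions and
everything else keeps its default. A minimal sketch, assuming a JSON
config file and illustrative key names:

```python
import json

DEFAULT_SOLVER_CONFIG = {
    "use_system_repos": True,       # illustrative defaults, not the real ones
    "license_index_path": None,
}

def load_solver_config(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        user_config = json.load(f)
    # start from the defaults and only override what the file actually sets
    return {**DEFAULT_SOLVER_CONFIG, **user_config}
```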
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract testing of SBOM support into a dedicated test case. There's no
added value in running all SBOM test cases for all types of depsolve
transactions.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Testing all repo config combos for each search test case does not
really increase the test coverage for repo config combos. It just
increases the run time of the test.
Move the repo config combos testing to a dedicated test case, which will
test search for two packages from two different repositories.
For the original `test_search()`, always use repo configs in the
request.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Testing all repo config combos for each depsolve test case does not
really increase the test coverage for repo config combos. It just
increases the run time of the test.
Move the repo config combos testing to a dedicated test case, which will
test depsolving two packages from two different repositories.
For the original `test_depsolve()`, always use repo configs in the
request.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract the code that filters and composes repo servers for a test case
into a separate function. This enables reusing it in all places that did
the same thing. The duplication would become more prominent as we
separate more test scenarios into dedicated test cases.
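The extracted helper is roughly of this shape; the function name and
the test-case structure are assumptions made for illustration:

```python
def repo_servers_for_test_case(all_servers: dict, test_case: dict) -> list:
    """Return only the repo server configs a given test case wants to use."""
    wanted = set(test_case.get("repos", all_servers))  # default: every server
    return [server for name, server in all_servers.items() if name in wanted]
```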
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The `test_depsolve_result_api()` test case was parametrized based on
`dnf_config`, but in reality, the `depsolve()` call always used an
empty dict as `dnf_config`. Effectively, it was being tested three
times with DNF4.
In addition, don't pass optional arguments to `depsolve()`.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Modify `config_combos()` to return `repo_configs` and `root_dir`
only if they should really be used. Otherwise, return `None`. Modify
all helper functions for dnf-depsolve API calls to add the relevant
fields to the request JSON only if the corresponding values are set.
This makes the test cleaner, since previously the `root_dir` was
always set.
The same applies to `dnf_config`, which could be set to `None` already,
so let's make it optional.
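A sketch of how the request helpers can include the optional fields
only when a combo actually provides them; the exact request layout is
an assumption:

```python
def make_depsolve_request(packages, repo_configs=None, root_dir=None, dnf_config=None):
    request = {"command": "depsolve", "arguments": {"packages": packages}}
    # only include the optional fields when the config combo actually sets them
    if repo_configs is not None:
        request["arguments"]["repos"] = repo_configs
    if root_dir is not None:
        request["arguments"]["root_dir"] = root_dir
    if dnf_config is not None:
        request["arguments"]["dnf_config"] = dnf_config
    return request
```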
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Modify the function so that it handles messages about skipped binary
fcontext files and skips them. This started happening on c10s. Extend the
unit test to cover this new scenario.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Rework the function to actually fail in case it can't analyze the
provided ISO. Previously, the tool would silently fail to analyze the
ISO and generate an empty report. Fix this.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a unit test covering failures in analyse_iso(). The function
should fail if it can't analyze the provided ISO.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
For the purpose of using this tool in tests (specifically for manifest
tests where we diff image-info reports), it is important that the tool
exits with a non-zero value if the final report is empty.
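A sketch of the intended behaviour at the end of the tool;
`build_report()` is a hypothetical stand-in for the actual report
gathering:

```python
import json
import sys

def build_report() -> dict:
    # placeholder for the actual report gathering in osbuild-image-info
    return {}

def main() -> int:
    report = build_report()
    if not report:
        # manifest tests diff the reports, so an empty one must not pass silently
        print("osbuild-image-info: the final report is empty", file=sys.stderr)
        return 1
    json.dump(report, sys.stdout, indent=2)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```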
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract the opening of LVM LV devices from `discover_lvm()` into the
`OSBuildDeviceManager` class as the `open_lvm_lv()` method.
`open_lvm_lv()` returns the path to the opened device under the devpath
set in the underlying `DeviceManager`. The `org.osbuild.lvm2.lv`
implementation takes responsibility for creating and managing the
device nodes. This means that we don't need to create any device
nodes directly in `osbuild-image-info`, especially not in the current
working directory. This was previously causing issues when inspecting
two images with different LVM layouts in sequence.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a new class `OSBuildDeviceManager`, which wraps
`devices.DeviceManager`, so that we can consolidate in it all code
that opens devices using osbuild. As the first step, move the
`loop_open()` function into the class.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Rename the function for naming consistency and always include the actual
error from `pvdisplay` when raising RuntimeError.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extract the code that discovers LVM LV names for a given VG from
`discover_lvm()` into a separate function `lvm_lvs_for_vg()`. This
improves the readability of the code. In addition, some values returned
by the `lvdisplay` invocation were never used. Don't request them and
simplify the code. Rename variables that hold LV names to clearly
express that.
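A sketch of the extracted helper; the exact lvdisplay invocation in the
tool may differ, but requesting only the LV name keeps the parsing
trivial:

```python
import subprocess

def lvm_lvs_for_vg(vg_name: str) -> list:
    """Return the names of all LVs in the given VG."""
    result = subprocess.run(
        ["lvdisplay", "-C", "--noheadings", "-o", "lv_name", vg_name],
        check=True, capture_output=True, text=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]
```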
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This commit adds a small test that ensures that we notice when
the solver API adds new top-level keys. When this happens the
images library breaks and we need to increase the
`Provides: osbuild-dnf-json-api` version in the `osbuild.spec`.
See e.g. https://github.com/osbuild/osbuild/pull/1992
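A sketch of such a guard test, assuming a fixture that returns one
parsed solver reply; the key names below are illustrative placeholders,
not the real API:

```python
def test_solver_api_top_level_keys(depsolve_result):
    # bump `Provides: osbuild-dnf-json-api` in osbuild.spec whenever this set changes
    expected_keys = {"packages", "repos", "modules"}  # illustrative, not the real set
    assert set(depsolve_result) == expected_keys
```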
Add two unit tests for the read_default_target() function:
1. When default target should be found.
2. When there should be no default target.
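A pytest sketch of the two cases, assuming `read_default_target()`
resolves the systemd default.target symlink inside a tree and returns
None when it is absent; the import path and the exact return value are
assumptions:

```python
from osbuild_image_info import read_default_target  # import path is an assumption

def test_read_default_target_found(tmp_path):
    systemd_dir = tmp_path / "etc/systemd/system"
    systemd_dir.mkdir(parents=True)
    (systemd_dir / "default.target").symlink_to(
        "/usr/lib/systemd/system/multi-user.target")
    assert "multi-user.target" in read_default_target(tmp_path)

def test_read_default_target_missing(tmp_path):
    # no default.target in the tree -> no default target reported
    assert read_default_target(tmp_path) is None
```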
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add a new `enabled_repos` field to the testcases, which explicitly lists
which repositories are passed into a given testcase. This allows us to
pass appstream only to the module testcase.
Re-adjust the package lists again since we're now not using appstream in
all depsolve tests.
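A hypothetical testcase entry with the new field could look like this;
the surrounding structure is illustrative:

```python
TEST_CASE = {
    "packages": ["nginx"],
    # only these repositories are passed to the solver for this case
    "enabled_repos": ["baseos", "appstream"],
}
```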
Signed-off-by: Simon de Vlieger <supakeen@redhat.com>
With AppStream enabled, some more (perhaps optional) packages are
included.
This also adds a test case which installs a module and verifies that
that module is returned.
Signed-off-by: Simon de Vlieger <supakeen@redhat.com>
The CentOS Stream 9 repository metadata contains modules; these are
necessary for testing modularity depsolving.
Note that the filelists metadata is kept empty to keep repository size
down.
Co-authored-by: Michael Vogt <michael.vogt@gmail.com>
Signed-off-by: Simon de Vlieger <supakeen@redhat.com>
The mount ID must be unique. So far, we were using the device as the ID
for the mount because that was unique to each mount. With btrfs
subvolumes however, the device and partition are the same for all, so we
need another way to differentiate.
Btrfs volumes typically only contain subvolumes instead of (parts of)
the OS tree directly. In our images in particular, this is always the
case. When searching for root to find /etc/fstab, search through the
subvolumes on a btrfs volume for the file and return the path to the
root subvolume.
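Roughly, the search works like the sketch below, assuming the top-level
btrfs volume is mounted so that subvolumes show up as directories; the
helper name is illustrative:

```python
import os

def find_root_subvolume(btrfs_mountpoint: str):
    """Return (subvolume_name, fstab_path) for the subvolume holding /etc/fstab."""
    for entry in sorted(os.listdir(btrfs_mountpoint)):
        candidate = os.path.join(btrfs_mountpoint, entry, "etc", "fstab")
        if os.path.isfile(candidate):
            return entry, candidate  # this subvolume is the root of the OS tree
    return None, None
```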
Co-authored-by: Michael Vogt <michael.vogt@gmail.com>
Always set partition=None for the kwargs of the Mount() constructor.
The previous code was added for backwards compatibility with older
versions of the Mount() constructor that didn't include the 'partition'
argument. It's safe to remove now because:
1. It's been long enough that we won't run osbuild-image-info with an old
version of osbuild.
2. The tool is packaged with osbuild so there is no version drift and no
compatibility issues.
When the fstab file isn't found, the root_tree is never set after
being initialised to "" and the exception "The root filesystem tree is
not mounted" is raised later. It's a lot clearer if the failure happens closer
to the root cause, which is that fstab wasn't found and there are no
fstab entries to iterate through and find the root filesystem.
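In other words, raise at the point where the search comes up empty
rather than later when the missing root tree is noticed; something
along these lines, with illustrative names:

```python
import os

def find_fstab(mountpoints):
    """Return the path to /etc/fstab under one of the mounted filesystems."""
    for mountpoint in mountpoints:
        candidate = os.path.join(mountpoint, "etc", "fstab")
        if os.path.isfile(candidate):
            return candidate
    # fail here, at the root cause, instead of later with the confusing
    # "The root filesystem tree is not mounted" error
    raise RuntimeError("could not find /etc/fstab on any mounted filesystem")
```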
When iterating partitions to mount, skip any with filesystem type
"swap". This is done in two places:
1. When mounting partitions to find /etc/fstab.
2. When mounting partitions and volumes to analyse the tree.
When iterating through partitions, store the fstype along with the other
information. This will be useful for identifying btrfs partitions,
which we will need to scan for subvolumes, and for identifying swap
partitions, so we can avoid trying to mount them.
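Taken together, the two changes amount to something like this sketch;
the structure and names are illustrative:

```python
def mountable_partitions(partitions):
    """Yield partitions worth mounting, remembering their filesystem type."""
    for part in partitions:
        fstype = part.get("fstype")
        if fstype == "swap":
            continue  # swap has no file tree to mount or analyse
        yield {"device": part["device"], "fstype": fstype}
```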
Run isort for imports.
Pylint: wrong-import-order / C0411
Solves the following linter warnings:
- standard import "pathlib" should be placed before third party import
"yaml"
- standard import "collections.OrderedDict" should be placed before
third party imports "yaml", "jsonschema"
- standard import "typing.Dict" should be placed before third party
imports "yaml", "jsonschema"
Fix default arg values.
Pylint: dangerous-default-value / W0102
- Using mutable default values ([]) for function arguments is considered
dangerous.
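The usual fix, shown on a generic example rather than on the tool's
actual functions:

```python
# before: the single shared list keeps growing across calls
def collect(item, seen=[]):
    seen.append(item)
    return seen

# after: a fresh list is created for every call that doesn't pass one
def collect(item, seen=None):
    if seen is None:
        seen = []
    seen.append(item)
    return seen
```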
Rename format variable.
Pylint: redefined-builtin / W0622
- 'format' is a built-in function.
Use f-strings instead of formatting where possible.
Pylint: consider-using-f-string / C0209
Remove unnecessary else after returns.
Pylint: no-else-return / R1705
Remove unnecessary else after continue.
Pylint: no-else-continue / R1724
Set the encoding (utf-8) for all calls to open().
Pylint: unspecified-encoding / W1514
Disable the too-many-branches and too-many-statements warnings for
append_partitions() and append_filesystem(). We can refactor the
functions to make them smaller later, but for now we're addressing only
the simpler issues.
Initialise with dict literal instead of call to function.
Pylint: use-dict-literal / R1735
Use implicit truthiness for glob instead of len().
Pylint: use-implicit-booleaness-not-len / C1802
Rename ambiguous variable 'l' to 'line'.
pycodestyle: ambiguous-variable-name (E741)
Merge comparisons with 'in'.
Pylint: consider-using-in / R1714
`read_boot_entries()` could previously fail when trying to split lines
in bootloader entries that contained only "\n" and thus became an empty
string after stripping whitespace characters. This is the case e.g. on
F41 images.
Moreover, bootloader entries can contain comments as lines starting with
"#". These were previously not ignored by the function, so they would
end up in the parsed entry and could potentially fail to be split.
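The parsing now tolerates both cases. A simplified sketch of the
per-entry loop; the real function reads the BLS entry files from the
image:

```python
def parse_bls_entry(text: str) -> dict:
    """Parse a single BLS bootloader entry into a key/value dict."""
    entry = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments must never reach the split below
        key, _, value = line.partition(" ")
        entry[key] = value.strip()
    return entry
```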
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add unit test for osbuild-image-info's `read_boot_entries()` function,
to ensure that it can handle various situations that can happen in the
real world.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>