Since the `/etc/shadow` file contains a timestamp, we need to add a
`null` value rather than a `sha256` hash to tell the diff tool to ignore
these fields. The issue is that the timestamp will always differ, so the
tests would pass for a day and then start failing.
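The idea, roughly (an illustration of the semantics only, not the actual
diff tool code): a `null` expected value marks a volatile entry whose
content should not be compared.
```
import hashlib

def entry_matches(expected_hash, actual_content):
    # None (null in the diff file) marks volatile content such as the
    # /etc/shadow timestamp field: skip the comparison entirely
    if expected_hash is None:
        return True
    return hashlib.sha256(actual_content).hexdigest() == expected_hash
```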
I'm not sure what happened, but the test case started failing on the
diff on 'main'. I didn't change anything related to this test case in my
PR. The previous changes adjusted the vars, specifically the Fedora
snapshot date used to generate the manifests, but the test passed
there.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Since updating the snapshots the diffs for some stage tests have
changed. This commit updates the diffs accordingly.
I followed the same steps used in 1148a6e.
The original CentOS Stream GPG key uses SHA-1 in its signature. However,
SHA-1 is by default not allowed by the c10s / el10 crypto policy. As a
result, the stage tests which use c9s fail on c10s / el10 when rpmkeys
tries to import the key.
As part of CS-1616 [1], the CS GPG key has been re-signed using SHA-256,
but for now only in c10s. Let's use the SHA-256-signed GPG key from
c10s for the c9s manifests, to make the tests pass on c10s / el10 as well.
[1] https://issues.redhat.com/browse/CS-1616
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The `test_osbuild_mount_failure_msg` currently fails on fc40 when
run in tmt, see:
https://artifacts.dev.testing-farm.io/c6588a82-a2cb-46df-8ca8-85dd809465f2/
This is because the failure output is slightly different between
a container and a VM/real-machine. The test ensures that we capture
the output of mount and present it to the user (for easier debugging).
So this commit updates the test once more for the error string
(that part of the error comes directly from the kernel's fsconfig).
If we need to update the string yet again we should reconsider this
test and e.g. just use `testutil.mock_command()` instead. But
for now it's easier to just add this one more failure string.
This test ensures that the devices/mounts inputs we generate for
bootc are actually considered valid by the schema. This is a more
blackbox-style test compared to `test_get_schema_automatically_added`,
which only checks that we get the expected schema but not that the
expected schema actually accepts our inputs.
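In rough pseudocode the test boils down to something like this (the stage
contents and the schema-loading helper are placeholders, not the actual
test code):
```
import jsonschema

stage = {
    "type": "org.osbuild.selinux",   # placeholder stage
    "devices": {
        "disk": {"type": "org.osbuild.loopback", "options": {"filename": "disk.img"}},
    },
    "mounts": [
        {"name": "root", "type": "org.osbuild.ext4", "source": "disk", "target": "/"},
    ],
}

schema = load_stage_schema(stage["type"])   # hypothetical helper
# raises ValidationError if the generated devices/mounts are not accepted
jsonschema.validate(instance=stage, schema=schema)
```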
During the work on PR#1752 Florian discovered that `make_container()`
is broken for nested containers like:
```
with make_container(tmp_path, {"file1": "file1 from base"}) as base_tag:
    with make_container(tmp_path, {"file1": "file1 from final layer"}, base_tag) as cont_tag:
```
It errors with:
```
Error: 5b947de461ee21b858dd5b4224e80442b2f65b6410189147f2445884d9e4e3d8: image not known
```
The reason is that we work with hashes for the image and then call
`podman image rm`, which by default will also remove all dangling
images, i.e. images that have no tag and are no longer referenced.
So the cleanup of the inner container also removes the outer one.
There are many ways to fix this; I went with re-adding tags to the
test containers because it also makes it easy for the user to see if
we (accidentally) left any containers around.
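A rough sketch of the fixed helper (simplified; `write_containerfile` is a
placeholder for the test setup, and this is not the exact code):
```
import contextlib
import subprocess
import uuid

@contextlib.contextmanager
def make_container(tmp_path, content, base="scratch"):
    write_containerfile(tmp_path, content, base)   # placeholder
    tag = f"osbuild-test-{uuid.uuid4().hex[:12]}"
    subprocess.run(["podman", "build", "--tag", tag, str(tmp_path)], check=True)
    try:
        yield tag
    finally:
        # keeping a tag on every test image means it is never "dangling",
        # so removing the inner image cannot sweep up the outer one; the
        # leftover tags also make accidental leaks easy to spot
        subprocess.run(["podman", "rmi", tag], check=True)
```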
Create a unit using an inline file called -.mount with the following
content:
```
[Unit]
Before=local-fs.target
After=blockdev@dev-disk-by\x2duuid-af34257d\x2d3e14\x2d4a51\x2db91d\x2dc430a956dcba.target
[Mount]
What=/dev/disk/by-uuid/af34257d-3e14-4a51-b91d-c430a956dcba
Where=/
Type=ext4
Options=rw,noatime
[Install]
RequiredBy=local-fs.target
```
Then enable it in the systemd stage to test that we can enable units
with a `-` prefix.
With the new `bootc install to-filesystem` support, many stages
will need a devices/mounts setup to bind-mount the deployment root
from the bootc deployment of the generated image. To make
this globally available, just allow "devices/mounts" for all stages
in the schema validation.
Note that `mounts` is already globally allowed (added in `7e776a076`
with ostree as the use-case), so this just adds `devices`.
Nothing changes for the filesystem stages that already define
"devices" in a more specialized way.
Add two test rpm metadata directories that can be served as RPM repos.
One was copied from osbuild/images and contains the repository metadata
for CentOS Stream 9 BaseOS.
The second was created by building a simple spec file into an RPM and
creating the metadata using createrepo.
In CoreOS Assembler, some hyperv artifacts are compressed with `zip`. This
new stage is modeled after the `org.osbuild.tar` stage with the necessary
modifications.
Add an org.osbuild.systemd.unit stage to the test manifest, using the
new format for the Environment option with two instances.
The contents of the new dropin file at
tree/usr/lib/systemd/system/boltd.service.d/30-boltd-debug.conf are:
```
[Service]
Environment="G_MESSAGES_DEBUG=all"
Environment="G_MESSAGES_TRACE=none"
```
Add the new options to the b.json test and update the diff.
The new file has the following contents:
```
[Unit]
Description=Create directory
DefaultDependencies=False
ConditionPathExists=|!/etc/myfile
ConditionPathIsDirectory=|!/etc/mydir
[Service]
Type=oneshot
RemainAfterExit=True
ExecStart=mkdir -p /etc/mydir
ExecStart=touch /etc/myfile
Environment="DEBUG=1"
EnvironmentFile=/etc/example.env
[Install]
WantedBy=local-fs.target
RequiredBy=multi-user.target
```
Add an empty file to the location where the service file will be
created in the b.json version of the test. This way, we will get a
content hash of the created file, which is a slightly better test than
just knowing that it was created.
Note that, in the diff, the "before" checksum is the empty file hash:
```
echo -n '' | sha256sum
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 -
```
In 2d2cdd8097 the file was replaced by
the generated json and it went unnoticed in the PR. This reverts that
change and updates the options to match the generated json file.
This is needed because, on a mounted `bootc` container, running `setfiles`
without excluding `/sysroot` produces many warnings like:
```
setfiles: conflicting specifications for /run/osbuild/tree/sysroot/ostree/repo/objects/00/0ef9ada2ee87792e8ba21afd65aa00d79a1253018832652b8694862fb80e84.file and /run/osbuild/tree/usr/lib/firmware/cirrus/cs35l41-dsp1-spk-prot-103c8b8f-r1.bin.xz, using system_u:object_r:lib_t:s0.
```
Simply excluding this directory fixes them.
Instead of requiring that only one of the properties is present, require
that at least one of them is present; some stages (e.g. `org.osbuild.rpm`)
specify both schema versions.
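Presumably this is the usual oneOf vs. anyOf distinction in JSON Schema; a
generic illustration (the property names are stand-ins, not the real schema):
```
import jsonschema

one_of = {"oneOf": [{"required": ["schema_1"]}, {"required": ["schema_2"]}]}
any_of = {"anyOf": [{"required": ["schema_1"]}, {"required": ["schema_2"]}]}
both = {"schema_1": {}, "schema_2": {}}

jsonschema.validate(both, any_of)       # ok: at least one property is present
try:
    jsonschema.validate(both, one_of)   # rejected: valid under more than one subschema
except jsonschema.ValidationError:
    pass
```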
Instead of just mocking the binary, also write a log of how it got
called, so that tests can use it to check whether the right options
are passed.
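A minimal sketch of how such a call-logging mock can work (this illustrates
the mechanism only, not the actual `testutil.mock_command()` implementation;
`tmp_path` is assumed to be a pathlib.Path as in pytest):
```
import contextlib
import os
import stat
import textwrap

@contextlib.contextmanager
def mock_command(tmp_path, name, script=""):
    bindir = tmp_path / "mocked-bin"
    bindir.mkdir(exist_ok=True)
    calllog = tmp_path / f"{name}.calls"
    fake = bindir / name
    fake.write_text(textwrap.dedent(f"""\
        #!/bin/sh
        echo "$@" >> {calllog}
        {script}
    """))
    fake.chmod(fake.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    orig_path = os.environ["PATH"]
    os.environ["PATH"] = f"{bindir}{os.pathsep}{orig_path}"
    try:
        # the "naked" calllog path: one line per invocation with its arguments
        yield calllog
    finally:
        os.environ["PATH"] = orig_path
```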
Note that the API should be improved here: instead of returning
a "naked" path to the calllog file there should be a class wrapping
it. And of course there should be tests.
The current `make_container()` helper is a bit silly (which is
entirely my fault). It requires a container tag as input, but all
tests end up generating a random number for it. So instead, just
remove the input, return the container_id from the podman build
in the contextmanager, and use that.
Based on the feedback from Tomáš in [0] this commit adds tests
that ensure consistent behavior between the python and the json
loader.
It's not a 100% match because the python loader is extremely lenient and
does not even check whether the required pieces of the json are there.
I.e. it will load a module without a SCHEMA or SCHEMA_2 variable,
and the json loader code will warn about the issue but not
raise an error.
Fwiw, I have no strong opinion here, but I do lean slightly towards
staying close to the original code (both approaches, failing
with an exception or continuing with a warning, have good arguments).
[0] https://github.com/osbuild/osbuild/pull/1618#discussion_r1521141148
Instead of always parsing the python stage to load meta information,
allow the use of a new `{stage}-meta.json` file. This is a first
step towards allowing modules to be written in a language other
than python. It also has some practical advantages:
- slightly faster, as it avoids calling python to output the schemas
- easier to write schemas, as this can now be done in a real json
  editor
- more extensible in a future where stages may be binaries with
  shlib dependencies that are only satisfied in the buildroot
  but not on the host
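Roughly, the lookup order could look like this (the function names are
illustrative only, not the actual loader code):
```
import json
import os

def load_module_info(moduledir, name):
    # prefer the pre-generated json description over parsing the python source
    meta_json = os.path.join(moduledir, f"{name}-meta.json")
    if os.path.exists(meta_json):
        with open(meta_json, encoding="utf-8") as f:
            return json.load(f)
    return load_module_info_from_python(moduledir, name)   # hypothetical fallback
```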
This reverts commit 158acaac78.
With https://github.com/osbuild/bootc-image-builder/pull/238 the
original reason to call mknod goes away, so we can just revert
it. osbuild now not only requires the loop device but also uses
`losetup --partscan` quite a lot, so the mknod approach becomes
impractical and consumers of osbuild in a container should
just set up devtmpfs.
Since we support python3.6 we cannot assume that dicts are ordered
in any way. To ensure the `id` is still always valid we pass
sort_keys=True to json.dump().
Thanks to Simon!
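A minimal illustration of why the sorted keys matter (a hypothetical
helper, not the project's actual id computation):
```
import hashlib
import json

def content_id(obj):
    # sort_keys makes the serialization, and therefore the digest,
    # independent of dict insertion order
    data = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

assert content_id({"a": 1, "b": 2}) == content_id({"b": 2, "a": 1})
```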
Generate log messages with origin "org.osbuild.main" when
pipelines/stages start and finish. This way a higher-level
frontend can display the high-level progress coming from this
origin and filter out e.g. stage-based log messages (which
are usually quite technical as they are just the stdout/stderr
from the stages).
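For illustration, a frontend could filter on that origin roughly like this
(assuming the origin ends up as a field on each decoded log entry; the
exact nesting may differ):
```
def high_level_entries(entries):
    # keep the pipeline/stage start/finish messages emitted by osbuild itself,
    # drop the per-stage stdout/stderr noise
    return [e for e in entries if e.get("origin") == "org.osbuild.main"]
```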
Tweak the Progress class to be simpler. Given that progress does
not need to support arbitrary depth but only has a single level,
the class now just exposes "sub_progress" to the caller.
When the main progress is advanced, the sub_progress is now fully
deleted instead of just reset. The rationale is that when the main
progress is done and advances a step, it is very likely that a
new sub_progress is required, and it is most likely an error if
the same sub_progress were re-used.
This means that `reset()` can be removed as it's not used anymore
(and YAGNI). We can add it back when we have a use-case.
It also changes the code so that "total" starts at 0 instead
of `None` (principle of least surprise). This means that
`progress.incr()` is now called in the JSONSeqMonitor() for
`finish()` and `result()` to indicate that the pipeline/stage
is finished.
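As a rough sketch of the resulting shape (not the real class, just the
behavior described above):
```
class Progress:
    def __init__(self, name, total=0):
        self.name = name
        self.total = total        # starts at 0, not None
        self.done = 0
        self.sub_progress = None  # single level only

    def incr(self):
        self.done += 1
        # advancing the main progress invalidates any sub-progress: the next
        # step almost certainly needs a fresh one, so re-using the old one
        # would most likely be a bug
        self.sub_progress = None
```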
This commit tweaks Context a bit so that any write will automatically
reset the `_id`. This ensures that we do not forget to reset `_id`
when the code changes.
It also tweaks the naming a bit: before, there was a "setter" for
origin and functions to set "pipeline" and "stage". They are all
functions with a "set_" prefix now, mostly for symmetry.
The class LogLine() was purely used as a dataclass with no state,
and the only method on it was `as_dict()`. It got refactored
into a new function `log_entry()` because there is no need for
this to be a class. The new function takes the same inputs.
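In spirit the new helper is just something like this (field names and
signature are assumptions, not the exact code):
```
import time

def log_entry(message=None, context=None, progress=None):
    # plain dict carrying the same information LogLine.as_dict() produced
    return {
        "message": message,
        "context": context,
        "progress": progress,
        "timestamp": time.time(),
    }
```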