When generating the original test certs, no `-days` parameter was
passed, which resulted in a `notAfter` value that was too low.
This commit fixes that by using 100 years, and also updates the README:
```
$ openssl x509 -enddate -noout -in test/data/certs/cert1.pem
notAfter=Aug 2 10:42:40 2124 GMT
$ openssl x509 -enddate -noout -in test/data/certs/cert2.pem
notAfter=Aug 2 10:42:45 2124 GMT
```
This fixes a test failure in https://github.com/osbuild/osbuild/pull/1819
for the `test_curl_download_many_mixed_certs` test.
The libdir is passed down for sources but it is never used in
any of our sources. As this is confusing and we want to eventually
support multiple libdirs, remove this code.
It looks like the libdir for sources was added a long time ago in 8423da3,
but there is no indication if/how it is/was supposed to be used and,
AFAICT from going over the git history, it was never used.
SourceService:dispatch() never sends "libdir" to the actual sources,
so it is not even technically an API break.
Add two unit tests for our toml util module.
- Write an object with util.toml, read it with util.toml, and compare
written and read objects.
- Write an object directly as a string, read it with util.toml,
comparing with an expected object.
A third test, which would write with util.toml, read the result back as
a string, and verify that string, is difficult to do in a general way,
because each toml module we support writes files in a slightly
different way.
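A minimal sketch of the two tests described above, assuming pytest's
`tmp_path` fixture and that the module is importable as `osbuild.util.toml`;
the helper names `dump_to_file()` / `load_from_file()` are placeholders, not
necessarily the real util.toml API:
```python
from osbuild.util import toml


def test_toml_roundtrip(tmp_path):
    # write with util.toml, read with util.toml, compare the objects
    path = tmp_path / "roundtrip.toml"
    obj = {"section": {"key": "value", "number": 1, "flag": True}}
    toml.dump_to_file(obj, path)              # hypothetical writer name
    assert toml.load_from_file(path) == obj   # hypothetical reader name


def test_toml_read_literal(tmp_path):
    # write a literal string, read with util.toml, compare to the expected object
    path = tmp_path / "literal.toml"
    path.write_text('[section]\nkey = "value"\n')
    assert toml.load_from_file(path) == {"section": {"key": "value"}}
```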
With /dev (among others) now being mounted into the chroot for
update-crypto-policies, the leftover /dev/null is removed.
That file was created by the update-crypto-policies script, running in the
chroot, via multiple output redirects to /dev/null. Without a /dev fs,
the file was being created in the tree and would remain on the image.
Extend the dracut stage test case with checks for error / warning
messages complaining about an unsupported / incorrect runtime environment.
Messages such as:
```
/dev/fd/63: No such file or directory
```
or
```
/proc/ is not mounted. This is not a supported mode of operation.
Please fix your invocation environment to mount /proc/ and /sys/
properly. Proceeding anyway. Your mileage may vary.
```
The stage will be fixed in the next commit.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
We recently hit the issue that `osbuild` crashed with:
```
Unable to decode response body "Traceback (most recent call last):
File \"/usr/bin/osbuild\", line 33, in <module>
sys.exit(load_entry_point('osbuild==124', 'console_scripts', 'osbuild')())
File \"/usr/lib/python3.9/site-packages/osbuild/main_cli.py\", line 181, in osbuild_cli
r = manifest.build(
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 477, in build
res = pl.run(store, monitor, libdir, debug_break, stage_timeout)
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 376, in run
results = self.build_stages(store,
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 348, in build_stages
r = stage.run(tree,
File \"/usr/lib/python3.9/site-packages/osbuild/pipeline.py\", line 213, in run
data = ipmgr.map(ip, store)
File \"/usr/lib/python3.9/site-packages/osbuild/inputs.py\", line 94, in map
reply, _ = client.call_with_fds(\"map\", {}, fds)
File \"/usr/lib/python3.9/site-packages/osbuild/host.py\", line 373, in call_with_fds
kind, data = self.protocol.decode_message(ret)
File \"/usr/lib/python3.9/site-packages/osbuild/host.py\", line 83, in decode_message
raise ProtocolError(\"message empty\")
osbuild.host.ProtocolError: message empty
cannot run osbuild: exit status 1" into osbuild result: invalid character 'T' looking for beginning of value
...
input/packages (org.osbuild.files): Traceback (most recent call last):
input/packages (org.osbuild.files): File "/usr/lib/osbuild/inputs/org.osbuild.files", line 226, in <module>
input/packages (org.osbuild.files): main()
input/packages (org.osbuild.files): File "/usr/lib/osbuild/inputs/org.osbuild.files", line 222, in main
input/packages (org.osbuild.files): service.main()
input/packages (org.osbuild.files): File "/usr/lib/python3.11/site-packages/osbuild/host.py", line 250, in main
input/packages (org.osbuild.files): self.serve()
input/packages (org.osbuild.files): File "/usr/lib/python3.11/site-packages/osbuild/host.py", line 284, in serve
input/packages (org.osbuild.files): self.sock.send(reply, fds=reply_fds)
input/packages (org.osbuild.files): File "/usr/lib/python3.11/site-packages/osbuild/util/jsoncomm.py", line 407, in send
input/packages (org.osbuild.files): n = self._socket.sendmsg([serialized], cmsg, 0)
input/packages (org.osbuild.files): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
input/packages (org.osbuild.files): OSError: [Errno 90] Message too long
```
The underlying issue is that the reply of the `map()` call is too
big for the buffer that `jsoncomm` uses. This problem existed before
for the args of map and was fixed by introducing a temporary file
in https://github.com/osbuild/osbuild/pull/1331 (and similarly
before in https://github.com/osbuild/osbuild/pull/824).
This commit writes the return values into a file as well. This should
fix the crash above and also make the function more symmetrical.
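A rough sketch of the approach (not the actual host.py/jsoncomm code): write
the potentially large JSON reply into an anonymous temporary file and send
only a file descriptor (plus a small control message) over the socket,
mirroring what #1331 already did for the `map()` arguments:
```python
import json
import os
import tempfile


def make_reply_fd(reply_obj):
    # write the (potentially huge) reply into an anonymous temp file ...
    with tempfile.TemporaryFile("w+", encoding="utf-8") as f:
        json.dump(reply_obj, f)
        f.flush()
        # ... and hand out a duplicated fd that outlives the file object;
        # only this fd goes over the socket, not the payload itself
        return os.dup(f.fileno())


def read_reply_fd(fd):
    # the receiving side reads the reply back from the passed fd
    with os.fdopen(fd, "r", encoding="utf-8") as f:
        f.seek(0)
        return json.load(f)
```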
Alternative/complementary version of
https://github.com/osbuild/osbuild/pull/1833
Closes: HMS-4537
When `jsoncomm` fails because the message is too big it currently
does not indicate just how big the message was. This commit adds
this information so that it's easier for us to determine what to
do about it.
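For illustration, a hedged sketch of the kind of error decoration meant here,
assuming the send path wraps `socket.sendmsg()` as in the traceback above;
the function name and exact wording are made up:
```python
import errno
import socket


def send_with_size_hint(sock: socket.socket, serialized: bytes, ancdata=None):
    # attach the message size to EMSGSIZE errors so that "Message too long"
    # failures immediately tell us how big the offending message was
    try:
        return sock.sendmsg([serialized], ancdata or [], 0)
    except OSError as e:
        if e.errno == errno.EMSGSIZE:
            raise OSError(
                e.errno,
                f"{e.strerror}: message of {len(serialized)} bytes exceeds "
                "the socket buffer") from e
        raise
```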
We could also include a pointer to `/proc/sys/net/core/wmem_default`,
but it seems we would rather not require fiddling with that, so let's
not do it for now.
See also https://github.com/osbuild/osbuild/pull/1838
Use the latest c9s BaseOS repodata snapshot, specifically so that it
contains multiple versions of the same packages. This will allow testing
the `osbuild-depsolve-dnf` 'search' command. The previous metadata
contained only a single version of each package.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
When subscription-manager DNF plugins are enabled (e.g. on RHEL), they
print messages to stdout on any DNF command execution, e.g.
"Updating Subscription Management repositories.".
Disable all plugins when inspecting package markings to prevent them
from modifying the output.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
When I rolled back from using 'dnf4' to using 'dnf' to check package
markings, I didn't verify the test case on Fedora Rawhide with DNF5.
It turns out that the strings reported by DNF5 differ and make the test
case fail. This time I tested the change on Fedora Rawhide with DNF5 and
it works.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The existing code to record progress was a bit too naive. Instead
of just counting the number of pipelines in a manifest to get the
total steps, we need to look at the resolved pipelines.
With this fix `bib` will report the correct number of steps left
when doing e.g. a qcow2 image build. Right now the number of
steps is incorrect because the osbuild manifest contains pipelines
for qcow2, vmdk, raw and ami, and all of them are currently considered
steps that need to be completed. With this commit this is fixed.
This commit adds a new `https_serve_directory()` test helper
and some custom self-signed and worthless certs that are used
during testing. They are not dynamically generated to avoid the
extra computation time during tests (but they could be).
Generated via:
```
$ openssl req -new -newkey rsa:2048 -nodes -x509 \
-subj "/C=DE/ST=Berlin/L=Berlin/O=Org/CN=localhost" \
-keyout "key1.pem" -out "cert1.pem"
```
This will allow us to test `https` download URLs as well in e.g.
the curl source.
The test case still fails on RHEL-10.0 Beta, even when not using dnf5,
with:
```
for line in r.stdout.splitlines():
> package, mark = line.strip().split(",")
E ValueError: not enough values to unpack (expected 2, got 1)
```
Make debugging of failures like this easier by printing the line when
the issue happens.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Let's revert to using plain 'dnf', add an explicit newline in the query
format and skip empty lines when processing the output. This makes the
test case compatible with all DNF versions, even with dnf5 once this
issue gets fixed.
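A minimal sketch of the output handling described above; only the
skip-empty-lines parsing is the point, the dnf invocation and the exact
query format are left out:
```python
def parse_package_marks(stdout: str) -> dict:
    marks = {}
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            # dnf5 may emit extra blank lines; skip them instead of failing
            continue
        package, mark = line.split(",")
        marks[package] = mark
    return marks
```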
The previous approach didn't work on c9s / el9, because there is no
'/usr/bin/dnf4 -> dnf-3' symlink.
Also see:
https://github.com/osbuild/osbuild/actions/runs/10136827918/job/28026181824
Co-authored-by: Michael Vogt <michael.vogt@gmail.com>
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
According to `autotailor(8)` arguments passed in via the cli take
precedence over the JSON tailoring file contents.
Make the `new_profile` a required field for the json tailoring too and
pass it as an option to the `autotailor` command. This approach has some
trade-offs. It allows us to maintain the explicitness of the manifest
that is consumed by `osbuild`. The downside is that it will override the
profile id that is set by the user in the JSON tailoring file.
Rename the `new_profile` option to `tailoring_profile_id` for clarity.
This also ensures that the change is backwards compatible by falling
back to the `new_profile` option if that was set instead of the
`tailoring_profile_id` option.
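For illustration, a rough sketch of the fallback (option names taken from
this commit; the surrounding stage code is simplified and hypothetical):
```python
def get_tailoring_profile_id(options: dict) -> str:
    # prefer the new option name, fall back to the old one for
    # backwards compatibility
    profile_id = options.get("tailoring_profile_id") or options.get("new_profile")
    if not profile_id:
        raise ValueError("no tailoring profile id given")
    # the id is then passed explicitly to autotailor on the command line,
    # where it takes precedence over the JSON tailoring file contents
    return profile_id
```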
Similar to what was explained in 2e6d49fbe this commit updates
the l2hash in test_assemblers to the new values from fc40 images.
Sadly it is hard to derive them from first principles (see the
other commit) and given that this is legacy code it is probably
fine this way.
Since the `/etc/shadow` file contains a timestamp we need to add a
`null` value rather than a `sha256` hash to tell the diff tool to ignore
these fields. The issue is that the timestamp will always be different
meaning the tests will pass for a day, but then fail after that.
I'm not sure what happened, but the test case started failing on the
diff on 'main'. I didn't change anything related to this test case in my
PR. The previous changes adjusted the vars, specifically the Fedora
snapshot date used to generate the manifests, but the test passed on
it.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Since updating the snapshots the diffs for some stage tests have
changed. This commit updates the diffs accordingly.
I followed the same steps used in 1148a6e.
The original CentOS Stream GPG key uses SHA-1 in its signature. However,
SHA-1 is by default not allowed by the c10s / el10 crypto policy. As a
result, the stage tests which use c9s fail on c10s / el10
when rpmkeys tries to import the key.
As part of CS-1616 [1], the CS GPG key has been re-signed using SHA-256,
however only in c10s for now. Let's use the SHA-256-signed GPG key from
c10s for c9s manifests, so that the tests also pass on c10s / el10.
[1] https://issues.redhat.com/browse/CS-1616
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
The `test_osbuild_mount_failure_msg` currently fails on fc40 when
run in tmt, see:
https://artifacts.dev.testing-farm.io/c6588a82-a2cb-46df-8ca8-85dd809465f2/
This is because the failure output is slightly different between
a container and a VM/real machine. The test ensures that we capture
the output of mount and present it to the user (for easier debugging).
So this commit updates this test once more for the error string
(that part of the error comes directly from the kernel's fsconfig).
If we need another update of the string we should reconsider this
test and e.g. just use `testutil.mock_command()` for this. But
for now it's easier to just add this one more failure string.
This test ensures that the inputs of devices/mounts we generate for
bootc are actually considered valid by the schema. This is a more
blackbox style test compared to `test_get_schema_automatically_added`
which just checks that we get the expected schema but not that the
expected schema actually parses our inputs.
During the work on PR#1752 Florian discovered that make_container()
is broken for nested containers like:
```
with make_container(tmp_path, {"file1": "file1 from base"}) as base_tag:
with make_container(tmp_path, {"file1": "file1 from final layer"}, base_tag) as cont_tag:
```
It errors with:
```
Error: 5b947de461ee21b858dd5b4224e80442b2f65b6410189147f2445884d9e4e3d8: image not known
```
The reason is that we work with hashes for the image and then call
`podman image rm`, which by default also removes all dangling
references. Those are defined by not having a tag and not being referenced
anymore. So the inner container cleanup also removes the outer one.
There are many ways to fix this; I went with re-adding tags to the
test containers because it also makes it easy for the user to see if
we (accidentally) left any containers around.
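A self-contained sketch of the tag-based approach, not the real helper: the
point is simply "build with an explicit tag, clean up by that tag", so that
removing the inner image cannot drop the still-tagged base image. Everything
else (the Containerfile generation, the podman invocation) is an assumption
for illustration only:
```python
import contextlib
import subprocess
import uuid


@contextlib.contextmanager
def make_container(tmp_path, files, base="scratch"):
    suffix = uuid.uuid4().hex[:8]
    tag = f"localhost/osbuild-test-{suffix}"
    lines = [f"FROM {base}"]
    for name, content in files.items():
        (tmp_path / name).write_text(content)
        lines.append(f"COPY {name} /{name}")
    containerfile = tmp_path / f"Containerfile-{suffix}"
    containerfile.write_text("\n".join(lines) + "\n")
    # building with -t gives the image a real tag, so it is never "dangling"
    subprocess.run(
        ["podman", "build", "-t", tag, "-f", str(containerfile), str(tmp_path)],
        check=True)
    try:
        yield tag
    finally:
        # removing by tag only drops this image; a still-tagged base image is
        # left alone, unlike removing by hash, which also prunes now-dangling
        # parent images
        subprocess.run(["podman", "image", "rm", tag], check=True)
```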
Create a unit using an inline file called -.mount with the following
content:
[Unit]
Before=local-fs.target
After=blockdev@dev-disk-by\x2duuid-af34257d\x2d3e14\x2d4a51\x2db91d\x2dc430a956dcba.target
[Mount]
What=/dev/disk/by-uuid/af34257d-3e14-4a51-b91d-c430a956dcba
Where=/
Type=ext4
Options=rw,noatime
[Install]
RequiredBy=local-fs.target
and enable it in the systemd stage to test that we can enable units with
a - prefix.
With the new `bootc install to-filesystem` support many stages
will need a devices/mount setup to bind mount the deployment root
from the bootc deployment root of the generated image. To make
this globally available just allow "devices/mounts" for all stages
in the schema validation.
Note that `mounts` is already globally allowed so this just adds
devices (this was added in `7e776a076` with ostree as the use-case).
Nothing will change for the filesystem stages that already define
"devices" in a more specialized way.
Add two test rpm metadata directories that can be served as RPM repos.
One was copied from osbuild/images and contains the repository metadata
for CentOS Stream 9 BaseOS.
The second was created by building a simple spec file into an RPM and
creating the metadata using createrepo.
In CoreOS Assembler, some Hyper-V artifacts are compressed with `zip`. This
new stage is modeled after the `org.osbuild.tar` stage with the necessary
modifications.
Add an org.osbuild.systemd.unit stage using the new format for the
Environment option with two instances to the test manifest.
The contents of the new dropin file at
tree/usr/lib/systemd/system/boltd.service.d/30-boltd-debug.conf are:
[Service]
Environment="G_MESSAGES_DEBUG=all"
Environment="G_MESSAGES_TRACE=none"
Add the new options to the b.json test and update the diff.
The new file has the following contents:
[Unit]
Description=Create directory
DefaultDependencies=False
ConditionPathExists=|!/etc/myfile
ConditionPathIsDirectory=|!/etc/mydir
[Service]
Type=oneshot
RemainAfterExit=True
ExecStart=mkdir -p /etc/mydir
ExecStart=touch /etc/myfile
Environment="DEBUG=1"
EnvironmentFile=/etc/example.env
[Install]
WantedBy=local-fs.target
RequiredBy=multi-user.target
Add an empty file to the location where the service file will be
created in the b.json version of the test. This way, we will get a
content hash of the created file which is a slightly better test than
just knowing that it was created.
Note that, in the diff, the "before" checksum is the empty file hash:
echo -n '' | sha256sum
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 -
In 2d2cdd8097 the file was replaced by
the generated json and it went unnoticed in the PR. Reverted and
updated the options to match the generated json file.