Extend the `manifestJobResultsFromJobDeps()` function to also return the
manifest `JobInfo`. This will be useful to inspect the job dependencies
and eliminate the need to add a specialized function for getting only
the `JobInfo`.
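A minimal sketch of the extended helper, using placeholder types instead of the real `worker` package definitions (the job-type string and the map-based lookup are assumptions for illustration):

```go
package worker // illustrative only; the real types live in internal/worker

import "errors"

// Simplified stand-ins for the real JobInfo and manifest result types.
type JobInfo struct {
	JobType string
	Deps    []string
}

type ManifestJobByIDResult struct {
	Manifest []byte
}

// Sketch: the helper now returns the manifest JobInfo alongside the result,
// so callers can inspect the job's dependencies without a dedicated
// "get only the JobInfo" helper.
func manifestJobResultsFromJobDeps(deps map[string]JobInfo, results map[string]ManifestJobByIDResult) (*JobInfo, *ManifestJobByIDResult, error) {
	for id, info := range deps {
		if info.JobType != "manifest-id-only" {
			continue
		}
		res, ok := results[id]
		if !ok {
			return nil, nil, errors.New("manifest job has no result")
		}
		infoCopy := info
		return &infoCopy, &res, nil
	}
	return nil, nil, errors.New("no manifest job found among dependencies")
}
```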
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
None of our workers are running RHEL-8 any more. There's no value in
testing the Koji scenario on RHEL-8; RHEL-9 is fully sufficient.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
We have been testing builds of RHEL-9 on RHEL-8 for the Koji use case.
However, all of our workers are now running the latest GA RHEL-9
version. Therefore we should flip the test and build RHEL-8 on RHEL-9
instead.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Adjust the test case to cope with the SPDX SBOM documents uploaded to
Koji. Also explicitly check that the expected number of SBOM documents
is uploaded as part of the image build output.
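Roughly the kind of assertion that was added, sketched here with assumed names and an assumed `.spdx.json` naming convention rather than the real test code:

```go
package kojitest // illustrative sketch, not the real test case

import "strings"

// countSBOMOutputs sketches the added check: count the SBOM documents among
// the filenames of the Koji build outputs so the test can assert that the
// expected number was uploaded (the ".spdx.json" suffix is an assumption).
func countSBOMOutputs(outputFilenames []string) int {
	n := 0
	for _, name := range outputFilenames {
		if strings.HasSuffix(name, ".spdx.json") {
			n++
		}
	}
	return n
}
```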
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend the Koji target handling in the osbuild job to also upload SBOM
documents attached to the related depsolve job result.
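A rough sketch of the shape of this change; the types, the uploader interface and the file naming are assumptions, not the real worker code:

```go
package worker // illustrative sketch; types and names are assumptions

import (
	"bytes"
	"fmt"
	"io"
)

// Minimal stand-ins for the real SBOM and Koji upload types.
type sbomDoc struct {
	PipelineName string
	Document     []byte
}

type kojiUploader interface {
	Upload(file io.Reader, uploadDir, filename string) error
}

// uploadSBOMsToKoji sketches the extension: while handling the Koji target,
// the osbuild job walks the SBOM documents attached to the depsolve job
// result and uploads each one to the same Koji upload directory as the image.
func uploadSBOMsToKoji(k kojiUploader, uploadDir string, sboms []sbomDoc) error {
	for _, sbom := range sboms {
		filename := fmt.Sprintf("%s.spdx.json", sbom.PipelineName) // assumed naming
		if err := k.Upload(bytes.NewReader(sbom.Document), uploadDir, filename); err != nil {
			return fmt.Errorf("uploading SBOM %q to Koji failed: %w", filename, err)
		}
	}
	return nil
}
```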
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
If the Koji target result contains information about any uploaded SBOM
documents, import them to Koji as part of the finalize task.
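A minimal sketch of the finalize-side handling, with assumed field and type names standing in for the real Koji target result:

```go
package worker // illustrative sketch; field and type names are assumptions

// kojiOutput stands in for the metadata Koji needs about one uploaded file.
type kojiOutput struct {
	Filename string
	Size     uint64
	Checksum string
}

// kojiTargetResult stands in for the Koji target result, which now records
// the SBOM documents that were uploaded by the osbuild job.
type kojiTargetResult struct {
	SbomDocs []kojiOutput
}

// collectSBOMOutputs sketches the finalize-side change: gather the SBOM
// entries from all Koji target results so they can be included in the
// build outputs imported into Koji.
func collectSBOMOutputs(results []kojiTargetResult) []kojiOutput {
	var outputs []kojiOutput
	for _, res := range results {
		outputs = append(outputs, res.SbomDocs...)
	}
	return outputs
}
```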
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
For Koji composes, all files are uploaded to Koji as part of the osbuild
job (specifically as part of handling the Koji target). So in order to
be able to upload SBOM documents to Koji as part of a Koji compose, the
osbuild job needs to be able to access the depsolve job result, which
contains the SBOM documents. For this, the osbuild job must depend on
the depsolve job.
For Koji composes, make sure that the osbuild job depends on the
depsolve job and set the DepsolveDynArgsIdx.
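A sketch of the enqueue-side change; the job struct is reduced to the one field named in this commit and the enqueue callback is a stand-in for the real worker server API:

```go
package kojiapi // illustrative sketch, not the real enqueue code

import "github.com/google/uuid"

// OSBuildJob stands in for the real job arguments; only the field relevant
// here is shown.
type OSBuildJob struct {
	// DepsolveDynArgsIdx records which of the job's dynamic arguments
	// (i.e. which dependency result) is the depsolve job result.
	DepsolveDynArgsIdx *int
}

// enqueueKojiOSBuild sketches the change: the depsolve job is added to the
// osbuild job's dependencies and its index is stored on the job, so the
// worker can later find the depsolve result (and the SBOMs it carries).
func enqueueKojiOSBuild(enqueue func(job *OSBuildJob, deps []uuid.UUID) (uuid.UUID, error),
	manifestJobID, depsolveJobID uuid.UUID) (uuid.UUID, error) {

	job := &OSBuildJob{}
	deps := []uuid.UUID{manifestJobID, depsolveJobID}

	idx := len(deps) - 1 // position of the depsolve job among the dependencies
	job.DepsolveDynArgsIdx = &idx

	return enqueue(job, deps)
}
```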
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Adjust all places that call `Solver.Depsolve()` to cope with the
changes that enabled SBOM support.
Fix loading of testing repositories in the CloudAPI unit tests.
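A rough caller-side sketch; the result and solver shapes below are simplified assumptions, the authoritative signature lives in osbuild/images' dnfjson package:

```go
package cloudapi // illustrative caller-side sketch

// Simplified, assumed shapes; the real definitions live in osbuild/images.
type depsolveResult struct {
	Packages []string // depsolved package specs (simplified)
	SBOM     []byte   // generated SBOM document, nil when none was requested
}

type solver interface {
	// Depsolve now also takes the requested SBOM standard ("" for none) and
	// returns a result carrying the SBOM alongside the packages.
	Depsolve(pkgSets []string, sbomType string) (*depsolveResult, error)
}

// depsolveWithSBOM sketches how callers are adjusted: request an SPDX SBOM
// and pass the returned document on to whoever needs it.
func depsolveWithSBOM(s solver, pkgSets []string) ([]string, []byte, error) {
	res, err := s.Depsolve(pkgSets, "spdx")
	if err != nil {
		return nil, nil, err
	}
	return res.Packages, res.SBOM, nil
}
```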
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
gce-rhui is now gone from RHEL 9 [1] and the old name simply aliases to
gce. gcp-rhui in the cloudapi now resolves to 'gce' in RHEL 9 and
'gce-rhui' in RHEL 8.
[1] https://github.com/osbuild/images/pull/857
The sshkey customization in osbuild/images has been dropped. In
osbuild-composer we maintain it for backwards compatibility, converting
each to a user customization, which is a superset of the sshkey.
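A minimal sketch of the conversion, using simplified stand-in structs rather than the real blueprint types:

```go
package blueprint // illustrative conversion sketch, not the real composer code

// Simplified stand-ins for the two customization shapes.
type SSHKeyCustomization struct {
	User string
	Key  string
}

type UserCustomization struct {
	Name string
	Key  *string
	// other user fields (password, groups, ...) omitted here
}

// sshKeysToUsers shows why the conversion is safe: every legacy sshkey
// customization maps onto a user customization that only sets the key,
// which is exactly the sense in which the user customization is a superset.
func sshKeysToUsers(keys []SSHKeyCustomization) []UserCustomization {
	users := make([]UserCustomization, 0, len(keys))
	for _, k := range keys {
		key := k.Key
		users = append(users, UserCustomization{Name: k.User, Key: &key})
	}
	return users
}
```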
Fixes:
```
Error: Create EC2 instances failed: 'ec2.ServiceResource' object has no attribute 'describe_images'
Traceback (most recent call last):
File "/osbuild-composer/tools/build-rpms.py", line 218, in <module>
stage_generate_rpms(cleanup_actions, args)
File "/osbuild-composer/tools/build-rpms.py", line 175, in stage_generate_rpms
create_ec2_instances, cleanup_actions, args, keyname)
File "/osbuild-composer/tools/build-rpms.py", line 66, in stage
ret = fun(*args)
File "/osbuild-composer/tools/build-rpms.py", line 109, in create_ec2_instances
img = ec2.describe_images(ImageIds=[arch_info[a]["ImageId"]])
AttributeError: 'ec2.ServiceResource' object has no attribute 'describe_images'
```
When the osbuild worker cannot register itself with the server on
startup, the worker will "crash". This is inconsistent with the
existing behavior in `workerHeartbeat()`, which deals with connectivity
or other server issues gracefully and retries periodically.
To unify the behavior, this commit only issues a `logrus.Warnf` instead
of the previous `Fatalf` when the registration fails.
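A minimal sketch of the changed startup path; the helper name and the surrounding wiring are assumptions, only the Fatalf-to-Warnf swap is taken from this commit:

```go
package worker // illustrative sketch; the helper name is an assumption

import "github.com/sirupsen/logrus"

// registerOnStartup shows the changed behavior: a registration failure is
// logged with logrus.Warnf instead of logrus.Fatalf, so the worker keeps
// running and the existing periodic handling can retry later.
func registerOnStartup(register func() error) {
	if err := register(); err != nil {
		// previously: logrus.Fatalf(...), which terminated the worker
		logrus.Warnf("Registering worker failed: %v", err)
	}
}
```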
Co-authored-by: Florian Schüller <florian.schueller@redhat.com>
Also adds the policy ID to the blueprint. This doesn't have any effect
on the openscap step; it just puts the rhsm fact in place so that
instances registered to Insights will appear under that policy.
Subscription manager should be configured to manage repositories, and
by disabling rhui-client-config-server-9, RHUI repositories don't get
(re-)enabled after updates.
osbuild/images#751 wrapped the errors in the images/dnfjson package to
provide more details; the depsolve job should take this into account to
map the dnfjson error to the correct worker client error.
This caused user input errors to be misclassified as internal errors,
triggering depsolve job failure alerts.
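A sketch of the kind of mapping fix involved; the error type and the kind name below are simplified assumptions, only the use of unwrapping via errors.As is the point:

```go
package worker // illustrative sketch of the error mapping

import "errors"

// dnfError is a simplified stand-in for the error type that osbuild/images'
// dnfjson package now returns wrapped.
type dnfError struct {
	Kind   string
	Reason string
}

func (e *dnfError) Error() string { return e.Kind + ": " + e.Reason }

// isUserInputError sketches the fix: unwrap with errors.As so wrapped dnfjson
// errors are still recognized, and only genuinely unknown errors are treated
// as internal (which is what previously triggered false failure alerts).
func isUserInputError(err error) bool {
	var de *dnfError
	if errors.As(err, &de) {
		return de.Kind == "DepsolveError" // assumed kind name
	}
	return false
}
```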
Previously, the worker was determining the GCE image guest OS Features
on its own, based on the OS name. This caused problems when
osbuild-composer was a newer version than the worker.
Example:
osbuild-composer contained support for c10s GCE image type and its
implementation also contained the proper guest OS Features list for it.
However, when the worker got the osbuild job, it built it and tried to
fetch the guest OS Features for the distro. Since its implementation
was too old, it didn't contain the code that added the actual support
for c10s GCE images, so it got an empty guest OS Features list (the
default for unsupported distros). The image was successfully uploaded
and shared, but it did not boot in GCP, because GCP did not know that
it should boot it using UEFI.
This behavior could be considered a bug. The worker should be dumb: it
should not be making decisions about the image features, but instead
take them from the upload target options, with composer being the
authoritative source of truth. Otherwise, we basically have two
components that need to be updated in sync to add support for GCE
images on a new distro.
Move the GCE image guest OS features to the GCP upload target options.
The worker will just take what is specified there and use it when
importing the image to GCP. As a compatibility layer for the case when
composer is older than the worker (unlikely, but still possible), the
worker will try to determine the image guest OS Features itself if the
list in the upload target options is empty.
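The selection logic described above boils down to something like the following sketch; the option and feature types are assumed, with plain strings standing in for the real GCP feature values:

```go
package worker // illustrative sketch; the option and feature types are assumed

// guestOsFeatures picks the features for the GCP image import: whatever
// composer put into the upload target options wins, and only when that list
// is empty (older composer, newer worker) does the worker fall back to
// guessing from the distro name as it did before.
func guestOsFeatures(fromTargetOptions []string, distro string, guessForDistro func(string) []string) []string {
	if len(fromTargetOptions) > 0 {
		return fromTargetOptions
	}
	// compatibility fallback for composers that predate this change
	return guessForDistro(distro)
}
```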
Extend the GCP functional tests to check that the imported image has at
least some guest OS features set.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
I find it slightly easier to read this code when
`api.BasePath = conf.BasePath` is right at the top, as it's
unrelated to the parsing code below.
Note that the code itself is problematic:
- api.BasePath is global but the client is not, which means that
multiple clients with different configs may end up with a wrong
api.BasePath
- api.BasePath is set in a non-thread-safe manner
Changing this is a bigger job, but we might consider it (IMHO).
A possible direction is sketched below.
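A minimal sketch of what the bigger change could look like; the names here are hypothetical, not the current code:

```go
package client // sketch of a possible direction, not the current code

type Config struct {
	BasePath string
}

// Keeping the base path on each client, instead of mutating the package
// global api.BasePath, would make clients with different configs independent
// and remove the non-thread-safe write.
type Client struct {
	basePath string
}

func New(conf Config) *Client {
	return &Client{basePath: conf.BasePath}
}
```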