Define the distribution strings for RHEL 8.5 in distro/rhel86 and add
constructors. Remove the old 8.5 from the distro registry and use the
new constructors.
Composer can now build RHEL 8.5 image-installer on aarch64, which wasn't
supported before.
RHEL 8.5 manifests have changed to minimise the differences from 8.6.
Some changes are fixes made in 8.6 but never backported to 8.5 because
of our (older) policy of not changing definitions after the release of a
distro.
Other changes are non-functional (e.g., stage or package order).
See the list below for the source of each change.
Manifest changes:
- Stage order changed for org.osbuild.systemd-logind and
org.osbuild.rhsm.
- org.osbuild.grub2 options: config.default = "saved"
Reverted 111cd8871f
- Partition sizes: RHEL 8.5 had extra arbitrarily sized padding for the
header. Now all partitions are sized to fit headers exactly.
Original change at b7abef54e8.
- SELinux set to permissive in Anaconda. This was changed in RHEL 8.6
and 9.0 but never backported to 8.5.
See a7fbe916b7.
- Installer isolevel set to 3. Like above, this was changed in
8.6 and 9.0.
Original change at d8d161480e.
- Specify a remote for edge deployments.
Original change at b18b4e80a0.
Added a utility function for comparing RHEL version strings; a sketch
follows the list below.
Conditions added:
- greenboot subpackages were changed between RHEL 8.5 and RHEL 8.6.
- fido client packages aren't available in RHEL prior to 8.6.
- the ec2 SAP image type is not supported in RHEL prior to 8.6.
- the edge-simplified-installer and edge-raw-image image types are not
supported in RHEL prior to 8.6.
- These image types were previously supported in 8.5 without FDO
support, but that support is now dropped from 8.5 completely.
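As an illustration, here is a minimal sketch of such a version
comparison in Go (the function name and the demo wrapper are
assumptions, not the actual helper in the codebase):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// versionLessThan reports whether version string a (e.g. "8.5") sorts
// before b (e.g. "8.6"), comparing dot-separated parts numerically.
func versionLessThan(a, b string) bool {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) && i < len(bs); i++ {
		an, _ := strconv.Atoi(as[i])
		bn, _ := strconv.Atoi(bs[i])
		if an != bn {
			return an < bn
		}
	}
	return len(as) < len(bs)
}

func main() {
	fmt.Println(versionLessThan("8.5", "8.6"))  // true: gate off pre-8.6 features
	fmt.Println(versionLessThan("8.10", "8.6")) // false: numeric, not lexical
}
```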
The script isn't supposed to fail when the manifests differ.
Initialise `err` to 0 and assign it the exit code of the diff call if
it exits with an error.
Add a container image type that is based on the existing Fedora
container image. There is a delta in the configuration because osbuild
does not yet provide all the necessary means, but the package set is
already very close.
The script runs the gen-manifests command first on the PR head and then
on the merge-base with the PR's base branch (typically 'main') and
checks for any differences. It creates a review comment on the PR on
GitHub if any changes are detected.
The message is posted as a simple COMMENT type review to inform the
author and reviewers that changes exist.
The script doesn't fail if there's a diff. CI shouldn't fail if changes
are detected since they can be intentional. The job fails if something
goes wrong with the script execution (manifest generation, comment
posting, etc.).
The script exits immediately if not run from a PR.
The gen-manifests run is silenced with `> /dev/null`. In the future,
this should be handled by flags on the command itself to control the
output verbosity.
The gen-manifests command is run with 50 workers. Testing with 100
seemed to stall the execution, likely because of the resources on the
worker.
We can experiment with this value more in the future.
`osbuild-mock-openid-provider`'s `/token` endpoint expects URL-encoded
values in the POST request body. Use the same values that the worker
would use when refreshing a token.
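For reference, a hedged sketch of such a request in Go (the form field
names here are assumptions based on a typical refresh-token flow, not
the provider's documented contract):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// http.PostForm URL-encodes the values and sets
	// Content-Type: application/x-www-form-urlencoded,
	// which is what the /token endpoint expects.
	resp, err := http.PostForm("http://localhost:8080/token", url.Values{
		"grant_type":    {"refresh_token"}, // assumed field name
		"refresh_token": {"42"},            // assumed test token value
	})
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```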
The `amdgpu` module causes issues on certain GPU-enabled instances
on Azure and it must not be loaded by default.
Modules are sorted alphabetically.
Signed-off-by: Major Hayden <major@redhat.com>
Co-Authored-By: Christian Kellner <christian@redhat.com>
In commit 5c1530e we disabled `skx_edac` and `intel_cstate` but
after further consultation with Prarit Bhargava it was agreed that
for RHEL 9 we should indeed allow them.
Use case
--------
If Endpoint is not set and Region is, upload to AWS S3.
If both Endpoint and Region are set, upload to Generic S3 via the Weldr API.
If neither Endpoint nor Region is set, upload to Generic S3 via the Composer API (use the configuration).
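In Go terms, the dispatch amounts to something like this sketch (the
function name is illustrative; the real logic lives in the request
handlers):

```go
package main

import "fmt"

// chooseS3Target sketches the dispatch described above.
func chooseS3Target(endpoint, region string) string {
	switch {
	case endpoint == "" && region != "":
		return "AWS S3"
	case endpoint != "" && region != "":
		return "Generic S3 via the Weldr API"
	default:
		return "Generic S3 via the Composer API (from configuration)"
	}
}

func main() {
	fmt.Println(chooseS3Target("", "us-east-1"))                       // AWS S3
	fmt.Println(chooseS3Target("https://s3.example.com", "us-east-1")) // Weldr
	fmt.Println(chooseS3Target("", ""))                                // Composer
}
```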
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters except the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
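For illustration, the consolidated settings might be shaped like this
sketch (the field names are assumptions about the shape, not copied
from the codebase):

```go
package target

// awsS3UploadSettings sketches the consolidated structure: the fields
// that used to be specific to Generic S3 (Endpoint, CABundle,
// SkipSSLVerification) now live alongside the AWS S3 ones, so the
// generic.s3 Weldr provider can unmarshal into the same type.
type awsS3UploadSettings struct {
	Region              string `json:"region"`
	Bucket              string `json:"bucket"`
	Key                 string `json:"key"`
	Endpoint            string `json:"endpoint,omitempty"` // empty for real AWS
	CABundle            string `json:"ca_bundle,omitempty"`
	SkipSSLVerification bool   `json:"skip_ssl_verification,omitempty"`
}
```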
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
The script tested whether the test case generation script would run
normally if the osbuild-dnf-json.service was stopped.
This is no longer necessary.
This value is set in the worker config. In the future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
Those images are forced to be 64 GiB in size but mostly consist of
zeros. This makes them hard to handle; e.g., uploading to brew takes
forever.
vhdPipelines is converted to a function returning the pipelinesFunc,
with a single argument `compress` that adds the compression pipeline
bits if `true` and returns exactly the old pipeline if `false`.
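Structurally, the change looks like this sketch (stub types stand in
for the real manifest machinery):

```go
package main

import "fmt"

// Pipeline and pipelinesFunc are stand-ins for the real types.
type Pipeline struct{ Name string }
type pipelinesFunc func() []Pipeline

// vhdPipelines returns a pipelinesFunc: with compress set, the
// returned function appends a compression pipeline after the image
// pipeline; without it, it yields exactly the old pipeline set.
func vhdPipelines(compress bool) pipelinesFunc {
	return func() []Pipeline {
		pipelines := []Pipeline{{"build"}, {"os"}, {"image"}}
		if compress {
			pipelines = append(pipelines, Pipeline{"xz"})
		}
		return pipelines
	}
}

func main() {
	fmt.Println(vhdPipelines(false)()) // exactly the old pipelines
	fmt.Println(vhdPipelines(true)())  // old pipelines plus compression
}
```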
API
---
Allow the user to pass the CA public certificate or skip the SSL verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets the endpoint and optionally overrides the CA bundle or skips the SSL certificate verification
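A hedged sketch of what that constructor could look like with
aws-sdk-go v1 (simplified; the CA bundle path is only hinted at in a
comment):

```go
package awscloud

import (
	"crypto/tls"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newAwsFromCredsWithEndpoint sketches the Generic S3 constructor:
// like newAwsFromCreds, but pointing the SDK at a custom endpoint and
// optionally skipping TLS verification. Simplified for illustration.
func newAwsFromCredsWithEndpoint(key, secret, region, endpoint string, skipSSLVerification bool) (*session.Session, error) {
	httpClient := http.DefaultClient
	if skipSSLVerification {
		// For self-signed Generic S3 services; a custom CA bundle
		// would instead be loaded into the transport's RootCAs.
		httpClient = &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
	}
	return session.NewSession(&aws.Config{
		Credentials:      credentials.NewStaticCredentials(key, secret, ""),
		Region:           aws.String(region),
		Endpoint:         aws.String(endpoint),
		S3ForcePathStyle: aws.Bool(true), // non-AWS S3 usually needs path-style URLs
		HTTPClient:       httpClient,
	})
}
```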
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into HTTP, HTTPS with a certificate, and HTTPS skipping the certificate check
Create a new base test for S3 over HTTPS covering secure and insecure connections
Move the generic S3 test to tools to reuse it for secure and insecure connections
All S3 tests now use the aws cli tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
When a test script fails in CI, it's often difficult to pinpoint the
exact line in the log where the script failed and the cleanup() function
(trapped on EXIT) begins.
Adding a prominent line (with greenprint where available) at the start
of the cleanup function will make reading logs of failed jobs a lot
easier.
Something odd is happening with the package check and it keeps failing
mysteriously even though the package is clearly in the list.
Changing the verification method to extract `passwd` and `packages` from
the image info file into separate files and grepping those seems to
work.
Reasons for this change:
- Mixed versions of composer and worker aren't a realistic use-case for
the weldr API (on prem) but we do run mixed versions in hosted IB, so
this test is closer to real world scenarios.
- The cloud API runs depsolve jobs in the worker, whereas the weldr API
runs them in composer. By testing the cloud API we also test the
backwards compatibility of the depsolve job.
The change requires osbuild-worker v51 or newer to be able to handle
depsolve and manifest jobs on the worker as well as depsolve chains.
Add support for building images for the Azure marketplace: add a
new image type "azure-rhui" that can be used to build images
tailored to the Azure marketplace.
This code is based on the corresponding image type in 8.6.
NB: does not have systemd-resolved (following RHEL 9 defaults)
It seems the rhel-91 qcow2 customize images are out of sync: commit
2beb707 removed the core group from `format-request-map.json`, and
some of these manifests were generated between that commit and the one
that added it back, 1ff36bce9.
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with 'us' will be used.
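The filter-and-pick step boils down to something like this sketch (the
test itself does this in shell; the helper name and region list are
illustrative):

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// pickRegion filters regions with available quota by the GCP_REGION
// prefix and picks one at random. An exact name ("us-west1") matches
// only itself; a multi-region prefix ("us") matches several.
func pickRegion(prefix string, regionsWithQuota []string) string {
	var candidates []string
	for _, r := range regionsWithQuota {
		if strings.HasPrefix(r, prefix) {
			candidates = append(candidates, r)
		}
	}
	return candidates[rand.Intn(len(candidates))]
}

func main() {
	regions := []string{"us-west1", "us-east1", "europe-west3"}
	fmt.Println(pickRegion("us", regions))       // random us-* region
	fmt.Println(pickRegion("us-west1", regions)) // always us-west1
}
```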
Added code in fedora/pipelines.go to add the subformat field in the
manifests.
Added manifests for f34 and f35 for x86_64 only (the image type is not
available on aarch64).
* IoT image types now correctly point to the fedora-identity-iot package
* QCOW2, VMDK and OCI types use Fedora Cloud as identity package
* Changed the default target for AMI from graphical.target to multi-user.target. This matches the behaviour of the RHEL types, which all target multi-user.
* Re-added the image-info field for some manifests, which was missing due to issues regenerating the manifests.