This is the bare minimum for our downstream test suite to pass (otherwise
it would fail on the non-existent 8.7 CDN repositories).
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We would benefit from having 9.1 support downstream, so let's add it in
the form of an alias. This is the bare minimum for proper 9.1 support.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Whenever we create a new mountpoint due to a user customization,
ensure the layout uses LVM, i.e. convert plain layouts to it, if
needed. This does not apply to rpm-ostree based systems.
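Conceptually, the conversion works like the following minimal Go sketch;
the Layout type and ensureLVM function are illustrative stand-ins, not
the actual partition table code:

    package main

    import "fmt"

    // Layout is a deliberately simplified stand-in for the partition table.
    type Layout struct {
        Type        string // "plain" or "lvm"
        Mountpoints []string
    }

    // ensureLVM converts a plain layout to LVM before adding custom
    // mountpoints (skipped for rpm-ostree based systems).
    func ensureLVM(l *Layout, custom []string) {
        if len(custom) > 0 && l.Type == "plain" {
            l.Type = "lvm" // the new mountpoints become logical volumes
        }
        l.Mountpoints = append(l.Mountpoints, custom...)
    }

    func main() {
        layout := Layout{Type: "plain", Mountpoints: []string{"/"}}
        ensureLVM(&layout, []string{"/var/log"})
        fmt.Printf("%+v\n", layout)
    }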
Add "lvm2" to the build pipeline and thus generate new manifests
and image infos.
Adjust the existing tests that assumed we cannot create more than 4
partitions on MBR layouts, since that is no longer true.
This is a port from rhel86, commit 63aa155
The change in osPipeline() is now required to fix the Prefix for the
bootloader specification when LVM is used. The unspecified Prefix, which
was previously used in all cases, defaults to "/boot". When the layout
is converted to LVM, a boot partition is created and the BLS Prefix
should be set to "".
In the case where we don't have a partition table, the BLS stage is not
needed, but it was done unconditionally before, so keep the default
image definitions unchanged.
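A minimal sketch of the Prefix rule described above; the option type
here is illustrative, not the real BLS stage options:

    package main

    import "fmt"

    // BLSStageOptions stands in for the bootloader-spec fix-up options.
    type BLSStageOptions struct {
        Prefix *string // nil keeps the stage default, "/boot"
    }

    // blsOptions returns an empty Prefix when a separate boot partition
    // exists (the LVM case) and keeps the default otherwise.
    func blsOptions(hasBootPartition bool) BLSStageOptions {
        if hasBootPartition {
            empty := ""
            return BLSStageOptions{Prefix: &empty}
        }
        return BLSStageOptions{}
    }

    func main() {
        fmt.Println(*blsOptions(true).Prefix == "") // true
    }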
Co-Authored-By: Achilleas Koutsou <achilleas@koutsou.net>
Added a filesystem customization to the qcow2 test case to test that the
filesystem is converted to an LVM layout.
Set overrides for distros that don't support fs customizations.
The purpose of the test is to check that the dnf-json socket can be
started automatically when running the test case generator while the
service or socket isn't enabled/started.
dnf-json will still be used to depsolve the packages and create the
manifest even if the image is not built.
Blueprint package set is now depsolved together with the OS package set
in a chain. The result is stored in the package specs sets under the OS
package set name.
In reality, the code could already handle `nil` package specs being
passed to pipelines; however, some parts were looking for the kernel
version in the blueprint package specs, which would be a bug.
Regenerated affected image test cases.
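A minimal, hypothetical sketch of the chaining; the real depsolver and
package set types are not shown, only the idea that the combined result
is keyed by the OS package set name:

    package main

    import "fmt"

    // PackageSet is a simplified stand-in for the real package set type.
    type PackageSet struct {
        Include []string
    }

    func main() {
        osSet := PackageSet{Include: []string{"kernel", "dracut"}}
        blueprintSet := PackageSet{Include: []string{"vim-enhanced"}}

        // The blueprint set is chained after the OS set, so both are
        // depsolved in a single transaction.
        chains := map[string][]PackageSet{
            "os": {osSet, blueprintSet},
        }

        // A real depsolver returns fully resolved package specs; here we
        // only show that everything lands under the "os" name.
        specs := map[string][]string{}
        for name, chain := range chains {
            for _, set := range chain {
                specs[name] = append(specs[name], set.Include...)
            }
        }
        fmt.Println(specs["os"]) // [kernel dracut vim-enhanced]
    }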
Follow-up to 60db6ad06f
SHA-1 keys are no longer supported in RHEL 9.0. This isn't a problem
for RHEL 8.x in general, but it prevents cross-building RHEL 8.x images
on RHEL 9.0, since the host (RHEL 9.0) rpm and openssl cannot import the
older keys and we fail to bootstrap the build root for the new image if
the source repositories use SHA-1 keys.
Related rhbz#2058497 (Comment 18).
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
- 2 space indent
- lists on multiple lines
- newlines at EOF
This was accomplished by simply running each file through `jq` with no
arguments.
It is also equivalent to Python's `json.dump(..., indent=2)` plus the
added newline.
Google repositories use RSA/SHA-1 for signing packages. However, SHA-1
has been disabled by default on el9/c9s. Since osbuild-composer
unconditionally imports the GPG keys specified in the repository
definition, this creates issues when osbuild installs rpms signed with
that key [1].
Remove the GPG keys from all el9/c9s GCP repo definitions and disable
GPG signature verification until [2] is resolved.
[1] https://github.com/osbuild/osbuild/issues/991
[2] https://issuetracker.google.com/issues/223626963
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add support for importing the GCE image into GCP using the Weldr API.
The credentials to be used can be specified in the upload settings and
will then be used by the worker to authenticate with GCP.
The GCP target credentials are passed to the Weldr API as the
base64-encoded content of the GCP credentials JSON file. The reason is
that the JSON file contains many values and its format could change in
the future. This way, the Weldr API does not depend on the format of
the credentials file in any way.
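A minimal sketch of the round trip, assuming a hypothetical file name;
the decoded JSON would then be handed to the GCP client (e.g. via
option.WithCredentialsJSON):

    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        // Client side: read the service account JSON (hypothetical path)
        // and encode it, so the API never has to parse the file itself.
        raw, err := os.ReadFile("service-account.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        encoded := base64.StdEncoding.EncodeToString(raw)

        // Worker side: decode the upload settings back into the original
        // JSON bytes before authenticating with GCP.
        decoded, err := base64.StdEncoding.DecodeString(encoded)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("decoded %d bytes of credentials JSON\n", len(decoded))
    }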
Add a new test case for the GCP upload via Weldr and run it in CI.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify the Cloud API test case for GCP to use `gcloud` and GCP guest
tools installed in the image to connect to the VM instance over SSH.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the `gce-rhui` image type intended for Google Compute Engine. The image
uses Google's RHUI infrastructure to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the `gce` image type intended for Google Compute Engine. The image
is BYOS (bring your own subscription) and requires registration in
order to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Filter the list of repositories passed in the compose request based on
the `image_type_tags` object member. This is the same approach used by
the Weldr API. If `image_type_tags` does not exist, the repo is added
to the list. If `image_type_tags` exists, the repo is added to the list
only if the image type name is in the tags array.
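A minimal sketch of the filtering rule; the Repository type here is
illustrative, not the actual Cloud API model:

    package main

    import "fmt"

    type Repository struct {
        Baseurl       string
        ImageTypeTags []string
    }

    // filterRepos keeps untagged repositories and those tagged with the
    // requested image type name.
    func filterRepos(repos []Repository, imageType string) []Repository {
        var out []Repository
        for _, repo := range repos {
            if len(repo.ImageTypeTags) == 0 {
                out = append(out, repo)
                continue
            }
            for _, tag := range repo.ImageTypeTags {
                if tag == imageType {
                    out = append(out, repo)
                    break
                }
            }
        }
        return out
    }

    func main() {
        repos := []Repository{
            {Baseurl: "https://example.com/baseos"},
            {Baseurl: "https://example.com/rhui", ImageTypeTags: []string{"gce-rhui"}},
        }
        // Only the untagged repo remains for the "gce" image type.
        fmt.Println(filterRepos(repos, "gce"))
    }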
Signed-off-by: Tomas Hozza <thozza@redhat.com>
jobimpl-osbuild
---------------
Add GenericS3Creds to struct
Add method to create AWS with Endpoint for Generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to use the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint on the AWS session
Set s3ForcePathStyle to true if an endpoint was set (see the sketch after this list)
Target
------
Define a new target type for the GenericS3Target and Options
Handle unmarshaling of the target options and result for the Generic S3
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of the upload settings in the provider settings
main
----
Add a section in the main config for the Generic S3 service for credentials
If provided, pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
--------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLabCI - Extend the libvirt test case to include the two new tests
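A minimal sketch, assuming the aws-sdk-go v1 API, of how the endpoint
and path-style addressing are wired up for a generic S3 service; the
function and its parameters are illustrative, not the actual awscloud
code:

    package main

    import (
        "fmt"
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    // newUploader builds an S3 uploader; with a non-empty endpoint it
    // targets a generic S3-compatible service instead of AWS.
    func newUploader(endpoint, region, key, secret string) (*s3manager.Uploader, error) {
        cfg := aws.Config{
            Region:      aws.String(region),
            Credentials: credentials.NewStaticCredentials(key, secret, ""),
        }
        if endpoint != "" {
            cfg.Endpoint = aws.String(endpoint)
            // Generic S3 services usually require path-style addressing.
            cfg.S3ForcePathStyle = aws.Bool(true)
        }
        sess, err := session.NewSession(&cfg)
        if err != nil {
            return nil, err
        }
        return s3manager.NewUploader(sess), nil
    }

    func main() {
        // Dummy values; creating the uploader makes no network calls.
        if _, err := newUploader("https://s3.example.com", "us-east-1", "key", "secret"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("uploader configured")
    }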
We want to ensure that cloud images connect to Red Hat[1] independently
of how the content was acquired (PAYG, BYOS, or marketplace).
This auto-registration feature is already enabled for AWS and this
patch enables it for Azure with the same recommended settings:
Services:
  rhsmcertd: Enabled (already done, so not changed in the patch)
/etc/rhsm/rhsm.conf:
  auto_registration: enabled
  auto_registration_interval: 60 (the default, so not explicitly set)
  manage_repos: false
The latter value, `manage_repos`, is left enabled (the default) when
the user explicitly requests to have the system subscribed, i.e. in the
`RHSMConfigWithSubscription` code path.
Regenerate the relevant test manifests and image information.
[1] https://cloud.redhat.com
[2] https://docs.google.com/document/d/1VeZFJxNUlyZMQJh6s3NA3RLvadqATsGxVet6uuP87_4
The Anaconda `users` module enables creating user accounts at install
time if one is not already created in the payload. This is required by
the cloud API (the Image Builder service) for the image installer,
where user customizations are not supported. Without it, user creation
isn't possible on the installed system.
The module also enables user creation at install time through the
kickstart file for both the image-installer and the edge-installer
(Anaconda only).
Therefore, for the image-installer, the users and groups are no longer
created as part of the payload.
This commit adapts the changes from the following commits (originally
made in the RHEL 8.6 and RHEL 9.0 distros) to the rest of the RHEL
distro definitions:
ebc3330cbd
5825294dad
Use a single NewGroupsStageOptions() from osbuild1 and osbuild2 instead
of implementing it in each distro.
- Follow-up from 2eef6e6e2d, copied to the rest of the RHEL distro
  definitions.
- Added NewGroupsStageOptions() to osbuild1 for rhel8 and rhel84.
NB: The change was not made in the Fedora distro definitions as they are
currently being rewritten.
Follow-up from f34380d5b5 and 3a1765a5a8, copied to the rest of the
RHEL distro definitions.
For now, these customizations have no effect on the manifest.
The new `with-users` variants of the edge-installer test cases include
the user customizations in the blueprint, but the manifests are
(currently) the same as the corresponding base cases.
The regression test suite has grown considerably and is taking too long
to run with a single wrapper. Split it into individual standalone tests
instead and make them run in parallel.
The VMDK image must be in the stream-optimized format in order to be
imported into vSphere. osbuild-composer does not produce the VMDK as
stream-optimized by default. Instead, it is converted on the fly when
the image build job is submitted via the Weldr API.
Since we are aiming mainly for the vSphere use case with the VMDK image
in the service, the image should be ready for importing into vSphere.
Implement a temporary workaround for the Cloud API and AWS S3 target to
upload a stream-optimized VMDK image.
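A minimal sketch of the on-the-fly conversion, assuming qemu-img is
available on the worker; the actual workaround wraps this differently,
but the relevant qemu-img options are the same:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // convert rewrites a VMDK into the stream-optimized subformat that
    // vSphere expects for imports.
    func convert(src, dst string) error {
        cmd := exec.Command("qemu-img", "convert",
            "-O", "vmdk",
            "-o", "subformat=streamOptimized",
            src, dst)
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        // Hypothetical file names for illustration only.
        if err := convert("disk.vmdk", "disk-stream.vmdk"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }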
Adjust the `api.sh` test case to not convert the VMDK image downloaded
from S3 before importing it into vSphere.
Ensure that the content of the database is not printed to the console
when dumped at the end of the test case. The output is still preserved
as a CI run artifact.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Kill and remove the DB container as part of the test case cleanup.
Without this change, running the test case more than once fails.
Signed-off-by: Tomas Hozza <thozza@redhat.com>