Having the GPG check enabled for Google repos in `gce*` images makes DNF
try to import the relevant keys when upgrading, downgrading or
installing any package from the repo. However, because Google still
signs its RPMs with SHA-1 GPG keys, importing such a key makes any
transaction that includes one of these RPMs fail.
Disabling the GPG check will ensure that DNF won't attempt to import
Google GPG keys.
Related to https://issuetracker.google.com/issues/223626963
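For illustration, a repo definition with the check disabled ends up
looking roughly like the sketch below; the repo ID and baseurl are
assumptions, not the exact values shipped in the images:

```bash
# Write a Google repo definition with the GPG check disabled, so DNF
# never attempts to import Google's SHA-1 signing key.
cat > /etc/yum.repos.d/google-compute-engine.repo <<'EOF'
[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el9-x86_64-stable
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
```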
The repo is not needed any more, because the Google Cloud SDK is not
installed in the images by default. If anyone wants to install the SDK,
they can add the appropriate repo definition.
The Google SDK ships pre-compiled binaries, so installing it by default
in `gce` and `gce-rhui` in its current shape is undesirable. Not
installing it does not affect the RHEL integration as the guest OS in
GCP in any way.
Since LVM support was added to all distros, our disk-related code is
adaptive, i.e. we set the correct BLS and grub2 prefix if a `boot`
partition is present in the layout after all customizations happen,
which includes LVMification.
One thing that was not yet fully working was layouts that do not have
a `/boot` partition but allow LVMification. In that case, if `/boot`
was the first (or only) customization, `NewPartitionTable` would LVMify
the partition table, which in turn would create the `/boot` partition;
but after `newPT.ensureLVM()` the call to `newPT.createFilesystem`
with `/boot` would try to create another `/boot` mountpoint.
To deal with this situation correctly we now use a two-phase approach:
1) enlarge existing mountpoints and collect new ones; 2) if there are
new ones and LVMification is allowed, switch to the LVM layout, then do
a second pass and create new partitions or enlarge existing ones,
handling `/boot` in the process.
Fedora mounts /tmp as 'tmpfs', which is sized based on the amount of
RAM. That is not enough on medium OpenStack machines. Changing to
/var/tmp, which is backed by a drive, resolves this.
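A minimal sketch of the change, assuming the test allocates its scratch
space with `mktemp`:

```bash
# Before: tmpfs-backed /tmp, whose size is limited by available RAM
# workdir=$(mktemp -d)

# After: /var/tmp is backed by a drive, so large artifacts fit
workdir=$(mktemp -d -p /var/tmp)
```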
This test used to spawn two VMs at the same time, which requires more
memory than the OpenStack CI medium runner can provide. We want to use
only medium runners, so this change is necessary to allow that.
Extend the implementation of mock openid server to take the `grant_type`
into consideration for the `/token` endpoint.
In addition to the previously supported `refresh_token`, the
implementation now also supports `client_credentials`.
This is necessary to make it possible to use the mock server in
the `koji-osbuild` CI, because the builder plugin uses
`client_credentials` to get an access token.
The implementation behaves in the following way:
- For `refresh_token` grant type, it takes the `refresh_token` value
from the request and adds it to the `rh-org-id` field in the custom
claim, which is part of the returned token.
- For `client_credentials` grant type, it takes the `client_secret`
value from the request and adds it to the `rh-org-id` field in the
custom claim, which is part of the returned token.
Requests without the supported `grant_type` set are rejected.
Modify affected test cases to specify `grant_type` when fetching a new
access token.
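For illustration, token requests against the mock server look roughly
like the following; the endpoint variable and the client ID / secret
values are made up:

```bash
# refresh_token grant: the refresh_token value ends up in the
# rh-org-id field of the custom claim
curl -s -X POST "$TOKEN_URL" \
    -d grant_type=refresh_token \
    -d refresh_token=42

# client_credentials grant: the client_secret value ends up in the
# rh-org-id field of the custom claim
curl -s -X POST "$TOKEN_URL" \
    -d grant_type=client_credentials \
    -d client_id=my-client \
    -d client_secret=42

# requests without a supported grant_type are rejected
curl -s -X POST "$TOKEN_URL" -d grant_type=password
```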
Add integration tests for oscap customizations.
This tests only the most basic case of oscap remediation. Mountpoints
and additional packages are not added since these vary between distros
and OpenSCAP profiles, i.e. additional blueprint customizations would
need to be specified for each oscap profile to ensure best results.
To check embedding containers via the cloud API works, embed a
known test container from our gitlab CI and check that it is
indeed embedded in the image by pulling the commit and poking
into the container storage.
Do not start the local worker (mask the unit) and the Weldr API socket
when provisioning the SUT with the TLS client cert authentication
method. This method is used only in the Service scenario, therefore
starting these units / sockets did not reflect the intended deployment.
Modify `api.sh` to not rely on local worker.
Modify `base_tests.sh` to provision SUT with TLS for
`osbuild-auth-tests`, while provisioning SUT with no authentication
method for the rest of test cases.
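In terms of systemd units, the TLS provisioning now does roughly the
following; the unit names are assumed to match the ones shipped by
osbuild-composer:

```bash
# do not start the local worker in the Service scenario
sudo systemctl mask osbuild-local-worker.socket
# do not serve the Weldr API either
sudo systemctl stop osbuild-composer.socket
```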
`tools/provision.sh` used to provision the SUT always in the same way
for both the Service scenario and the on-premise scenario. While this
was not causing any issues, it did not realistically represent how we
expect osbuild-composer and the worker to be used in these scenarios.
The script currently supports the following authentication options:
- `none`
- Intended for the on-premise scenario with Weldr API.
- NO certificates are generated.
- NO osbuild-composer configuration file is created.
- NO osbuild-worker configuration file is created. This means that no
cloud provider credentials are configured directly in the worker.
- Only the local worker is started and used.
- Only the Weldr API socket is started.
- Appropriate repository definitions are copied to
`/etc/osbuild-composer/repositories/`.
- `jwt`
- Intended for the Service scenario with Cloud API.
- Should be the only method supported in the Service scenario in the
future.
- Certificates are generated and copied to `/etc/osbuild-composer`.
- osbuild-composer configuration file is created and configured for
JWT authentication.
- osbuild-worker configuration file is created, configured for JWT
authentication and with appropriate cloud provider credentials.
- Local worker unit is masked. Only the remote worker is used (the
socket is started and one remote-worker instance is created).
- Only the Cloud API socket is started (Weldr API socket is stopped).
- NO repository definitions are copied to
`/etc/osbuild-composer/repositories/`.
- `tls`
- Intended for the Service scenario with Cloud API.
- Should eventually go away.
- Certificates are generated and copied to `/etc/osbuild-composer`.
- osbuild-composer configuration file is created and configured for
TLS client cert authentication.
- osbuild-worker configuration file is created, configured for TLS
authentication and with appropriate cloud provider credentials.
- Services and sockets are started as they used to be originally:
- Both local and remote worker sockets are started.
- Both Weldr and Cloud API sockets are started.
- Only the local worker unit will be started automatically.
- NO repository definitions are copied to
`/etc/osbuild-composer/repositories/`.
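Putting it together, a usage sketch (assuming the script is invoked
from the repository root):

```bash
sudo tools/provision.sh none   # on-premise: Weldr API and local worker
sudo tools/provision.sh jwt    # Service: Cloud API with JWT authentication
sudo tools/provision.sh tls    # Service (legacy): TLS client cert authentication
```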
Common integration tests should not need to care about the specific ORG
ID configured in the worker; they should be able to get an access token
and check the compose status without providing a specific ORG ID. The
only integration test that should care about the ORG ID is
`multi-tenancy.sh`.
Modify the `access_token` and `compose_status` functions to hide the
existence of ORG ID from the user and instead read it from the worker's
configuration, specifically `/etc/osbuild-worker/token`.
The original implementations of the functions mentioned above are now
available under `access_token_with_org_id` and
`compose_status_with_org_id` names.
Modify the `multi-tenancy.sh` to use the new function names.
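A hypothetical usage sketch; the exact arguments of these functions are
assumptions:

```bash
# common tests: the ORG ID is read from /etc/osbuild-worker/token internally
token=$(access_token)
compose_status "$compose_id"

# multi-tenancy.sh: keep explicit control over the ORG ID
token=$(access_token_with_org_id "$org_id")
compose_status_with_org_id "$compose_id" "$org_id"
```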
Move some code related to using JWT tokens from the `multi-tenancy.sh`
test case to `test/cases/api/common/common.sh`, `tools/provision.sh`
and `tools/run-mock-auth-servers.sh`. Move the composer and worker
configuration from the test to new testing configuration files.
The `tools/provision.sh` now accepts an optional argument specifying the
authentication method to use with the provisioned composer and workers.
Valid values are `tls` and `jwt`. If no argument is specified, the `tls`
option is used and the script defaults to its previous behavior.
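The defaulting can be as simple as this one-liner at the top of the
script (the variable name is an assumption):

```bash
AUTH_METHOD="${1:-tls}"   # fall back to the previous behavior
```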
If the user does not pass a name, use the distribution as the name.
A provided tag is used only if a name is also provided.
The tag defaults to a generated UUID to avoid collisions.
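A minimal shell sketch of the described logic; the variable names are
made up:

```bash
if [ -z "$name" ]; then
    name="$distribution"      # fall back to the distribution name
    tag="$(uuidgen)"          # ignore any provided tag
else
    tag="${tag:-$(uuidgen)}"  # generated default avoids collisions
fi
```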
Add a new cloud API test that will build an edge-container, upload it
to the gitlab CI registry, fetch it from there, run it, and check that
the OSTree commit contained in it is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
The change made in 7f563a6db1 would
require the shell option `-e` to not be set, so that we could capture
the exit code after the command fails.
Fix the error handling by putting the commands that we want to handle in
the test part of an `if` clause.
In addition, error messages are now printed in red.
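The pattern now used, sketched with a made-up command name:

```bash
set -e
# a bare failing command would abort the script here before the exit
# code could be inspected; testing it in an `if` avoids that
if ! run_sut_command; then
    echo -e "\033[0;31mrun_sut_command failed\033[0m"  # red error message
    exit 1
fi
```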
The nvdimm module is required for booting the image via UEFI HTTP.
The rest are added for feature parity with the official RHEL 9.0 ISO.
Fixes rhbz#2030730
During the diff-manifests.sh test, the source repository checkout is
changed to generate manifests from the current main branch for
comparison. We want to check out back to $head after the script is done
or in case of any unexpected exit.
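A sketch of the pattern, assuming an EXIT trap is used to restore the
checkout:

```bash
head=$(git rev-parse HEAD)
# runs on normal completion and on any unexpected exit
trap 'git checkout "$head"' EXIT
git checkout main   # generate manifests from main for comparison
```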
Fail the test if manifest generation fails on the PR HEAD, but don't
fail if the generation on main fails.
This can happen if something breaks in main (the generator, a
repository, an image definition, etc) and the PR is meant to fix it.
Extend the `koji.sh` test case to also allow testing the upload to the
cloud, in addition to what it currently supports (building multiple
images in one Koji compose request).
The script now reuses some common functions used by the `api.sh` test
case. Once the Koji compose succeeds, the script verifies that the image
is present in the appropriate cloud environment using a CLI tool. No
additional testing of the image is done; it is not booted.