The GCP bucket to use can now be configured in the worker configuration.
Make the `Bucket` field optional in the Cloud API when uploading an image to GCP.
Adjust the Cloud API test case to configure the GCP bucket on the worker
instead of providing it in the API request.
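A sketch of what the worker configuration might then look like (the file path and the `bucket` key are assumptions based on the existing worker configuration, not taken verbatim from this change):
```
# hypothetical sketch of the worker configuration; key names are assumptions
sudo tee /etc/osbuild-worker/osbuild-worker.toml > /dev/null <<'EOF'
[gcp]
credentials = "/etc/osbuild-worker/gcp-credentials.json"
bucket = "example-worker-bucket"
EOF
```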
There was a missing `overrides` property in the `azure-rhui` request
definition, which resulted in:
```
$ ./tools/test-case-generators/generate-test-cases --distro rhel-90
--arch x86_64 --image-types azure-rhui --store /dev/null --output
/home/thozza/devel/osbuild-composer/test/data/manifests
--keep-image-info
Traceback (most recent call last):
File "/home/thozza/devel/osbuild-composer/./tools/test-case-generators/generate-test-cases", line 176, in <module>
main(args.distro, args.arch, args.image_types, args.keep_image_info, args.store, args.output)
File "/home/thozza/devel/osbuild-composer/./tools/test-case-generators/generate-test-cases", line 153, in main
if distro in test_case_request["overrides"]:
KeyError: 'overrides'
```
Instead of keeping the loop device of the base image and then opening
each partition as a loop device, remove the original loop device of the
base image and then create a loop device for each partition from the
file itself using the partition offsets.
The open_image() function is renamed to convert_image() and now only
handles converting qcow2 files to raw files if necessary.
The loop_open() context manager is now entered in analyse_image() instead,
so that the base loop device can be closed without removing the converted
image.
This fixes the following issue with LVM partitions:
When the same LVM partition UUID appears on two devices (e.g., /dev/loop0p4
and /dev/loop1), the 'vgchange -ay' command fails with the following
error:
Cannot activate LVs in VG rootvg while PVs appear on duplicate
devices.
This happens when we open the LVM partition as a separate loop device,
which we do for all partitions that we want to inspect.
NB: It's possible to restrict the vgchange command to a specific device
with --devices, but this isn't available in older versions of lvm2 (it
was introduced in 2.03.11).
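For illustration, a minimal sketch of the per-partition approach using util-linux `losetup` (the values are placeholders; the actual implementation derives the offsets from the partition table):
```
# example values; the real code reads them from the partition table,
# e.g. via sfdisk --json, which reports them in 512-byte sectors
START_SECTOR=2048
SIZE_SECTORS=204800

# create an independent loop device for the partition directly from the
# image file, instead of partitioning a base loop device
losetup --find --show \
  --offset $((START_SECTOR * 512)) \
  --sizelimit $((SIZE_SECTORS * 512)) \
  disk.raw
```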
Make Ansible playbooks arch-agnostic.
Extract the embedded bash script into a separate file with parameters.
Update the Packer template to support aarch64.
Convert parts of the bash script to Python code that can start multi-arch instances to build RPMs.
Extend the implementation of the mock OpenID server to take the `grant_type`
into consideration for the `/token` endpoint.
In addition to the previously supported `refresh_token`, the
implementation now also supports `client_credentials`.
This is necessary to make it possible to use the mock server in
the `koji-osbuild` CI, because the builder plugin uses
`client_credentials` to get an access token.
The implementation behaves in the following way:
- For `refresh_token` grant type, it takes the `refresh_token` value
from the request and adds it to the `rh-org-id` field in the custom
claim, which is part of the returned token.
- For `client_credentials` grant type, it takes the `client_secret`
value from the request and adds it to the `rh-org-id` field in the
custom claim, which is part of the returned token.
Requests without a supported `grant_type` are rejected.
Modify affected test cases to specify `grant_type` when fetching a new
access token.
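For example, the two supported flows could be exercised like this (host, port, and field values are hypothetical):
```
# refresh_token flow: the refresh_token value ends up in the rh-org-id claim
curl -s http://localhost:8081/token \
  -d grant_type=refresh_token \
  -d refresh_token=42

# client_credentials flow: the client_secret value ends up in the rh-org-id claim
curl -s http://localhost:8081/token \
  -d grant_type=client_credentials \
  -d client_id=builder \
  -d client_secret=42

# any other (or missing) grant_type is rejected
curl -s http://localhost:8081/token -d grant_type=password
```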
Do not start the local worker (mask the unit) or the Weldr API socket when
provisioning the SUT with the TLS client certificate authentication method.
This method is used only in the Service scenario, so starting these
units and sockets did not reflect the intended deployment (see the sketch
below).
Modify `api.sh` to not rely on the local worker.
Modify `base_tests.sh` to provision the SUT with TLS for
`osbuild-auth-tests`, while provisioning the SUT with no authentication
method for the rest of the test cases.
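In effect, provisioning with the TLS method now does something along these lines (the unit names are assumptions based on osbuild-composer's systemd units; the exact commands in the script may differ):
```
# mask the local worker so it cannot be started, and keep the Weldr API
# socket down; only the Cloud API and remote workers remain in use
sudo systemctl mask --now osbuild-local-worker.socket
sudo systemctl disable --now osbuild-composer.socket
```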
`tools/provision.sh` provisions the SUT always in the same way for
both the Service scenario and the on-premise scenario. While this is
not causing any issues, it does not realistically represent how we
expect osbuild-composer and the worker to be used in these scenarios.
The script now supports the following authentication options:
- `none`
- Intended for the on-premise scenario with Weldr API.
- NO certificates are generated.
- NO osbuild-composer configuration file is created.
- NO osbuild-worker configuration file is created. This means that no
cloud provider credentials are configured directly in the worker.
- Only the local worker is started and used.
- Only the Weldr API socket is started.
- Appropriate repository definitions are copied to
`/etc/osbuild-composer/repositories/`.
- `jwt`
- Intended for the Service scenario with Cloud API.
- Should be the only method supported in the Service scenario in the
future.
- Certificates are generated and copied to `/etc/osbuild-composer`.
- osbuild-composer configuration file is created and configured for
JWT authentication.
- osbuild-worker configuration file is created, configured for JWT
authentication and with appropriate cloud provider credentials.
- Local worker unit is masked. Only the remote worker is used (the
socket is started and one remote-worker instance is created).
- Only the Cloud API socket is started (Weldr API socket is stopped).
- NO repository definitions are copied to
`/etc/osbuild-composer/repositories/`.
- `tls`
- Intended for the Service scenario with Cloud API.
- Should eventually go away.
- Certificates are generated and copied to `/etc/osbuild-composer`.
- osbuild-composer configuration file is created and configured for
TLS client cert authentication.
- osbuild-worker configuration file is created, configured for TLS
authentication and with appropriate cloud provider credentials.
- Services and sockets are started as they were originally:
- Both local and remote worker sockets are started.
- Both Weldr and Cloud API sockets are started.
- Only the local worker unit will be started automatically.
- NO repository definitions are copied to
`/etc/osbuild-composer/repositories/`.
Move some code related to using JWT tokens from the `multi-tenancy.sh`
test case to `test/cases/api/common/common.sh`, `tools/provision.sh`
and `tools/run-mock-auth-servers.sh`. Move the composer and worker
configuration from the test to new testing configuration files.
`tools/provision.sh` now accepts an optional argument specifying the
authentication method to use with the provisioned composer and workers.
Valid values are `tls` and `jwt`. If no argument is specified, the `tls`
option is used and the script defaults to its previous behavior.
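Usage, as described above:
```
tools/provision.sh        # no argument: defaults to tls, the previous behavior
tools/provision.sh tls
tools/provision.sh jwt
```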
The provision tool was calling the Weldr API using a CLI client to do
a basic verification of the provisioned software. This is, however,
neither practical nor needed. Eventually, we may want to not enable the
Weldr API socket when testing scenarios related to the Service, to make
it more realistic. Another reason not to do it is that the test cases
which use this script to provision the software do the actual
verification, so this just duplicates it.
Extend the `tools/koji-compose.py` script to also allow testing the
upload to the cloud, in addition to what it currently supports.
If only the `DISTRO` and `ARCH` arguments are passed to the script, it
submits a new Koji compose with two image requests, as it always did.
If `CLOUD_TARGET` and `IMAGE_TYPE` arguments are provided in addition
to `DISTRO` and `ARCH`, the script submits a new Koji compose with
a single image request, with upload options set so that the image is
uploaded to the cloud.
Supported cloud targets are:
- `aws`
- `azure`
- `gcp`
The image types are those accepted by the Cloud API. The script does
not check whether the provided combination of cloud target and image
type is valid; it submits whatever it gets to composer.
Modify the `tools/koji-compose.py` script to print all log messages to
STDERR and to print only the Koji compose ID to STDOUT. This way, the
caller of the script can easily get the ID of the compose created by the
script and use it later.
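A sketch of both modes (the distro, architecture, cloud target, and image type values are illustrative):
```
# two image requests, as before
tools/koji-compose.py rhel-90 x86_64

# single image request uploaded to the cloud; log messages go to STDERR,
# so STDOUT contains only the compose ID
COMPOSE_ID=$(tools/koji-compose.py rhel-90 x86_64 aws ami)
echo "${COMPOSE_ID}"
```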
Add support for reporting the installed container images in an image.
NB: this does not use `podman` but reads the overlay storage
directly, and therefore currently does not take additional image
locations or different storage drivers into account. For now this
is not a problem, since we don't support any of that.
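For context, reading the overlay storage directly amounts to something like this (the path is the containers/storage default; the report code itself may differ):
```
# image metadata for the overlay driver lives in images.json
sudo jq -r '.[] | .id + " " + (.names // [] | join(","))' \
  /var/lib/containers/storage/overlay-images/images.json
```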
Fedora 34 is EOL, let's remove all traces of it, including:
- distro definition
- repositories (and the test ones)
- test manifests
- special package set rules
- hacks from the spec file
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We want to be able to safely gather any artifacts without worrying about
possible secrets leaking. Every artifact that we want to upload now has
to be placed in /tmp/artifacts, which is then uploaded to S3 by the
executor; a link to the artifacts is provided in the logs. Only people
with access to our AWS account can see them.
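In a test case, that means, as a sketch (the file name is hypothetical):
```
# anything placed here is uploaded to S3 by the executor
mkdir -p /tmp/artifacts
cp composer-journal.log /tmp/artifacts/
```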
Added unversioned (el8, no minor version) repositories for RHEL 8.4
that provide packages for building the ec2 and azure-rhui image types.
Added new repo snapshots to RHEL 8.4: ha, sap, and saphana
With the merging of 8.4 into the main rhel8 package, the name
'rhel-edge-commit' is no longer the primary name for the image type.
More generally, the 'rhel-' prefix doesn't appear in the main name for
any image type anymore.
Let's stay updated!
Also, let's remove 8.4 and 8.5 from the Schutzfile; I strongly believe
they're not used anywhere.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add a plain `rhel-8` alias as the default distribution name and version
for the `rhel8` package. The `rhel-86` distro is still available via
the NewRHEL86() constructor. These two distributions are identical.
Repositories
------------
The rhel-8 repositories (repositories/rhel-8.json) are now set to the
CDN repositories with no minor version:
https://cdn.redhat.com/content/dist/rhel8/8/...
The rhel-8 test repositories (test/data/repositories/rhel-8.json) were
already set to the plain `8` repositories. The Google repos have been
added.
The test case generator repositories used for `rhel-8` are the same
rpmrepo snapshots as for rhel-86.
Add a container image type that is based on the existing Fedora
container image. There is a delta in terms of the configuration
because osbuild does not yet provide all the necessary means,
but the package set is already very close.
RPM Spec
--------
Remove all Go dependencies
Add Start and End marker comments for bundling information
Add '-k' to goprep to preserve the vendor directory
tools
-----
Add a script to update the RPM spec file, generating the bundling
indication lines based on vendor/modules.txt
Packit
------
Run the new script as a post-upstream-clone hook
Makefile
--------
Run the new script on the generated spec file before generating the RPM
mockbuild.sh
------------
Run the new script before creating the RPM
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.