We have recently added testing on aarch64, but not all of the tests are
supported on that architecture. Add specific rules for the 'all' and
'x86_64' arches to simplify which test jobs run on which arches.
Some scripts skip the test if it's not supported for that
distro version. Disable such jobs in gitlab-ci.yml so we don't waste CI
resources.
To disable them, we use `rules` on each job with a regex pattern:
`=~` (pattern matches) acts as a WHITELIST and `!~` (pattern does not
match) as a BLACKLIST.
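A minimal sketch of both kinds of rule in gitlab-ci.yml (the job names, the $DISTRO_CODE variable, and the patterns are illustrative, not the real ones):

```yaml
aws-image-test:
  rules:
    # WHITELIST: run only on distros that support the test
    - if: '$DISTRO_CODE =~ /^rhel-8/'

vmware-image-test:
  rules:
    # BLACKLIST: run everywhere except distros where it's unsupported
    - if: '$DISTRO_CODE !~ /^fedora/'
```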
This test used to spawn two VMs at the same time, which requires more
memory than the OpenStack CI medium runner can provide. We want to be
using only medium runners, so this change is necessary to allow that.
There are several reasons a CI job can fail that are not test failures:
infra issues, OpenStack issues, and other random issues. Restarting once
in case of failure should reduce the amount of time people spend
investigating these test-unrelated failures. Also add
`interruptible: true` to the init job to make interruption actually work
for the rest of the jobs.
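In gitlab-ci.yml terms this amounts to something like the following sketch (job names are illustrative; `retry` and `interruptible` are standard GitLab keywords):

```yaml
init:
  # a newer pipeline can only auto-cancel a running one if the jobs that
  # already started are interruptible; init runs first, so it needs this
  # for interruption to work for the rest of the jobs
  interruptible: true

integration-test:
  # retry once to absorb infra/OpenStack flakes that aren't test failures
  retry: 1
```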
The execution time seems to be 10% longer at worst, and this will allow
us to safely increase the number of concurrent OpenStack jobs by 75%
(from 40 to 70).
Add a new cloud API test that will build an edge-container,
upload it to the GitLab CI registry, fetch it from there,
run it, and verify that the OSTree commit contained in it
is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
Test the functionality only on RHEL-8.6, since this is the version that
Brew workers use. Test only RHUI images, because these are the ones that
will be used with this functionality.
Now that we have enabled container embedding on RHEL 8, let's
also test it there.
We also pin osbuild for Fedora and RHEL/CS 9 to be able to use the
new `org.osbuild.containers.storage.conf` stage.
Add a new test case that embeds an existing container, stored in our
GitLab CI registry, into a qcow2 image. It uses `image-info` to
verify that the container, with the expected ID, is indeed embedded
in the resulting image.
We no longer use it, so let's remove it. If you are wondering what to use
instead, use the Cloud API. It supports everything that the Koji API
supported, and more.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We want to be able to safely gather any artifacts without worrying about
possible secrets leaking. Every artifact that we want to upload now has
to be placed in /tmp/artifacts, which is then uploaded to S3 by the
executor; a link to the artifacts is provided in the logs. Only people
with access to our AWS account can see them.
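For an individual job this just means copying anything worth keeping into that directory, roughly like this (the job name and log path are illustrative):

```yaml
image-tests:
  after_script:
    # everything placed in /tmp/artifacts is uploaded to S3 by the executor
    - mkdir -p /tmp/artifacts
    - cp /var/log/osbuild-composer/*.log /tmp/artifacts/ || true
```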
We stopped testing on RHEL 8.4 because it wasn't changing, but now it
will be (or might be), since it lives inside the common rhel8 package.
Testing the distro ensures we don't break it. RHEL 8.4 is still
supported as EUS.
We will soon change the distro definition to specifically build 8.4 EUS.
Pin osbuild version for RHEL 8.4.
Change the ostree test to support 8.4 (and not 8.5).
We only need one runner, and it should use the internal network for
access to all repositories.
Set a rule so the job doesn't run on 'main' (where it makes no sense).
Set the git depth to 500:
We need a long history in order to find the merge-base between the PR
and 'main'. It's unclear whether there's a straightforward way to find
the depth of the PR to limit the clone depth accurately. 500 should be
enough for any PR (I'd hate to see a PR that makes this statement
false).
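A sketch of those two settings (the job name is hypothetical; GIT_DEPTH and CI_COMMIT_BRANCH are standard GitLab CI variables):

```yaml
internal-check:
  variables:
    # deep clone so we can find the merge-base between the PR and 'main'
    GIT_DEPTH: "500"
  rules:
    # don't run on 'main' itself
    - if: '$CI_COMMIT_BRANCH != "main"'
```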
The internal composer instance still uses kojiapi for Brew builds,
instead of the cloudapi. Keep testing Koji builds via both APIs for now
to ensure that everything works.
Use case
--------
If the Endpoint is not set and the Region is - upload to AWS S3
If both the Endpoint and the Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (using its configuration)
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
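Schematically, the new job could look like this (the job name, stage, and api.sh arguments are assumptions based on the description above):

```yaml
api-generic-s3:
  stage: test
  script:
    # exercise the Generic S3 upload path with an edge-commit image
    - /usr/libexec/tests/osbuild-composer/api.sh edge-commit generic.s3
```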
The script tested whether the test case generation script would still
run normally when osbuild-dnf-json.service was stopped.
This is no longer necessary.