Fail the test if manifest generation fails on the PR HEAD, but don't
fail if generation on main fails.
This can happen when something breaks on main (the generator, a
repository, an image definition, etc.) and the PR is meant to fix it.
Extend the `koji.sh` test case to also test uploading to the cloud, in
addition to what it already covers (building multiple images in one
Koji compose request).
The script now reuses some common functions from the `api.sh` test
case. Once the Koji compose succeeds, the script verifies that the image
is present in the appropriate cloud environment using a CLI tool. No
additional testing of the image is done; it is not booted.
Add a new test case that embeds a container stored in our GitLab CI
registry into a qcow2 image. It uses `image-info` to verify that the
container, with the expected ID, is indeed embedded in the resulting
image.
This test compiles `gen-manifests` via `go run` and thus needs the
build requirements for the source. Instead of manually installing the
Go toolchain, use the `dnf build-dep` command on the spec file so we
pick up current and future build dependencies.
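A minimal sketch of the invocation, assuming the spec file sits at the
repository root:

```bash
# Install everything the spec file declares as a build dependency,
# including the Go toolchain (the spec file path is an assumption).
sudo dnf install -y dnf-plugins-core
sudo dnf build-dep -y osbuild-composer.spec
```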
We want to be able to safely gather any artifacts without worrying
about possible secrets leaking. Every artifact that we want to upload
now has to be placed in /tmp/artifacts, which is then uploaded to S3 by
the executor; a link to the artifacts is provided in the logs. Only
people with access to our AWS account can see them.
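For example (the file name is only illustrative):

```bash
# Anything copied here is uploaded to S3 by the executor after the job.
mkdir -p /tmp/artifacts
cp osbuild.log /tmp/artifacts/
```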
We stopped testing on RHEL 8.4 because it wasn't changing, but now it
will (or might) change, since it lives inside the common rhel8 package.
Testing the distro ensures we don't break it. RHEL 8.4 is still
supported as EUS.
We will soon change the distro definition to specifically build 8.4 EUS.
Pin the osbuild version for RHEL 8.4.
Change the ostree test to support 8.4 (and not 8.5).
Manifest diffs can sometimes get large and putting them in the log makes
life harder for everyone.
Save them in a single file in the job artifacts instead.
Update the comment left by Schutzbot on the PR to mention the artifacts.
- Fixed shellcheck errors
- Moved checkEnv from common to individual tests
- Fixed package install section in spec file:
Globs which include a directory fail on el-like distros.
- Use the gcloud CLI to SSH (see the sketch after this list)
- (re)Introduce generic S3 tests
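A sketch of the gcloud-based SSH call; the instance name, zone, and
remote command are placeholders:

```bash
# SSH into the test instance through the gcloud CLI, which handles
# key distribution for us (all values are placeholders).
gcloud compute ssh "$INSTANCE_NAME" \
    --zone="$GCP_ZONE" \
    --command="systemctl is-system-running --wait"
```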
Each cloud now has its own file that's sourced on-demand by the main api.sh
script. The main goal of this commit is to reduce the amount of clutter in
api.sh. I personally find 1300 lines of bash overwhelming, and this is
a reasonable start to cleaning things up.
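A minimal sketch of the on-demand sourcing, assuming one helper file per
cloud (the paths and variable name are assumptions):

```bash
# Pull in only the helpers needed for the requested cloud target.
case "${CLOUD_PROVIDER}" in
    "aws") source /usr/libexec/tests/osbuild-composer/api/aws.sh ;;
    "gcp") source /usr/libexec/tests/osbuild-composer/api/gcp.sh ;;
esac
```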
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The script isn't supposed to fail when the manifests differ.
Initialise `err` to 0 and assign it the exit code of the `diff` call if
it returns with an error.
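The pattern looks roughly like this (file names are placeholders):

```bash
# Don't let `set -e` kill the script on a non-zero diff exit code;
# record the code instead and decide later what to do with it.
err=0
diff -r manifests-main manifests-pr > manifest-diff.txt || err=$?
```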
The script runs the gen-manifests command first on the PR head and then
on the merge-base with the PR's base branch (typically 'main') and
checks for any differences. It creates a review comment on the PR on
GitHub if any changes are detected.
The message is posted as a simple COMMENT type review to inform the
author and reviewers that changes exist.
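A sketch of posting such a review through the GitHub REST API; the
owner, repo, PR number, and message body are placeholders:

```bash
# Create a non-blocking COMMENT review on the pull request.
curl -X POST \
    -H "Authorization: token ${GITHUB_TOKEN}" \
    "https://api.github.com/repos/${OWNER}/${REPO}/pulls/${PR_NUMBER}/reviews" \
    -d '{"event": "COMMENT", "body": "Manifests changed. See the job artifacts for the full diff."}'
```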
The script doesn't fail if there's a diff. CI shouldn't fail if changes
are detected, since they can be intentional. The job fails only if
something goes wrong with the script execution itself (manifest
generation, comment posting, etc.).
The script exits immediately if not run from a PR.
The gen-manifests run is silenced with `> /dev/null`. In the future,
this should be handled by flags on the command itself to control the
output verbosity.
The gen-manifests command is run with 50 workers. Testing with 100
seemed to stall execution, likely because of the resources on the
worker. We can experiment with this value more in the future.
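Roughly (the flag name is an assumption about the gen-manifests CLI):

```bash
# Generate manifests with a bounded number of parallel workers and
# silence the per-manifest output (the --workers flag is an assumption).
go run ./cmd/gen-manifests --workers 50 > /dev/null
```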
`osbuild-mock-openid-provider`'s `/token` endpoint expects URL-encoded
values in the POST request body. Use the same values as those that would
be used by the worker when refreshing a token.
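For example, with curl (the port and parameter names are assumptions
based on a typical refresh-token flow):

```bash
# POST URL-encoded form values, the same shape the worker sends when
# refreshing its token (all values are placeholders).
curl -X POST \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=${CLIENT_ID}" \
    --data-urlencode "refresh_token=${REFRESH_TOKEN}" \
    http://localhost:8081/token
```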
Use case
--------
If Endpoint is not set and Region is - upload to AWS S3
If both Endpoint and Region are set - upload to Generic S3 via the Weldr API
If neither Endpoint nor Region is set - upload to Generic S3 via the Composer API (using the configuration)
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from the Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters except the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
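A sketch of the resulting worker configuration, written here via a
heredoc; the section and key names are assumptions derived from the
fields listed above:

```bash
# Configure the worker for Generic S3 uploads (key names and values
# are assumptions/placeholders).
cat >> /etc/osbuild-worker/osbuild-worker.toml <<'EOF'
[generic_s3]
endpoint = "https://s3.example.com"
region = "us-east-1"
bucket = "image-uploads"
ca_bundle = "/etc/osbuild-worker/s3-ca.pem"
skip_ssl_verification = false
EOF
```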
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
Generalize some of the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
The script tested whether the test case generation script would run
normally if the osbuild-dnf-json.service was stopped.
This is no longer necessary.
This value is set in the worker config. In the future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
API
---
Allow the user to pass the public CA certificate or skip the verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets the endpoint and optionally overrides the CA Bundle or skips the SSL certificate verification
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
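A hedged usage sketch; only the ca-bundle and skip-ssl-verification
flags come from this change, the remaining flags and values are
assumptions:

```bash
# Upload an image to a Generic S3 service over HTTPS with a custom CA
# (all flag values are placeholders).
osbuild-upload-generic-s3 \
    --endpoint https://s3.example.com \
    --region us-east-1 \
    --bucket image-uploads \
    --filename image.qcow2 \
    --ca-bundle /etc/osbuild-worker/s3-ca.pem
```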
tests
-----
Split the tests into HTTP, HTTPS with a certificate, and HTTPS with the certificate check skipped
Create a new base test for S3 over HTTPS, covering both secure and insecure connections
Move the generic S3 test to tools so it can be reused for secure and insecure connections
All S3 tests now use the AWS CLI tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
When a test script fails in CI, it's often difficult to pinpoint the
exact line in the log where the script failed and the cleanup() function
(trapped on EXIT) begins.
Adding a prominent line (with greenprint where available) at the start
of the cleanup function will make reading logs of failed jobs a lot
easier.
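For example:

```bash
cleanup() {
    # Make the start of cleanup stand out in the job log.
    greenprint "== Script stopped or finished - cleaning up =="
    # ... release VMs, delete temporary files, etc.
}
trap cleanup EXIT
```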
Something odd is happening with the package check and it keeps failing
mysteriously even though the package is clearly in the list.
Changing the verification method to extract `passwd` and `packages` from
the image info file into separate files and grepping those seems to
work.
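Roughly like this; the jq paths are assumptions about the image-info
layout, and the grepped names are placeholders:

```bash
# Pull the interesting sections out of the image-info JSON into flat
# files, then grep those instead of matching against the whole report.
jq -r '.passwd[]' image-info.json > passwd.info
jq -r '.packages[]' image-info.json > packages.info
grep -q "^redhat:" passwd.info
grep -q "sssd" packages.info
```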
Reasons for this change:
- Mixed versions of composer and worker aren't a realistic use case for
the weldr API (on prem), but we do run mixed versions in hosted IB, so
this test is closer to real-world scenarios.
- The cloud API runs depsolve jobs in the worker, whereas the weldr API
runs them in composer. By testing the cloud API we also test the
backwards compatibility of the depsolve job.
The change requires osbuild-worker v51 or newer to be able to handle
depsolve and manifest jobs on the worker as well as depsolve chains.
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with 'us' will be used.
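A sketch of the region selection; the real script may also filter on
the available quota:

```bash
# List all GCE regions whose name starts with $GCP_REGION and pick one
# at random (quota filtering omitted in this sketch).
REGION=$(gcloud compute regions list \
    --filter="name~^${GCP_REGION}" \
    --format="value(name)" | shuf -n 1)
```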