The script isn't supposed to fail when the manifests differ.
Initialise `err` to 0 and assign it the exit code of the `diff` call if
it returns with an error.
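The script itself is shell, but the convention it relies on is diff's
exit status: 0 means no differences, 1 means differences, anything above
1 is a real error. Purely as illustration, the same pattern in Go
(directory names are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Compare the two generated manifest trees; paths are made up.
	cmd := exec.Command("diff", "-r", "manifests-head", "manifests-base")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("manifests are identical")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// diff exits with 1 when the inputs differ; not a failure here.
		fmt.Println("manifests differ; post a PR comment, do not fail")
	default:
		// Exit codes above 1, or failing to run diff at all, are real errors.
		panic(err)
	}
}
```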
The script runs the gen-manifests command first on the PR head and then
on the merge-base with the PR's base branch (typically 'main') and
checks for any differences. It creates a review comment on the PR on
GitHub if any changes are detected.
The message is posted as a simple COMMENT-type review to inform the
author and reviewers that changes exist.
The script doesn't fail if there's a diff. CI shouldn't fail if changes
are detected, since they can be intentional. The job fails only if
something goes wrong with the script execution itself (manifest
generation, comment posting, etc.).
The script exits immediately if not run from a PR.
The gen-manifests run is silenced with `> /dev/null`. In the future,
this should be handled by flags on the command itself that control the
output verbosity.
The gen-manifests command is run with 50 workers. Testing with 100
seemed to make the execution stall, likely because of limited resources
on the worker. We can experiment with this value more in the future.
`osbuild-mock-openid-provider`'s `/token` endpoint expects URL-encoded
values in the POST request body. Use the same values as those that would
be used by the worker when refreshing a token.
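As an illustration, a hedged Go sketch of such a request; the grant
type, client ID, and URL are assumptions rather than the exact values
used by the test:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Hypothetical values; the real test mirrors the worker's token refresh.
	form := url.Values{
		"grant_type": {"client_credentials"},
		"client_id":  {"test-client"},
	}

	// http.PostForm URL-encodes the values and sets
	// Content-Type: application/x-www-form-urlencoded, which is what the
	// mock provider's /token endpoint expects.
	resp, err := http.PostForm("http://localhost:8080/token", form)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```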
Use case
--------
If the Endpoint is not set but the Region is - upload to AWS S3
If both the Endpoint and the Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (use configuration) - see the sketch below
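A minimal Go sketch of that decision table; the function name and return
strings are illustrative only:

```go
package main

import "fmt"

// chooseUpload mirrors the use cases above; names are hypothetical.
func chooseUpload(endpoint, region string) string {
	switch {
	case endpoint == "" && region != "":
		return "upload to AWS S3"
	case endpoint != "" && region != "":
		return "upload to Generic S3 via the Weldr API"
	case endpoint == "" && region == "":
		return "upload to Generic S3 via the Composer API (use configuration)"
	default:
		return "endpoint without region is not one of the use cases"
	}
}

func main() {
	fmt.Println(chooseUpload("", "us-east-1")) // upload to AWS S3
}
```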
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
Weldr API for Generic S3 requires that all connection parameters except the credentials be passed in the API call
Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones - see the sketch below
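A hedged sketch of the consolidated structure; these field names are
assumptions based on the description above, not the actual code:

```go
package target

// AWSS3TargetOptions sketches the consolidated options: the last three
// fields used to live in separate Generic S3 structures.
type AWSS3TargetOptions struct {
	Region string `json:"region"`
	Bucket string `json:"bucket"`
	Key    string `json:"key"`

	// Generic S3 additions; left empty for plain AWS S3.
	Endpoint            string `json:"endpoint,omitempty"`
	CABundle            string `json:"ca_bundle,omitempty"`
	SkipSSLVerification bool   `json:"skip_ssl_verification,omitempty"`
}
```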
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings (sketched below)
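A simplified sketch of keeping the provider name stable while sharing
the settings type (the types and function here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// awsS3UploadSettings is shared by both provider names; simplified here.
type awsS3UploadSettings struct {
	Region   string `json:"region"`
	Bucket   string `json:"bucket"`
	Endpoint string `json:"endpoint,omitempty"`
}

func unmarshalUploadSettings(provider string, raw json.RawMessage) (interface{}, error) {
	switch provider {
	case "aws.s3", "generic.s3":
		// "generic.s3" is kept for API compatibility but decodes into
		// the same structure as "aws.s3".
		var s awsS3UploadSettings
		if err := json.Unmarshal(raw, &s); err != nil {
			return nil, err
		}
		return &s, nil
	default:
		return nil, fmt.Errorf("unknown provider: %s", provider)
	}
}

func main() {
	s, err := unmarshalUploadSettings("generic.s3",
		[]byte(`{"bucket":"b","endpoint":"http://s3.local"}`))
	fmt.Printf("%+v %v\n", s, err)
}
```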
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
Generalize some of the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
The script used to test whether the test case generation script would
run normally if the osbuild-dnf-json.service was stopped.
This is no longer necessary.
This value is set in the worker config. In the future it might also be
passed through the API to upload into target accounts, but it should
never be set in composer.
API
---
Allow the user to pass the CA public certificate or skip the TLS verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets the endpoint and optionally overrides the CA bundle or skips the SSL certificate verification - see the sketch below
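A hedged sketch of such a constructor using aws-sdk-go v1; the signature
and names are illustrative, not the actual awscloud code:

```go
package awscloud

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newAwsFromCredsWithEndpoint is a sketch: the endpoint is mandatory,
// while caBundle and skipSSLVerification optionally adjust the TLS setup.
func newAwsFromCredsWithEndpoint(creds *credentials.Credentials, region, endpoint, caBundle string, skipSSLVerification bool) (*session.Session, error) {
	transport := &http.Transport{TLSClientConfig: &tls.Config{}}

	if skipSSLVerification {
		// Accept any server certificate; meant for test setups only.
		transport.TLSClientConfig.InsecureSkipVerify = true
	} else if caBundle != "" {
		// Trust a custom CA instead of the system pool.
		pem, err := os.ReadFile(caBundle)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("failed to parse CA bundle %q", caBundle)
		}
		transport.TLSClientConfig.RootCAs = pool
	}

	return session.NewSession(&aws.Config{
		Credentials:      creds,
		Region:           aws.String(region),
		Endpoint:         aws.String(endpoint),
		S3ForcePathStyle: aws.Bool(true), // most non-AWS S3 services need path-style
		HTTPClient:       &http.Client{Transport: transport},
	})
}
```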
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into HTTP, HTTPS with a certificate, and HTTPS with the certificate check skipped
Create a new base test for S3 over HTTPS, covering secure and insecure modes
Move the generic S3 test to tools to reuse it for secure and insecure connections
All S3 tests now use the AWS CLI tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
When a test script fails in CI, it's often difficult to pinpoint the
exact line in the log where the script failed and the cleanup() function
(trapped on EXIT) begins.
Adding a prominent line (with greenprint where available) at the start
of the cleanup function will make reading logs of failed jobs a lot
easier.
Something odd is happening with the package check and it keeps failing
mysteriously even though the package is clearly in the list.
Changing the verification method to extract `passwd` and `packages` from
the image info file into separate files and grepping those seems to
work.
Reasons for this change:
- Mixed versions of composer and worker aren't a realistic use case for
the weldr API (on-prem), but we do run mixed versions in hosted IB, so
this test is closer to real-world scenarios.
- The cloud API runs depsolve jobs in the worker, whereas the weldr API
runs them in composer. By testing the cloud API we also test the
backwards compatibility of the depsolve job.
The change requires osbuild-worker v51 or newer to be able to handle
depsolve and manifest jobs on the worker as well as depsolve chains.
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with 'us' will be used.
The purpose of the test is to check that the dnf-json socket can be
started automatically when running the test case generator while the
service or socket isn't enabled/started.
dnf-json will still be used to depsolve the packages and create the
manifest even if the image is not built.
Add support for importing the GCE image into GCP using Weldr API. The
credentials to be used can be specified in the upload settings and will
be then used by the worker to authenticate with GCP.
The GCP target credentials are passed to Weldr API as base64 encoded
content of the GCP credentials JSON file. The reason is that the JSON
file contains many values and its format could change in the future.
This way, the Weldr API does not rely on the credentials file content
format in any way.
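On the worker side the decode step is a one-liner; a minimal sketch
(the function name is hypothetical):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeGCPCredentials turns the base64-encoded upload setting back into
// the raw JSON bytes the GCP SDK expects; the content is never inspected,
// so the credentials file format can change without touching this code.
func decodeGCPCredentials(encoded string) ([]byte, error) {
	return base64.StdEncoding.DecodeString(encoded)
}

func main() {
	// Decodes to {"type":"service_account"}.
	raw, err := decodeGCPCredentials("eyJ0eXBlIjoic2VydmljZV9hY2NvdW50In0=")
	fmt.Println(string(raw), err)
}
```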
Add a new test case for the GCP upload via Weldr and run it in CI.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify the Cloud API test case for GCP to use `gcloud` and GCP guest
tools installed in the image to connect to the VM instance over SSH.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Filter the list of repositories passed in compose request based on the
`image_type_tags` object member. This is the same approach used by the
Weldr API. If the `image_type_tags` does not exist, the repo is added to
the list. If the `image_type_tags` exists, the repo is added to the list
only if the image type name is in the tags array.
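A minimal sketch of that filter, with the types simplified for
illustration:

```go
package main

import "fmt"

type repo struct {
	Name          string
	ImageTypeTags []string
}

// filterRepos keeps repos with no image_type_tags, and tagged repos only
// when the requested image type appears in their tags.
func filterRepos(repos []repo, imageType string) []repo {
	var out []repo
	for _, r := range repos {
		if len(r.ImageTypeTags) == 0 {
			out = append(out, r)
			continue
		}
		for _, t := range r.ImageTypeTags {
			if t == imageType {
				out = append(out, r)
				break
			}
		}
	}
	return out
}

func main() {
	repos := []repo{
		{Name: "baseos"},
		{Name: "edge", ImageTypeTags: []string{"edge-commit"}},
	}
	fmt.Println(filterRepos(repos, "qcow2")) // only "baseos" survives
}
```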
Signed-off-by: Tomas Hozza <thozza@redhat.com>
jobimpl-osbuild
---------------
Add GenericS3Creds to struct
Add method to create AWS with Endpoint for Generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to use the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint to the AWS session
Set s3ForcePathStyle to true if the endpoint was set - see the sketch below
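A hedged sketch of that wiring with aws-sdk-go v1 (the function name is
made up):

```go
package awscloud

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newSessionMaybeEndpoint sketches the endpoint wrapper: without an
// endpoint the config is plain AWS; with one, path-style addressing is
// forced, since many generic S3 services don't support virtual-hosted
// bucket URLs.
func newSessionMaybeEndpoint(region, endpoint string) (*session.Session, error) {
	cfg := aws.Config{Region: aws.String(region)}
	if endpoint != "" {
		cfg.Endpoint = aws.String(endpoint)
		cfg.S3ForcePathStyle = aws.Bool(true)
	}
	return session.NewSession(&cfg)
}
```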
Target
------
Define a new target type for the GenericS3Target and Options
Handle unmarshaling of the target options and result for the Generic S3
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of the providers' upload settings
main
----
Add a section to the main config for the Generic S3 service credentials
If provided, pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
--------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLab CI - Extend the libvirt test case to include the two new tests
The regression test suite has grown considerably and is taking too long
to run with a single wrapper. Split it into individual standalone tests
instead and run them in parallel.
The VMDK image must be in stream-optimized format in order to be
imported to VSphere. osbuild-composer does not produce VMDK as
stream-optimized by default. Instead, the image is converted on the fly
when the build job has been submitted via the Weldr API.
Since we are aiming mainly for the VSphere use case with the VMDK image
in the service, the image should be ready for importing to VSphere.
Implement a temporary workaround for the Cloud API and AWS S3 target to
upload stream-optimized VMDK image.
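The conversion boils down to a qemu-img invocation; a hedged Go sketch
of how such an on-the-fly step might be wired (paths and function name
are illustrative):

```go
package main

import "os/exec"

// toStreamOptimizedVMDK converts an image to the stream-optimized VMDK
// subformat that VSphere import requires.
func toStreamOptimizedVMDK(src, dst string) error {
	cmd := exec.Command("qemu-img", "convert",
		"-O", "vmdk",
		"-o", "subformat=streamOptimized",
		src, dst)
	return cmd.Run()
}

func main() {
	if err := toStreamOptimizedVMDK("disk.vmdk", "disk-stream.vmdk"); err != nil {
		panic(err)
	}
}
```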
Adjust the `api.sh` test case to not convert the VMDK image downloaded
from S3 before importing it to VSphere.
Ensure that the content of the database is not printed to the console
when dumped at the end of the test case. The output is still preserved
as a CI run artifact.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Kill and remove the DB container as part of the test case cleanup.
Without this change, running the test case more than once fails.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the `api.sh` test to verify the VMDK images uploaded to S3 in
VSphere by booting them and configuring them using cloud-init.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Remove commented-out code
Use three different IP addresses for different test scenarios
Move /boot/device-credentials file checking into playbook
Some shell script improvements
This test gets stuck randomly on centos-stream-8 and is making the CI
unreliable. Adding a hard wait limit and destroying the VM afterwards
helps the test get unstuck and continue as expected. See
https://github.com/osbuild/osbuild-composer/issues/2413 for details.