- remove the `custom-repos.sh` integration test
- add a custom repositories check to the `api` tests for supported images
- verify that custom repositories are added to /etc/yum.repos.d
- verify that the GPG key is saved to /etc/pki/rpm-gpg (for inline keys)
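A minimal sketch of the added checks, run over SSH against the booted image ($SSH_USER, $INSTANCE_ADDRESS and the file names are illustrative; the real test derives them from the compose request):

    # Verify the custom repo file and the inline GPG key landed on the image.
    ssh "${SSH_USER}@${INSTANCE_ADDRESS}" 'test -f /etc/yum.repos.d/custom-repo.repo'
    ssh "${SSH_USER}@${INSTANCE_ADDRESS}" 'test -f /etc/pki/rpm-gpg/RPM-GPG-KEY-custom-repo'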
Use the directories and files customization in the compose request for
image types that support it (only the ostree installer and raw image
types do not).
Extend the instance verification to check for the custom directories and
files.
Extend the ostree commit verification to check for the custom
directories and files.
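The instance-side check boils down to something like this (paths, mode and content are illustrative assumptions, not the test's actual values):

    # Check that the custom directory and file exist with the expected
    # mode and content on the booted instance.
    ssh "${SSH_USER}@${INSTANCE_ADDRESS}" 'stat -c %a /etc/custom_dir' | grep -q '755'
    ssh "${SSH_USER}@${INSTANCE_ADDRESS}" 'cat /etc/custom_dir/custom_file.txt' | grep -q 'hello'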
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
If a job is unresponsive, the worker has most likely crashed or been shut
down and the in-progress job has been lost.
Instead of failing these jobs, requeue them up to two times. Once a job has
been lost a third time, it fails. This avoids infinite loops.
This is implemented by extending FinishJob into RequeueOrFinishJob. It takes
the maximum number of requeues as an argument; if that is 0, it behaves the
same as FinishJob used to.
If the maximum number of requeues has not yet been reached, then the running
job is returned to pending state to be picked up again.
Since we're sharing functions between test scripts, move greenprint(),
the most rewritten function in the history of the project, to
shared_lib.sh and source it everywhere.
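For reference, a sketch of the shape this usually takes (exact formatting and the source path are illustrative):

    # shared_lib.sh: print the given message in bold green so it stands
    # out in the CI log.
    greenprint() {
        echo -e "\033[1;32m${1}\033[0m"
    }

    # In each test script (path illustrative):
    source "$(dirname "$0")/shared_lib.sh"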
Add support for creating multiple AMIs from a single compose. It uses the
AWSEC2* jobs to push images to new regions and share them with new
accounts.
The compose it depends upon has to have succeeded.
Extend the implementation of the mock openid server to take the
`grant_type` into consideration for the `/token` endpoint.
In addition to the previously supported `refresh_token`, the
implementation now also supports `client_credentials`.
This is necessary to make it possible to use the mock server in
the `koji-osbuild` CI, because the builder plugin uses
`client_credentials` to get an access token.
The implementation behaves in the following way:
- For `refresh_token` grant type, it takes the `refresh_token` value
from the request and adds it to the `rh-org-id` field in the custom
claim, which is part of the returned token.
- For `client_credentials` grant type, it takes the `client_secret`
value from the request and adds it to the `rh-org-id` field in the
custom claim, which is part of the returned token.
Requests without the supported `grant_type` set are rejected.
Modify affected test cases to specify `grant_type` when fetching a new
access token.
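A sketch of the new flow against the mock server (host/port, client_id and the secret value are illustrative):

    # Fetch a token with the newly supported grant type; the mock server
    # copies client_secret into the rh-org-id claim of the returned token.
    curl --silent \
        --data-urlencode "grant_type=client_credentials" \
        --data-urlencode "client_id=osbuild" \
        --data-urlencode "client_secret=12345" \
        http://localhost:8080/token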
Do not start the local worker (mask the unit) and the Weldr API socket when
provisioning the SUT with the TLS client cert authentication method. This
method is used only in the Service scenario, so starting these
units / sockets did not reflect the intended deployment.
Modify `api.sh` to not rely on the local worker.
Modify `base_tests.sh` to provision the SUT with TLS for
`osbuild-auth-tests`, while provisioning the SUT with no authentication
method for the rest of the test cases.
Add a new cloud API test that builds an edge-container,
uploads it to the GitLab CI registry, fetches it from there,
runs it, and checks that the OSTree commit contained in it
is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
We want to be able to safely gather any artifacts without worrying about
possible secrets leaking. Every artifact that we want to upload
now has to be placed in /tmp/artifacts, which is then uploaded
to S3 by the executor, and a link to the artifacts is provided in the
logs. Only people with access to our AWS account can see them.
- Fixed shellcheck errors
- Moved checkEnv from common to individual tests
- Fixed package install section in spec file:
Globs which include a directory fail on EL-like distros.
- Use gcloud cli to ssh
- (re)Introduce generic s3 tests
Each cloud now has its own file that is sourced on-demand by the main api.sh
script. The main goal of this commit is to reduce the amount of clutter in
api.sh. I personally find 1300 lines of bash overwhelming, and I think this
is a reasonable first step toward cleaning things up.
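The pattern is roughly this (file layout and variable names are illustrative):

    # api.sh: load only the helper file for the cloud under test instead
    # of keeping every provider's functions in one script.
    case "${CLOUD_PROVIDER}" in
        "aws"|"aws.s3") source "${SCRIPT_DIR}/api/aws.sh" ;;
        "gcp")          source "${SCRIPT_DIR}/api/gcp.sh" ;;
        "azure")        source "${SCRIPT_DIR}/api/azure.sh" ;;
    esac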
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
`osbuild-mock-openid-provider`'s `/token` endpoint expects URL-encoded
values in the POST request body. Use the same values as those that would
be used by the worker when refreshing a token.
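For example, with curl (host/port illustrative), --data-urlencode produces exactly the URL-encoded form body the endpoint expects:

    # Mirror the worker's refresh request.
    curl --data-urlencode "grant_type=refresh_token" \
        --data-urlencode "refresh_token=${REFRESH_TOKEN}" \
        http://localhost:8080/token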
Use case
--------
If Endpoint is not set and Region is - upload to AWS S3
If both the Endpoint and Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (using the configuration)
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters except the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
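For illustration, a Weldr upload profile for the kept provider name might look like this (the settings key names are assumptions based on the awsS3UploadSettings it unmarshals into):

    cat > generic-s3.toml <<'EOF'
    provider = "generic.s3"

    [settings]
    endpoint = "https://s3.example.com"
    region = "us-east-1"
    bucket = "images"
    accessKeyID = "..."
    secretAccessKey = "..."
    EOF
    composer-cli compose start example-bp qcow2 image-key generic-s3.toml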
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
This value is set in the worker config. In future it might also be
passed through the api to upload into target accounts, but it should
never be set in composer.
When a test script fails in CI, it's often difficult to pinpoint the
exact line in the log where the script failed and the cleanup() function
(trapped on EXIT) begins.
Adding a prominent line (with greenprint where available) at the start
of the cleanup function will make reading logs of failed jobs a lot
easier.
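The pattern is simply (message wording illustrative):

    # Print a prominent marker at the start of cleanup so failed jobs are
    # easy to read.
    cleanup() {
        greenprint "== Script execution stopped or finished - Cleaning up =="
        # ... actual teardown steps ...
    }
    trap cleanup EXIT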
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with 'us' will be used.
Modify the Cloud API test case for GCP to use `gcloud` and GCP guest
tools installed in the image to connect to the VM instance over SSH.
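A sketch of both changes (the per-region quota check is omitted; the filter expression and variable names are illustrative):

    # Pick a random region whose name starts with $GCP_REGION: "us" matches
    # us-west1, us-east1, ...; an exact name like "us-west1" matches only
    # itself.
    CHOSEN_REGION=$(gcloud compute regions list \
        --filter="name~^${GCP_REGION}" \
        --format="value(name)" | shuf -n 1)

    # Connect via gcloud and the guest tools instead of a raw ssh call.
    gcloud compute ssh "${INSTANCE_NAME}" --zone="${GCP_ZONE}" --command=true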
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Filter the list of repositories passed in compose request based on the
`image_type_tags` object member. This is the same approach used by the
Weldr API. If the `image_type_tags` does not exist, the repo is added to
the list. If the `image_type_tags` exists, the repo is added to the list
only if the image type name is in the tags array.
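In jq terms, given an array of repo objects and an image type name, the filter is roughly (file name illustrative):

    # Keep repos with no image_type_tags, plus repos whose tags contain
    # the image type being built.
    jq --arg it "edge-commit" \
       'map(select(.image_type_tags == null or (.image_type_tags | index($it))))' \
       repos.json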
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The VMDK image must be in stream-optimized format in order to be
imported to VSphere. osbuild-composer does not produce stream-optimized
VMDK by default; instead, the image is converted on the fly when the
build job is submitted via the Weldr API.
Since we are aiming mainly for the VSphere use case with the VMDK image
in the service, the image should be ready for importing to VSphere.
Implement a temporary workaround for the Cloud API and AWS S3 target to
upload stream-optimized VMDK image.
Adjust the `api.sh` test case to not convert the VMDK image downloaded
from S3 before importing it to VSphere.
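Previously, the test had to convert the downloaded image itself, roughly like this (file names illustrative); that step is now unnecessary:

    # Convert a monolithic VMDK to the stream-optimized subformat that
    # VSphere expects on import.
    qemu-img convert -O vmdk -o subformat=streamOptimized \
        downloaded.vmdk stream-optimized.vmdk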
Ensure that the content of the database is not printed to the console
when dumped at the end of the test case. The output is still preserved
as a CI run artifact.
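That is, redirect the dump into the artifacts directory instead of stdout (container and database names are illustrative):

    # Keep the dump out of the console log, but preserve it as an artifact.
    sudo podman exec osbuild-composer-db \
        pg_dump -U postgres osbuildcomposer > /tmp/artifacts/db.dump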
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Kill and remove the DB container as part of the test case cleanup.
Without this change, running the test case more than once fails.
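For example (container name illustrative):

    # Tear down the DB container so a second run starts from a clean slate.
    sudo podman kill osbuild-composer-db || true
    sudo podman rm -f osbuild-composer-db || true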
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the `api.sh` test to verify the VMDK images uploaded to S3 in
VSphere, by booting them and configuring them using cloud-init.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
We have to do a small hack to enable edge-commit on Fedora because its name
is different. We could also change this in the image definition, but I want
to iterate quickly on the Fedora Integration MVP and don't want to run into
any conflicts with
https://github.com/osbuild/osbuild-composer/pull/2461
This commit also enables a test for Fedora IoT built through the API.
While enabling the test, I also simplified our decision logic for SSH_USER
and DISTRO.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This will allow us to use the service accounts which work against
identity.api.openshift.com. These are much easier to manage, especially
with the new multi-tenancy, as there's a single page to create/expire
them across an account.
They also have the added benefit of not expiring automatically when
they're not used (unlike offline tokens), and of immediate expiration when
desired.
Previously, the DB was not dumped in case the compose failed. Ensure
that the DB is dumped before the script exits in any case.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Do not create files directly in `/tmp`, but use `$WORKDIR`, which is a
temporary directory for transient files, which gets cleaned up when the
test case finishes. Without this change, running `api.sh` twice fails
the second time.
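The usual pattern (the file name is illustrative):

    # Create a per-run scratch directory and remove it on exit, so a second
    # run never trips over files left behind by the first.
    WORKDIR=$(mktemp -d)
    trap 'rm -rf "${WORKDIR}"' EXIT
    COMPOSE_REQUEST="${WORKDIR}/compose_request.json"  # not /tmp/compose_request.json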
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This commit implements multi-tenancy. A tenant is defined based on a value
from JWT claims. The key of this value must be specified in the configuration
file. This allows us to pick different values when using multiple SSOs.
Let me explain more in depth how this works:
Cloud API gets a new compose request. Firstly, it extracts a tenant name from
JWT claims. The considered claims are configured as an array in
cloud_api.jwt.tenant_provider_fields in composer's config file. The channel
name for all jobs belonging to this compose is created by `"org-" + tenant`.
Why is the channel prefixed by "org-"? To give us options in the future. I can
imagine the request having a channel override. This basically means that
multiple tenants can share a channel. A real use-case for this is multiple
Fedora projects sharing one pool of workers.
Why does this commit add a whole new cloud_api section to the config? Because
the current config is a mess and we should stop adding new stuff into the koji
section. As the Koji API is basically deprecated, we will need to remove it
soon anyway.
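A sketch of the new section (the rh-org-id claim is the one used elsewhere in this series; the exact TOML shape is an assumption):

    # Illustrative osbuild-composer.toml fragment enabling tenant
    # extraction from JWT claims.
    cat >> /etc/osbuild-composer/osbuild-composer.toml <<'EOF'
    [cloud_api.jwt]
    tenant_provider_fields = ["rh-org-id"]
    EOF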
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We (mistakenly) don't enforce a minimum size for /var,
so setting it to 1024 (1 KiB) causes the image build to fail.
CI does not expose this in a helpful way at the moment,
so this is a bit tricky to debug.
Also skip customizations for the AWS.S3 upload type. Not all the
image types with this upload type support filesystem customizations,
and that's expected. We could make a more fine-grained test in
the future, but testing with a couple of targets should be
sufficient.
We no longer test Cloud API on Fedora and Fedora 33 is EOL anyway.
Remove all Fedora 33 related lines from the test case.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
When backed by a DB, composer has no need of a queue directory.
This also addresses "Error moving artifacts for job" logging noise.
Signed-off-by: sanne <sanne.raymaekers@gmail.com>
Script is run with the image type to build as the argument.
The target / cloud service is selected based on the image type
specified. This is how the API actually works now: Only an image type
can be specified.
The script now supports all the blobby image types for testing:
- edge-commit
- edge-container
- edge-installer
- image-installer
- guest-image (qcow2)
- vsphere (vmdk)
These are image types that are uploaded to S3 and provided to the user
as an object to download rather than a VM image on a cloud provider.
To verify the cloud api compose request options for the qcow2 and vmdk
image types, download the object and inspect it using image-info.
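Usage is now simply, e.g. (path as in the repo layout):

    # One image type per run; the script derives the upload target and
    # verification steps from the argument alone.
    ./test/cases/api.sh edge-commit
    ./test/cases/api.sh vsphere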
Checks if postgresql is installed and that user1 and user2 exist in the
passwd file.
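The checks boil down to something like:

    # Fail if postgresql is not installed or either expected user is missing.
    rpm -q postgresql
    getent passwd user1 user2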
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>