In these graphs, p99 isn't very important: if 1% of jobs are slow, that's
fine. The p50 and p95 slices are the important ones, so reorder and
recolor the duration graphs to reflect this.
The interval dictates the granularity of the graphs. As the interval
decreases, spikes and dips become more pronounced. A 28-day interval
doesn't actually show much; reduce the default to 6h, which is a happy
medium.
The scheduled cloud cleaner skips the default storage account of a
resource group, even though these images should get removed. This can
leave images unremoved and forgotten there. Remove the skip condition so
that scc also checks this storage account.
The `api.sh` test case uses a random GCE zone from a random GCE region
whose name starts with the `GCP_REGION` CI environment variable. Since
the used region name is not known to the `cloud-cleaner`, it has to
iterate over all potential GCE regions and their zones. We cannot simply
filter a list of instances by the VM instance name, because any
`instances` API call requires a zone name to be provided.
Add a new internal `cloud/gcp` package method to list existing GCE
regions based on a provided filter.
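A minimal sketch of what such a helper could look like, using the
`google.golang.org/api/compute/v1` client; the receiver type, method
name, and filter expression are assumptions, not the exact upstream code:

```go
package gcp

import (
	"context"

	compute "google.golang.org/api/compute/v1"
)

// GCP is a stand-in for the package's client wrapper.
type GCP struct{}

// ListRegions returns the names of existing GCE regions matching the
// given filter expression, e.g. a name-prefix filter.
func (g *GCP) ListRegions(ctx context.Context, project, filter string) ([]string, error) {
	service, err := compute.NewService(ctx)
	if err != nil {
		return nil, err
	}
	var regions []string
	err = service.Regions.List(project).Filter(filter).Pages(ctx,
		func(page *compute.RegionList) error {
			for _, r := range page.Items {
				regions = append(regions, r.Name)
			}
			return nil
		})
	return regions, err
}
```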
We currently use a single GCP Compute region when spinning up VMs using
the imported GCE image. As a result, we are often hitting the
'IN_USE_ADDRESSES' quota limit when there are multiple CI jobs running.
Google does not allow us to increase the quota limit any more.
Change the GCP test cases to use the CI `GCP_REGION` variable to list
all GCE regions with available quota and pick a random one from the
list. The `GCP_REGION` value is used as the region name prefix when
filtering available regions. This means that if you specify an exact GCE
region, such as `us-west1`, you'll always get the same region, but if a
GCP multi-region is used, such as `us`, then a random region prefixed
with `us` will be used.
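A rough sketch of the region-picking logic described above; the function
name is illustrative, but the quota fields come straight from the
`compute.Region` type:

```go
import (
	"fmt"
	"math/rand"
	"strings"

	compute "google.golang.org/api/compute/v1"
)

// pickRegion returns a random region whose name starts with prefix and
// whose IN_USE_ADDRESSES quota still has headroom.
func pickRegion(regions []*compute.Region, prefix string) (string, error) {
	var candidates []string
	for _, r := range regions {
		if !strings.HasPrefix(r.Name, prefix) {
			continue
		}
		for _, q := range r.Quotas {
			if q.Metric == "IN_USE_ADDRESSES" && q.Usage < q.Limit {
				candidates = append(candidates, r.Name)
				break
			}
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no region with prefix %q has free IN_USE_ADDRESSES quota", prefix)
	}
	return candidates[rand.Intn(len(candidates))], nil
}
```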
We want to run all of the scripts in `after_script` even if some of them
fail. In AWS, we have RHUI repos in the images and we don't use them on
GA RHEL, so `ci_details.sh` fails there and `cloud_cleaner` does not run.
The `org.osbuild.udev.rules` stage creates custom udev rules files.
This is a full implementation of the stage and includes information
about valid operators and keys.
A small test suite covering the basic functionality and validation is
included.
Validate incoming requests with openapi3. Remove the unsupported
`uuid` format from the openapi spec. Similarly, change `url` to `uri`, as
`uri` is a supported format and `url` is not.
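A minimal sketch of request validation with kin-openapi, which is one way
to validate incoming requests against an openapi3 spec in Go; the router
wiring here is an assumption (and the exact types differ between
kin-openapi versions):

```go
import (
	"net/http"

	"github.com/getkin/kin-openapi/openapi3"
	"github.com/getkin/kin-openapi/openapi3filter"
	legacyrouter "github.com/getkin/kin-openapi/routers/legacy"
)

// validateRequest checks an incoming request against the loaded spec.
func validateRequest(spec *openapi3.T, r *http.Request) error {
	router, err := legacyrouter.NewRouter(spec)
	if err != nil {
		return err
	}
	route, pathParams, err := router.FindRoute(r)
	if err != nil {
		return err
	}
	return openapi3filter.ValidateRequest(r.Context(), &openapi3filter.RequestValidationInput{
		Request:    r,
		PathParams: pathParams,
		Route:      route,
	})
}
```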
Co-authored-by: Ondřej Budai <obudai@redhat.com>
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
oneOf means that the body must be valid against exactly ONE schema. There's an
issue with the AWS EC2 upload options though: they require the region and
share_with_accounts fields. Such a request is also a valid AWS S3 upload
(this one only requires region). This means that AWS EC2 upload options will
always be valid against two schemas, which violates the oneOf rule.
Let's switch to anyOf and explain this in the openAPI spec.
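To make the overlap concrete, here is a hypothetical Go mirror of the two
schemas (illustrative structs, not the generated client types):

```go
// Hypothetical mirrors of the two upload-option schemas.
type awsEC2UploadOptions struct {
	Region            string   `json:"region"`
	ShareWithAccounts []string `json:"share_with_accounts"`
}

type awsS3UploadOptions struct {
	Region string `json:"region"`
}

// Since the S3 schema doesn't forbid extra properties, a body like
//   {"region": "us-east-1", "share_with_accounts": ["123456789012"]}
// is valid against both schemas, so oneOf (exactly one match) can never
// hold for EC2 upload options, while anyOf (at least one match) can.
```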
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
It was never required and never used. I honestly think that this was a
copy-paste error; I don't see any reason why a user would have an object
reference.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
We are interested in the time from when a job could be dequeued until it
actually is, but if a job has dependencies that are not yet finished, it
cannot be dequeued.
Change the logic to measure the time since the last dependency was
dequeued rather than since the job was queued.
The purpose of this metric is to have an alert fire in case we have too
few workers processing jobs.
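A simplified sketch of the changed measurement; the function and
parameter names are illustrative, not the queue implementation's real
API:

```go
import "time"

// pendingDuration measures how long the job actually waited to be picked
// up: time since the last of its dependencies was dequeued, falling back
// to the job's own queue time when it has no dependencies.
func pendingDuration(queuedAt time.Time, depDequeuedAt []time.Time, dequeuedAt time.Time) time.Duration {
	start := queuedAt
	for _, t := range depDequeuedAt {
		if t.After(start) {
			start = t
		}
	}
	return dequeuedAt.Sub(start)
}
```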
Similar to DRY_RUN, these values should be overwritten in app-interface
per namespace. At some point the maintenance specific to the CRC tenant
(aws and gcp maintenance) should run in the workers namespace rather
than the composer namespace. Granularity is needed for this.
Stage and production share the GCP account. To avoid trying to delete
each GCP image twice, the maintenance script needs the ability to
selectively disable certain parts based on the config.
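A sketch of what the per-part toggles could look like, modeled on the
existing DRY_RUN handling; the ENABLE_* variable names are assumptions,
not the real config keys:

```go
import "os"

// maintenanceConfig gates the cloud-specific parts of the maintenance
// script so that stage and production don't both try to delete the same
// GCP images.
type maintenanceConfig struct {
	DryRun               bool
	EnableAWSMaintenance bool // hypothetical ENABLE_AWS_MAINTENANCE
	EnableGCPMaintenance bool // hypothetical ENABLE_GCP_MAINTENANCE
}

func configFromEnv() maintenanceConfig {
	return maintenanceConfig{
		DryRun:               os.Getenv("DRY_RUN") == "true",
		EnableAWSMaintenance: os.Getenv("ENABLE_AWS_MAINTENANCE") == "true",
		EnableGCPMaintenance: os.Getenv("ENABLE_GCP_MAINTENANCE") == "true",
	}
}
```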
Building a `tar` image for `s390x` on RHEL-84 ends with a panic:
"s390x image must have a partition table, this is a programming error"
A tar image should not need a partition table, so this error does not
make sense.
Overrides were added where needed for `qcow2` and `simplified-image-installer`
images on specific distros. Also, some repos needed to be updated to newer
versions.
Do not apply the user customizations on edge-installer on RHEL-85/90beta,
since they are not supported there yet. The way we generate image test
cases from `format-request-map.json` means that even the customized image
types are generated for all distributions automatically.
Extend the manifests test to ensure that each and every image type of
each architecture and each distribution is covered by at least one
image test case.
Now that we have the ability to generate image test cases for more
complicated image types, which consist only of the manifest, we should
have test coverage for each and every image type.
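A sketch of what such a coverage test could look like; the two loader
helpers and the definition type are hypothetical stand-ins for composer's
real registries:

```go
import (
	"fmt"
	"testing"
)

func TestEveryImageTypeIsCovered(t *testing.T) {
	// covered holds a "distro/arch/imagetype" key for every generated
	// image test case (hypothetical helper).
	covered := loadTestCaseKeys(t)

	// allDefinitions yields every distro/arch/image type triple that
	// composer supports (hypothetical helper).
	for _, def := range allDefinitions(t) {
		key := fmt.Sprintf("%s/%s/%s", def.Distro, def.Arch, def.ImageType)
		if !covered[key] {
			t.Errorf("no image test case covers %s", key)
		}
	}
}
```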
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Capture stdout and stderr output when running generate-test-cases in the
manifests command.
This is helpful for debugging test case generation failures.
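A minimal sketch of capturing that output via os/exec; the tool path and
argument handling are illustrative:

```go
import (
	"fmt"
	"os/exec"
)

// runGenerateTestCases runs the generator and surfaces its combined
// stdout/stderr when it fails, which makes failures debuggable.
func runGenerateTestCases(args ...string) error {
	cmd := exec.Command("tools/generate-test-cases", args...) // hypothetical path
	out, err := cmd.CombinedOutput()                          // stdout and stderr interleaved
	if err != nil {
		return fmt.Errorf("generate-test-cases failed: %w\noutput:\n%s", err, out)
	}
	return nil
}
```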
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add a simple tool, `osbuild-composer-image-definitions`, which dumps the
matrix of all distribution, architecture, and image type names supported
by composer as JSON to stdout.
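A sketch of the tool's core; the registry helpers (`allDistros` and
friends) are hypothetical stand-ins for composer's distro registry:

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// definitions maps distro name → arch name → image type names.
	definitions := map[string]map[string][]string{}
	for _, d := range allDistros() { // hypothetical registry helper
		definitions[d.Name()] = map[string][]string{}
		for _, a := range d.Arches() {
			definitions[d.Name()][a.Name()] = a.ImageTypeNames()
		}
	}
	if err := json.NewEncoder(os.Stdout).Encode(definitions); err != nil {
		log.Fatal(err)
	}
}
```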
Default to fetching the image test case generation matrix directly from
composer. This eliminates the need to update a JSON source file with
this information every time a new distro or image type is added to
composer.
Delete the previously used JSON source file with the image test case
generation matrix.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Removed in 2beb707def, possibly
accidentally.
The affected manifests were not regenerated based on this change so
they all already contain the core group.
Fedora 36 ships pylint 2.13, which newly reports:
dnf-json:436:20: E0601: Using variable 'cache_state' before assignment (used-before-assignment)
As dnf-json is pending a big rewrite
(https://github.com/osbuild/osbuild-composer/pull/2537),
I decided to pin Fedora to 35 and let the other PR decide how to proceed,
in order to prevent any conflicts.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Before this change, the autoscaling group discovery failed with the error:
Error getting the Autoscaling instances: MissingRegion MissingRegion:
could not find region configuration
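A minimal sketch of the likely shape of the fix with aws-sdk-go: give the
session an explicit region so the autoscaling client can resolve its
endpoint. The function and the example region value are illustrative:

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

// newAutoscalingClient builds a client with an explicit region; without
// one, the SDK returns the MissingRegion error seen above.
func newAutoscalingClient(region string) (*autoscaling.AutoScaling, error) {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region), // e.g. "us-east-1"
	})
	if err != nil {
		return nil, err
	}
	return autoscaling.New(sess), nil
}
```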
I think that we can spare the users of cloudapi from writing "rhsm": "false"
into their requests, so I decided to make this property optional and default
it to false.
This is nice because it matches the behaviour of Weldr repositories and
sources, so we can also use test/data/repositories without any changes after
openapi validation is enabled.
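A tiny sketch of the optional-with-default pattern in Go; the Repository
struct here is illustrative, not the exact generated type:

```go
// Repository models a repo in the request body; RHSM is a pointer so an
// omitted field can be told apart from an explicit false.
type Repository struct {
	BaseURL string `json:"baseurl"`
	RHSM    *bool  `json:"rhsm,omitempty"` // optional, defaults to false
}

// rhsm resolves the effective value, defaulting to false when omitted.
func (r Repository) rhsm() bool {
	return r.RHSM != nil && *r.RHSM
}
```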
Signed-off-by: Ondřej Budai <ondrej@budai.cz>