Create an entry point for all regression tests called "regression.sh" and
run it as part of the base tests for all our distros. This entry
point contains the logic for running only the test cases that are
appropriate for a given distribution.
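A minimal sketch of what that dispatch could look like, assuming the test
cases are installed under /usr/libexec/tests/osbuild-composer/ (the
individual script names are hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Pick the test cases that make sense for the distro we are running on.
source /etc/os-release

case "${ID}-${VERSION_ID}" in
    fedora-*)
        /usr/libexec/tests/osbuild-composer/regression-fedora-only.sh
        ;;
    rhel-8*|centos-8*)
        /usr/libexec/tests/osbuild-composer/regression-excluded-dependency.sh
        ;;
esac
```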
When a user wants to install a package that is itself excluded, or whose
dependency is excluded, the build fails. There is no known workaround
for this shortcoming of our current design.
Therefore, remove a package from the list of excluded packages if it is
explicitly mentioned in a blueprint. This does not solve the issue with
dependencies, but it makes a workaround possible.
Also, introduce a regression test to verify the bug fix and hook it into
CentOS CI (the issue was reported against RHEL, but CentOS CI runs on AWS,
so it is better to verify the fix there).
This uses an image created and uploaded to Azure using composer-cli,
then uses Terraform to spin up a Linux VM from that image, checks
that the machine works, and finally cleans everything up.
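A rough sketch of the Terraform half, assuming a config that takes the
image name as a variable and exports the VM's IP (paths, variable names,
and the SSH user are all hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail

pushd test/terraform/azure            # hypothetical location of the config
terraform init
terraform apply -auto-approve -var "image_name=${IMAGE_NAME}"

# Smoke test: the machine works if we can run a trivial command over SSH.
VM_IP=$(terraform output -raw vm_ip)  # assumes the config exports vm_ip
ssh -o StrictHostKeyChecking=no "azureuser@${VM_IP}" true

# Tear down everything we created.
terraform destroy -auto-approve -var "image_name=${IMAGE_NAME}"
popd
```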
s3cmd sync actually downloads metadata for all objects in an S3 bucket.
We have built a lot of RPMs, so this takes 5 minutes on AWS and 25 minutes
on my laptop (!!!).
Let's use recursive put instead. This doesn't delete any files on the remote
side. As we upload RPMs only once, this also shouldn't fail on "the
object already exists". Using this method, we should be able to upload the
RPMs in seconds.
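The change boils down to something like this (bucket and path variables
are placeholders):

```bash
# Before: s3cmd first lists the whole bucket to decide what to sync,
# which downloads metadata for every object.
s3cmd sync rpmbuild/RPMS/ "s3://${BUCKET}/${REPO_PATH}/"

# After: a plain recursive upload needs no remote listing at all.
s3cmd put --recursive rpmbuild/RPMS/ "s3://${BUCKET}/${REPO_PATH}/"
```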
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
GitLab CI builds its own rpms and thus it must use a different path.
This commit modifies mockbuild.sh and deploy.sh so that an extra path
segment can be inserted into the path, allowing GitLab to use a
different path.
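A sketch of the idea, with all variable names hypothetical:

```bash
# Optional extra segment, empty by default; GitLab CI sets it to keep
# its RPMs apart from the ones built by the other CI.
EXTRA_REPO_PATH="${EXTRA_REPO_PATH:-}"
REPO_PATH="osbuild-composer/${DISTRO_VERSION}/${ARCH}/${EXTRA_REPO_PATH:+${EXTRA_REPO_PATH}/}${GIT_COMMIT}"
```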
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
- image_tests.sh is already being executed from the osbuild
repository by installing osbuild-composer-tests & calling the
script directly
- in this repo we've got distro_test.go::TestDistro_Manifest() and
distro_test_common.TestDistro_Manifest(), which compare the static
manifests stored in this repository with the ones generated
dynamically by the code base. This is executed via `go test` (see the
sketch below) and runs against all available JSON files.
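A minimal sketch of that invocation (the package path is an assumption):

```bash
# Compare every stored JSON manifest with the dynamically generated one.
go test -run 'TestDistro_Manifest' ./internal/distro/...
```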
The above two items cover the part where we want to make sure that
the resulting content is what we expect.
Additionally the existing integration tests cover the part where
we build images, upload them to a cloud vendor and boot a new VM
from the image.
Using DISTRO_CODE simplifies test case selection and allows testing a
different distro than the one the test is running on.
This is used to run tests for RHEL 9.0 on F33 or on RHEL 8.4.
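For example (the value format is an assumption):

```bash
# Run the RHEL 9.0 test selection on a Fedora 33 runner.
DISTRO_CODE=rhel-90 ./test/cases/regression.sh
```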
By default, a when block is evaluated after an agent is started. I discovered
this randomly: I opened a pipeline and saw that it was stuck on the "Prepare EL8
internal 🤔" stage even though the pipeline shouldn't have run it at all.
This commit fixes that by adding "beforeAgent true" to all when blocks. It
changes the behaviour to the saner "if when is true, allocate an agent".
See https://www.jenkins.io/doc/book/pipeline/syntax/#evaluating-when-before-entering-agent-in-a-stage
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Extend the internal GCP library to allow deleting a Compute Engine image
and instance. In addition, provide a function to load the service account
credentials file content from the environment.
Change the names used for the GCP image and instance in the `api.sh`
integration test to make them predictable. This is important so that
cloud-cleaner can identify potentially left-over resources and clean
them up. Use the same approach for generating a predictable, but
run-specific, test ID as in GenerateCIArtifactName() from
internal/test/helpers.go. Use SHA-224 to generate a hash from the
string, because the string can contain characters not allowed by GCP
in resource names (specifically "_", e.g. in "x86_64"). SHA-224 was
picked because it generates short enough output and is future-proof
for use in RHEL (unlike MD5 or SHA-1).
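A sketch of how such an ID could be derived in `api.sh` (variable names
are hypothetical):

```bash
# Hash the raw ID so that characters GCP forbids in resource names
# (e.g. "_" in "x86_64") can never leak into the result.
TEST_ID=$(echo -n "${JOB_NAME}-${BUILD_ID}-${ARCH}" | sha224sum | cut -d' ' -f1)
GCP_IMAGE_NAME="image-${TEST_ID}"
GCP_INSTANCE_NAME="vm-${TEST_ID}"
```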
Refactor cloud-cleaner to clean up GCP resources and also to run cleanup
for each cloud in a separate goroutine.
Modify run_cloud_cleaner.sh to be able to run in an environment in which
AZURE_CREDS is not defined.
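A minimal sketch of that guard (the cleaner's path is illustrative):

```bash
# Only export the Azure credentials when they are actually defined, so
# the script no longer trips over the unset variable.
if [[ -n "${AZURE_CREDS:-}" ]]; then
    set -a
    source "$AZURE_CREDS"
    set +a
fi
/usr/libexec/osbuild-composer-test/cloud-cleaner   # illustrative path
```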
Always run cloud-cleaner after integration tests for rhel8, rhel84 and
cs8, which test GCP.
Define DISTRO_CODE for each integration testing stage in Jenkinsfile.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
by defining the COMPOSE_URL environment variable! This will allow
testing more flavors of internal releases.
The rest is renaming files and variables to reflect the fact that
we're running tests against internal trees, not only nightlies.
Move the container build to the same phase as the RPM builds. This does not make a huge difference, but should
shave off about two minutes of total CI runtime.
Mockbuild using systemd-nspawn currently fails on Fedora 34. The
workaround is to use the "simple" isolation method, i.e. the traditional
chroot() call.
Reported as: https://bugzilla.redhat.com/show_bug.cgi?id=1931452
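The workaround is a single flag in the mock invocation (config and SRPM
names are placeholders):

```bash
# Force the classic chroot() isolation instead of systemd-nspawn,
# which is broken on Fedora 34.
mock -r "$MOCK_CONFIG" --isolation=simple --rebuild "$SRPM"
```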
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Refactor test/cases/api.sh to incorporate testing of the cloudapi with
multiple cloud providers as the target. Since all variables in Bash are
global by default, don't declare them as empty in advance. The only
place where undeclared variables could potentially be expanded is the
cleanup functions. Ensure that no unbound variables are expanded
inside the cleanup functions. Rename all AWS-specific variables to
carry an "AWS_" prefix to make their purpose explicit.
Modify provision.sh to append the GCP credentials file path to the
worker configuration.
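A sketch of that append, assuming the worker reads a TOML config with a
[gcp] section (the path and key are assumptions):

```bash
sudo tee -a /etc/osbuild-worker/osbuild-worker.toml > /dev/null << EOF
[gcp]
credentials = "${GCP_CREDS_FILE}"
EOF
```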
Add GCP api.sh test case to integration tests in Jenkins and run it only
if the appropriate GCP credentials environment variable is defined. Run
the GCP test case for RHEL images.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
We are gaining new ostree features that overlap to a great degree
with the current ones. We still need to keep the current features
for backwards compatibility, so add another test run that does the
same but uses the new API.
For now this simply uses the `url` parameter rather than `parent`
to build update commits. Further changes will be made in follow-up
commits.
Use `curl` rather than `composer-cli`, as we have a chicken-and-egg
problem: we can't land this feature without tests, but
`composer-cli` can't add support for it until it first exists in
`composer`.
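A sketch of starting such a compose directly over the weldr socket
(the endpoint and JSON shape are illustrative):

```bash
curl --unix-socket /run/weldr/api.socket \
    -H "Content-Type: application/json" \
    -d '{"blueprint_name": "ostree", "compose_type": "rhel-edge-commit",
         "ostree": {"ref": "test/edge/x86_64", "url": "http://localhost:8080/repo"}}' \
    http://localhost/api/v1/compose
```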
https://github.com/osbuild/osbuild-composer/pull/1228 was merged with
a failing Schutzbot pipeline. The failure happens because `var` is not
valid Groovy syntax. Let's use the correct keyword `def` instead.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The image definition is shared with the latest RHEL 8.y one (8.4 currently).
I expect that with the introduction of 8.5 support, we will point the
centos 8 distro at it.
The test repositories and manifests use the official CentOS composes. From
what I can tell, they are persistent. This is not guaranteed though, so we
might need to switch to RPMRepo at some point.
The "classic" CentOS 8 should also be buildable but due to the chicken and egg
issue (this commit will get into Centos "8.4" but Centos "8.4" isn't a thing
yet), we cannot test it and therefore it might be broken.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
- even if they sometimes don't exist, ignore the errors
- give the nightly repository a higher priority
- override rhel-8*.json files so that newly built images will
also consume the nightly content
Because the -tests.rpm isn't shipped with the distro, the prepare
script downloads it from Brew, trying to match the same version
that exists in the actual nightly compose. It then prepares a repo on
S3 for the subsequent test jobs to use!
Use AWS_CREDS for ~/.s3cmd
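Roughly (AWS_CREDS is assumed to define the two keys below):

```bash
source "$AWS_CREDS"
cat > ~/.s3cmd << EOF
[default]
access_key = ${AWS_ACCESS_KEY_ID}
secret_key = ${AWS_SECRET_ACCESS_KEY}
EOF
# s3cmd reads ~/.s3cfg by default, so pass the file explicitly.
s3cmd -c ~/.s3cmd ls
```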
- use the detect_build_cause() function and set a global env.BUILD_CAUSE
variable for use in conditionals
- add a cron job trigger - this will work together with the
GitHub pull request trigger
- use conditional blocks for all steps we want to be executed
outside of cron jobs
- only EL8 jobs will be executed unconditionally, both in cron
and for PRs. The preparation stage for cron jobs makes sure to
use the same name for osbuild-mock.repo so that the jobs can
unstash it later!
We can now send webhook data to an SQS queue at AWS without signing the
request with credentials. This allows us to trigger Schutzbot from
forks and from branches on the main repository.
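This works because the SQS query API accepts anonymous requests when the
queue policy allows them; roughly (the queue URL is a placeholder):

```bash
curl -G "https://sqs.us-east-1.amazonaws.com/123456789012/schutzbot-webhooks" \
    --data-urlencode "Action=SendMessage" \
    --data-urlencode "MessageBody=${WEBHOOK_PAYLOAD}"
```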
Signed-off-by: Major Hayden <major@redhat.com>