OSTree tests, especially the fedora-ostree-image one, will soon
need tight integration with the host for LVM2/LUKS support.
Thus we cannot run them in GitHub Actions containers. Move them
to Schutzbot.
Explicitly install the new sub-package until composer gains the
needed requirement.
Instead of using mergify on its own, which does not seem to do what
we want, use a combination of mergify and automerge. We let mergify review
dependabot PRs. We let mergify dismiss reviews on updates but
exclude those from Schutzbot. We then let Schutzbot update and
merge the PRs via automerge if the `ci:automerge` label is set.
Re-review the PR after rebasing it. Leave a message to make it
clear that it was not the impersonated person but mergify that
did it.
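A rough sketch of the corresponding `.mergify.yml` rules (rule names are illustrative, and the exact way to exclude Schutzbot's reviews from dismissal depends on the available mergify options):

```yaml
pull_request_rules:
  # let mergify review dependabot PRs
  - name: auto-review dependabot updates
    conditions:
      - author=dependabot[bot]
    actions:
      review:
        type: APPROVE
        message: "Approved by mergify, not by the impersonated reviewer."

  # dismiss stale reviews when a PR is updated; excluding reviews from
  # Schutzbot is assumed to be expressible via the action's options
  - name: dismiss reviews on updates
    conditions:
      - base=main
    actions:
      dismiss_reviews:
        approved: true
```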
Ideally, if we had the premium plan, we would impersonate Schutzbot
so it is clear who did it, and then use mergify to dismiss reviews
on changes, except for reviews from Schutzbot.
For dependabot we don't want it anyway (but it is true by default).
Also remove it for "merge via auto-label", so that once all the
conditions are met the PR is queued and the label is removed.
Currently the queuing might not happen because the branch-protection
conditions are not met. Therefore we make the conditions explicit and
remove the branch protection.
Define a merge queue "default"; all current checks (minus the
ostree one) are required to get out of it.
There are two rules to get into the queue (sketched below):
1) standard branch protection, plus packit, plus the `ci:automerge`
label;
2) dependabot, which does not require the standard branch protection
since that implies reviews; instead, the checks are listed manually.
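A minimal sketch of that queue configuration (check names and the packit condition are placeholders for the actual required checks):

```yaml
queue_rules:
  - name: default
    conditions:
      # all current checks except the ostree one must pass to get out of the queue
      - check-success=Tests
      - check-success=Checks

pull_request_rules:
  # 1) regular PRs: branch protection (reviews), packit, and the ci:automerge label
  - name: queue via ci:automerge label
    conditions:
      - base=main
      - label=ci:automerge
      - "#approved-reviews-by>=1"
      - check-success=packit
    actions:
      queue:
        name: default
      label:
        remove:
          - ci:automerge

  # 2) dependabot PRs: no review requirement; the checks are listed manually
  - name: queue dependabot updates
    conditions:
      - author=dependabot[bot]
      - check-success=Tests
      - check-success=Checks
    actions:
      queue:
        name: default
```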
We need a privileged/admin user to do the post-release version bump, as
this is a direct commit to main (i.e. without a PR), so switch to using
schutzbot with a scoped personal access token (only `public_repo`).
This is necessary for the new, simplified release process and is done
once ahead of time for the upcoming release.
After osbuild 40, this will be handled by the GitHub composite action.
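The bump step itself could then look roughly like this (the secret name, author details, and bump command are placeholders):

```yaml
jobs:
  bump-version:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # scoped (public_repo) personal access token for schutzbot;
          # the secret name is a placeholder
          token: ${{ secrets.SCHUTZBOT_GITHUB_ACCESS_TOKEN }}
      - name: Bump the version and push directly to main
        run: |
          git config user.name "schutzbot"
          git config user.email "schutzbot@example.com"   # placeholder address
          # bump osbuild.spec and setup.py here (placeholder for the actual bump)
          git commit -am "Post-release version bump"
          git push origin HEAD:main
```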
This commit changes our release process from the model of having a
release commit (and pull request), which also updated the NEWS.md file
and bumped the versions in the osbuild.spec and setup.py files, to
simply pushing a tag.
After the tag (containing the release notes) is pushed, a GitHub
composite action is triggered that creates a GitHub release with the
contents of the git release tag. Furthermore, the version-number bump
now always happens directly after a release, to avoid having to push
an (untested) commit to main for the release; this is also handled by
the GitHub composite action.
Finally, packit now pushes directly to dist-git when the release tag
is pushed, so no pull request needs to be reviewed and merged anymore.
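A sketch of how such a tag-triggered workflow might be wired up (the release-notes extraction and step layout are illustrative; `gh release create` is the real GitHub CLI command):

```yaml
on:
  push:
    tags:
      - "v*"

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create the GitHub release from the tag contents
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          TAG="${GITHUB_REF#refs/tags/}"
          # use the annotated tag's message as the release notes
          git tag -l --format='%(contents)' "$TAG" > notes.txt
          gh release create "$TAG" --title "$TAG" --notes-file notes.txt
      # the post-release version bump would follow here, handled by the
      # composite action as described above
```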
Use the "Checks" workflow to trigger gitlab; this workflow should
be much quicker to complete and thus the gitlab ci will trigger
earlier leading to a more parallel ci run.
Sadly `github.event.workflow_run.pull_requests` is empty if the pull
request was opened from another fork. Use the sha to find an open PR,
otherwise assume it's a branch.
This workflow doesn't have access to the original pull request event
that resulted in this workflow being triggered.
Simply use `head_sha` which will contain the PR sha if it was triggered
by a PR's workflow, or the branch sha if it was triggered from a
branch's workflow.
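To recover the PR from that sha, a step along these lines could be used (the `commits/{sha}/pulls` endpoint is a real GitHub API; the step itself is an illustrative sketch):

```yaml
- name: Look up an open PR for the workflow_run head sha
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
  run: |
    # list pull requests associated with the commit; no result means
    # the workflow_run came from a plain branch push
    gh api "repos/${GITHUB_REPOSITORY}/commits/${HEAD_SHA}/pulls" \
      --jq '.[0].number' > pr_number.txt || true
    cat pr_number.txt
```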
The `workflow_run` event is triggered either when a workflow was
requested or completed (see `types`). We can use this event as a
trigger for the gitlab ci, conditioned on a successful workflow
run of the main tests (the "Tests" workflow). This ensures that,
with outside-contributor protection turned on, no secrets are
leaked via PRs from non-contributors, but also that the gitlab ci
is run for those PRs once they have been manually allowed to run.
The only downside is that the gitlab ci will now only run after
the main workflow ("Tests") has completed, thus serializing
both CI runs. OTOH the gitlab CI is quite intense, so maybe this is
not so bad after all. If in the future we want to parallelize
both CI runs, we could add a third "precheck" workflow (with, say,
the spell checker and the pylint tests) that both the main tests
and the gitlab ci run depend on.
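The trigger itself, conditioned on a successful run of the main workflow, could look roughly like this (the workflow name "Tests" comes from the text above; the job body is only a placeholder sketch):

```yaml
on:
  workflow_run:
    workflows: ["Tests"]
    types:
      - completed

jobs:
  trigger-gitlab:
    # only proceed if the triggering "Tests" workflow run succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - name: Trigger the gitlab ci pipeline
        # placeholder step; the real job would notify gitlab for this sha
        run: echo "trigger gitlab for ${{ github.event.workflow_run.head_sha }}"
```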
Instead of maintaining a separate set of samples that by now are very
much outdated (using Fedora 31 or older), make the samples directory
a symlink to the test data. Manifests in there are indeed tested and
maintained.
The error and noop samples are also covered in unit tests, so no need
for extra samples there either.
CI: remove the sample validation since all the test data manifests
are actually built.
Add a new `org.osbuild.cloud-init` stage, which currently allows creating
configuration files for cloud-init under `/etc/cloud/cloud.cfg.d`. The
stage supports only a very limited subset of cloud-init configuration
options, covering the needs of RHEL AMI images.
The schema mandates that if the 'configuration_files' option is
specified, then at least one configuration file must be defined. In
addition, each section of the configuration must contain at least one
property (section or configuration option).
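For illustration, a set of stage options satisfying these constraints might look roughly like this (shown as YAML for readability, even though manifests are JSON; everything below `configuration_files` is an illustrative guess, not the authoritative schema):

```yaml
# one org.osbuild.cloud-init stage entry (sketch)
type: org.osbuild.cloud-init
options:
  configuration_files:
    - filename: 00-rhel-default-user.cfg   # written under /etc/cloud/cloud.cfg.d
      config:
        system_info:                       # at least one section ...
          default_user:
            name: ec2-user                 # ... with at least one property
```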
Add `python3-pyyaml` package to the `F34-build` testing manifest,
because it is required for running and testing the new stage.
Regenerate all affected manifests.
Add test for the new stage.
Update the `osbuild-ci` container image used for testing to a new tag,
which includes python3-pyyaml, the dependency of the new stage.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
We explicitly pinned the F32 CI images in the past due to update issues
in F33. However, those have been resolved and we should switch back to
the most recent Fedora CI images.
This commit switches all instances of the osbuild-ci image back to the
latest stream, snapshot taken on 2021-02-19 13:11 (latest-202102191311).
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Use the new immutable image infrastructure from `osbuild/containers`.
While at it, also switch over to the new github-actions helper, now that
we no longer run `systemd-nspawn` in our tests.
The old image was renamed from `ghci-osbuild` to `osbuild-ci` to avoid
accidentally replacing old images. The new infrastructure uses immutable
images, so downstream will no longer get automatic updates, unless the
`latest` tags are used.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Previously, we had a webhook relay: it received a notification from GitHub
and sent it to AWS SQS. Now, the webhook relay is dead. The new method
(already used in osbuild-composer and image-builder) is to send the
notification directly from a GitHub action to AWS SQS.
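A sketch of such a step (action versions, secret names, region, and queue URL are placeholders; `aws sqs send-message` is the standard AWS CLI command):

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Send the notification directly to AWS SQS
  run: |
    aws sqs send-message \
      --queue-url "${{ secrets.SQS_QUEUE_URL }}" \
      --message-body '{"repository": "${{ github.repository }}", "sha": "${{ github.sha }}"}'
```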
`make test-data` always regenerates test data, without the need to pass
the `--always-make` option to make.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Integrate with codecov. Define a threshold of 5% to pass. Coverage
is cumulative, i.e. all the tests send their coverage to codecov,
which will integrate them all into a total.
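The threshold can be expressed in `codecov.yml` roughly like this (a minimal sketch; only the 5% value is taken from the text):

```yaml
coverage:
  status:
    project:
      default:
        # allow total project coverage to drop by at most 5%
        threshold: 5%
```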
Move the last remaining test into the correct subdir. With this done,
all our tests run in one of the 3 groups:
* `make test-src`
Run tests against the source-code, including linters.
* `make test-mod`
Run unit-tests on the individual python modules. This needs no
special permissions (unless noted in each test) or runtime
environments. It is meant to be fast and easy to run in all
circumstances.
* `make test-run`
Run tests that execute the osbuild pipeline. This requires
superuser privileges and will likely take a while. Furthermore,
this might produce large artifacts.
Align the makefile targets with the test-targets (`module` -> `mod`,
etc.). This way, we have consistent names everywhere.
While at it, move the `make test-run` invocation closer to the others.
Move the stage-tests over to the new test-infrastructure. This moves
the test invocation into `./test/run/test_stages.py`, so it is invoked
as part of the runtime-tests. Secondly, the test-data is stored in
./test/data/stages/ so the path is relative to
TestBase.locate_test_data().
While at it, this also drops the dynamic class modifications and instead
uses subTest(). This simplifies the code quite a bit and avoids
dynamically creating python code.
Move the `test_osbuild.py` test into the module-test directory. This
test contains just a bunch of basic functionality tests for a selection
of osbuild modules. Hence, it can be run together with the other module
tests.
Move `test_objectstore` into the module-level tests. This allows us to
run it as part of `make test-module`.
Make sure to properly guard it as a root-only module.
Run the MPP tools in the CI and verify the committed test-data did not
change and is up-to-date.
This runs `make test-data` and then simply uses `git diff --exit-code`
to trigger a CI failure if there are any differences in ./test/data.
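As workflow steps, this check might look like the following sketch (step names are illustrative):

```yaml
- name: Regenerate the test data via the MPP tools
  run: make test-data
- name: Fail if the committed test data is not up-to-date
  run: git diff --exit-code -- test/data
```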
Add a new trivial runtime-test which simply runs a no-op pipeline. This
is a fast test that verifies osbuild is properly set up and accessible.
Remove the explicit no-op test from the CI, now that the test-suite has
it as well.