When building the containers, the initial `dnf upgrade` downloads content
from the latest nightly trees, which turns the container into a Beta system
and confuses the subsequent `dnf install`.
s3cmd sync actually downloads metadata for all objects in an S3 bucket.
We have built a lot of RPMs, so this takes 5 minutes on AWS and 25 minutes
on my laptop (!!!).
Let's use recursive put instead. This doesn't delete any files on the remote
side. As we upload RPMs only once, this also shouldn't fail on "the
object already exists". Using this method, we should be able to upload the
RPMs in seconds.
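As a sketch, the upload then becomes a single recursive put (bucket name and
local directory are placeholders, not the actual CI values):

    # sketch only: bucket and paths are placeholders
    s3cmd put --recursive ./repo/ s3://example-rpm-bucket/koji-osbuild/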
The same patch was applied in osbuild-composer cf73edd2
Let's explain how RPMs for RHEL are built:
We use a subscribed RHEL 8.x machine and mock-build them on it. Mock
initializes its own buildroot based on the latest RHEL 8 CDN content, see [1].
This means that the minor version of the buildroot is independent of the minor
version of the host.
However, we currently upload RPMs to a directory whose name also contains
the minor version of the host. Our hosts are currently running RHEL 8.3, so
the RPMs are uploaded into the rhel-8.3 directory despite being built in the
RHEL 8.4 buildroot (the RHEL 8 CDN buildroot, specifically). This means that
we cannot guarantee that they are installable on RHEL 8.3, which is weird.
This commit adds a special case for hosts that run on subscribed RHEL and
thus build RPMs in a buildroot constructed from RHEL CDN. These RPMs are
now uploaded into the rhel-8-cdn directory. This change more accurately reflects
the way we build our RPMs and removes some confusion.
Also, we need to bump the osbuild commit so we have a version that already
has the rhel-8-cdn change in it.
This also bumps all deps so we have rhel-8-cdn repos everywhere.
[1]: https://github.com/rpm-software-management/mock/blob/main/mock-core-configs/etc/mock/templates/rhel-8.tpl#L37
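Roughly, the upload directory is now chosen like this (a sketch; the variable
names and the subscription check are assumptions, not the actual CI code):

    # sketch: pick the upload directory; names are assumptions
    source /etc/os-release
    if [ "$ID" = "rhel" ] && subscription-manager status >/dev/null 2>&1; then
        # subscribed RHEL host: mock builds against the RHEL 8 CDN buildroot
        REPO_DIR="rhel-8-cdn"
    else
        REPO_DIR="${ID}-${VERSION_ID}"
    fi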
Add support for fetching manifests via the compose/<id>/manifests
API endpoint. A failure to fetch them is not critical, since it is
possible the manifests don't exist, e.g. when depsolving fails.
The manifest is attached per image request.
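As a rough illustration (the API base URL variable is a placeholder), the
fetch tolerates missing manifests instead of failing the whole run:

    # sketch: manifests may legitimately be absent, e.g. after a depsolve failure
    if ! curl -fsS "${COMPOSER_API}/compose/${COMPOSE_ID}/manifests" -o manifests.json; then
        echo "no manifests for compose ${COMPOSE_ID}, continuing" >&2
    fi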
Previously, we had a webhook relay. It received a notification from GitHub
and sent it to AWS SQS. Now, the webhook is dead. The new method (already used
in osbuild-composer and image-builder) is to send the notification directly
from a GitHub Action to AWS SQS.
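Conceptually, the action step now boils down to a direct aws CLI call (the
queue URL and message body below are placeholders):

    # sketch: send the notification straight to SQS; queue URL is a placeholder
    aws sqs send-message \
        --queue-url "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue" \
        --message-body "{\"repo\": \"${GITHUB_REPOSITORY}\", \"sha\": \"${GITHUB_SHA}\"}"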
Podman 2.2.0 doesn't create a gateway by default. See:
https://github.com/containers/podman/issues/8748
This commit introduces a workaround: specifying the gateway manually.
Note that the gateway is used in test/run-builder.sh.
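One way to specify a gateway manually, as a sketch (not necessarily the exact
change made here; subnet, gateway and network name are made up):

    # sketch: create a network with an explicit gateway; values are made up
    podman network create --subnet 192.168.242.0/24 --gateway 192.168.242.1 builder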
We want to use `dnf` in the same way as our users do. This means we want the
modular repositories and weak deps enabled. Fastestmirror is fine; it changes
neither the content set nor depsolving.
Also, this is a workaround for rhbz#1908352, tl;dr: installing podman without
weak deps makes it unusable on Fedora 32.
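In dnf.conf terms this corresponds roughly to the following (a sketch of the
relevant options only, assuming they go into the container's dnf.conf):

    # sketch: keep weak deps enabled and turn on fastestmirror in the container
    printf 'install_weak_deps=True\nfastestmirror=True\n' >> /etc/dnf/dnf.conf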
Remove koji-osbuild.spec.in and the bits in meson.build that generate
the final spec file out of it. Pull in the changelog from how Ondřej
Budai does it in osbuild-composer, because it's adorable.
When releasing, the version now has to be bumped in the spec file.
This is to make it consistent with other osbuild projects, and to
simplify reverse dependency testing.
Repository URLs are predictable. There's no need to use Jenkins' stash
feature to pass the repo file between stages.
Instead, simply create the repo file where it is needed, in deploy.sh.
Change the repository path on S3 to a more predictable one, mirroring
the pattern we're using for osbuild-composer.
Notably, don't use short commit ids. The length of these is not
predictable. It depends on the shortest unique prefix in the repository
and git configuration.
For example, koji-osbuild commit $SHA for fedora-33 on x86_64 will
result in this URL:
koji-osbuild/fedora-33/x86_64/$SHA
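A sketch of what deploy.sh can now do in place of the stash (the bucket URL
and variable names are placeholders):

    # sketch: write the repo file where it is needed; bucket URL is a placeholder
    {
        echo "[koji-osbuild]"
        echo "name=koji-osbuild"
        echo "baseurl=https://example-bucket.s3.amazonaws.com/koji-osbuild/${DISTRO}/${ARCH}/${GIT_SHA}"
        echo "enabled=1"
        echo "gpgcheck=0"
    } > /etc/yum.repos.d/koji-osbuild.repo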
If called from within the source directory, i.e. the local plugins exist,
copy them to the share directory so they can be picked up by the entry
point scripts in case the RPMs are not found.
In case `TEST_PATH` was not specified as command line argument,
it was falling back to `test`. Make the latter an absolute path by
prepending `PWD`; otherwise podman complains about the name of the volume.
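Roughly, the fallback becomes:

    # use an absolute default so podman accepts the volume source
    TEST_PATH="${1:-${PWD}/test}"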
Instead of just using the "latest" container everywhere, which will
change every time a new release is made, add a build argument to
specify the version and then match that version to the host in all
the build scripts. This will make it possible to use the tests for
gating, and ensure that we test the plugins on the OS version that
is targeted.
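In the build scripts this looks roughly like the following (the image tag and
argument name are placeholders):

    # sketch: match the container's OS version to the host; names are placeholders
    source /etc/os-release
    podman build --build-arg "VERSION=${VERSION_ID}" -t koji-osbuild-test .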
Instead of building on the existing quay.io/osbuild/koji:v1, and
then replacing a lot of it (entry point), move the packages and
the dnf.conf change over from the former base and then directly
depend on Fedora. This gives us more control, especially over
what Fedora version is being used.
This is similar to how other osbuild projects handle testing: everything
that's needed for testing is included in the tests package or a
dependency of it. The test runner then runs every executable in
/usr/libexec/tests/<packagename>. This gives a simple test API to
projects depending on this package (notably osbuild-composer).
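The runner itself amounts to something like this (the package directory name
is illustrative):

    # sketch: run every executable installed for the package's tests
    for t in /usr/libexec/tests/koji-osbuild/*; do
        [ -x "$t" ] || continue
        echo "running $t"
        "$t" || exit 1
    done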
The local development workflow described in HACKING.md is meant to
continue to work. To ensure this, all relevant scripts gained a
TEST_DATA variable, which defaults to `./test` but is set via $1 to the
installed path by integration.sh.
Instead of taking podman-plugins from the source directory, use the one
that will be released into RHEL 8.3.1.
This will simplify moving tests into an rpm.
De-serialize the koji init and import logs, which are required fields in
the ComposeLogs, and attach them to the task if they are non-empty.
Update the tests to check for the presence of these logs.
Instead of getting the `koji_build_id` from the direct reply of
the compose request call, use the one returned in the compose
status.
The reason behind this is that composer was changed so that the
CGInitBuild call to koji is now being done by a worker and not
composer itself. This means that once the compose request call
returns, the build id is not yet known. In composer release 24,
the compose request call internally waits for the worker that
does the CGInitBuild API call, but that will be changed, and
the koji_build_id will then not be returned from the compose
request API call anymore. This prepares for that. The tests are
also adapted to simulate the new behavior.
NB: this makes composer 24 a dependency, since the build id is taken
from the ComposeStatus, to which it was only added in that release.
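As a rough illustration (the status URL variable is a placeholder; the field
name follows the description above), the id is now read from the compose
status instead of the initial reply:

    # sketch: take the build id from the compose status, not the request reply
    koji_build_id=$(curl -fsS "$STATUS_URL" | jq -r '.koji_build_id')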