Add support for both cancelling and failing a build. This is tested, but
not hooked up, as we need some more architecture work before that makes
sense.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move to requiring CGInitBuild to be called before CGImport. In the
future we could make the former optional again, but for now we want the
caller to have done CGInitBuild and composer to only do the CGImport
using the passed-in build_id and token.
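As a rough sketch of that call order (the interface and signatures below are
assumptions for illustration, not the real client API):

    package example

    import "fmt"

    // KojiCG models just the two content-generator calls discussed above;
    // the parameters and return values are assumptions for illustration.
    type KojiCG interface {
        CGInitBuild(name, version, release string) (buildID int, token string, err error)
        CGImport(metadata map[string]interface{}, uploadDir string, buildID int, token string) error
    }

    // The caller performs CGInitBuild up front...
    func initBuild(k KojiCG) (buildID int, token string, err error) {
        return k.CGInitBuild("Fedora-Cloud", "33", "1")
    }

    // ...and composer only performs the CGImport, using the build_id and
    // token that were passed in.
    func composerImport(k KojiCG, buildID int, token string, metadata map[string]interface{}) error {
        if err := k.CGImport(metadata, "osbuild-upload", buildID, token); err != nil {
            return fmt.Errorf("CGImport failed: %w", err)
        }
        return nil
    }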
Also rename and document some struct fields in the metadata struct to
make them more specific to our use-case and hopefully easier to read.
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far, composes created by kojiapi didn't have any targets. This commit
adds the koji target to them.
This is the last piece of the puzzle. From now on, osbuild-composer has
a koji API, which is actually able to upload images to Koji! Yay!
Introduce a target for Koji and hook it up in the worker, so if the koji
target is specified, the image is uploaded to Koji.
[teg: use system kerberos config rather than reading from env]
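A simplified sketch of the worker-side dispatch; the option struct and upload
helper here are hypothetical stand-ins, not the actual code:

    package example

    import "fmt"

    // KojiTargetOptions is a hypothetical stand-in for the options a koji
    // target carries.
    type KojiTargetOptions struct {
        Server   string
        Filename string
    }

    // Target is a simplified stand-in for a job's target description.
    type Target struct {
        Name    string
        Options interface{}
    }

    // handleTarget dispatches on the target type: if a koji target is
    // specified, the built image is uploaded to Koji.
    func handleTarget(t Target, imagePath string) error {
        switch options := t.Options.(type) {
        case *KojiTargetOptions:
            return uploadToKoji(options.Server, imagePath, options.Filename)
        default:
            return fmt.Errorf("unknown target: %s", t.Name)
        }
    }

    // uploadToKoji is a placeholder for the actual upload via the koji client.
    func uploadToKoji(server, imagePath, filename string) error {
        fmt.Printf("uploading %s to %s as %s\n", imagePath, server, filename)
        return nil
    }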
Add a systemd socket for the Koji API. If enabled when osbuild-composer.service
is started, the service will also listen on the socket and serve the Koji API
there.
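For reference, the socket unit looks roughly like the following sketch; the
unit name, port, and paired service are hypothetical, check the packaged
units for the real values:

    # osbuild-composer-koji.socket (hypothetical name and port)
    [Unit]
    Description=osbuild-composer Koji API socket

    [Socket]
    ListenStream=8701
    Service=osbuild-composer.service

    [Install]
    WantedBy=sockets.target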
Note that the Koji API doesn't upload to Koji yet; this still needs to be
hooked up.
Based on a patch from Tom Gundersen, thanks!
This just translates between the OpenAPI spec and our internal
API.
This still lacks tests, but a follow-up commit adds integration tests.
`internal/kojiapi/openapi.gen.go` was automatically generated from
`internal/kojiapi/openapi.yml`. To regenerate use `go generate ./...`.
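The generate directive behind that looks roughly like this (the exact flags
are an assumption about the usual oapi-codegen invocation, not copied from
the repo):

    //go:generate oapi-codegen -generate types,server,spec -package kojiapi -o openapi.gen.go openapi.yml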
Signed-off-by: Tom Gundersen <teg@jklm.no>
This adds the OpenAPI spec for the new composer-koji API. This API is meant
to expose just the functionality needed to generate images and push them
to koji.
Each compose may consist of several images, each image may have a
different architecture and image type, and the set of repositories with
their contents may be distinct. However, a compose is restricted to one
distro and one koji transaction.
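To illustrate that shape, a compose request could look something like this;
the field names are hypothetical, the real schema lives in openapi.yml:

    {
        "distribution": "fedora-33",
        "koji": { "server": "https://koji.example.com/kojihub", "task_id": 1 },
        "image_requests": [
            {
                "architecture": "x86_64",
                "image_type": "qcow2",
                "repositories": [ { "baseurl": "https://example.com/repo/x86_64" } ]
            },
            {
                "architecture": "aarch64",
                "image_type": "qcow2",
                "repositories": [ { "baseurl": "https://example.com/repo/aarch64" } ]
            }
        ]
    }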
Signed-off-by: Tom Gundersen <teg@jklm.no>
When using random names for artifacts like AWS snapshots, or Azure
images, it becomes hard to clean them up in case of CI failure. See this
issue for more details:
https://github.com/osbuild/osbuild-composer/issues/942
This PR introduces predictable names, so that we can easily determine
which artifact belongs to which PR and can therefore decide to wipe all
resources that are no longer needed.
The worker API returns errors of the form:
{ "message": "..." }
Print the message of those errors when receiving an error on the client.
This adds an `Error` type to openapi.yml and marks all routes as
returning it on 4XX and 5XX.
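Client-side, printing that message only takes a small helper; a sketch using
the standard library (the error body is the one shown above, everything else
is illustrative):

    package client

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // apiError mirrors the { "message": "..." } error body.
    type apiError struct {
        Message string `json:"message"`
    }

    // checkResponse returns an error carrying the server's message for any
    // non-2xx response.
    func checkResponse(resp *http.Response) error {
        if resp.StatusCode >= 200 && resp.StatusCode < 300 {
            return nil
        }
        var e apiError
        if err := json.NewDecoder(resp.Body).Decode(&e); err != nil {
            return fmt.Errorf("request failed with status %d", resp.StatusCode)
        }
        return fmt.Errorf("request failed: %s", e.Message)
    }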
Instead of sending a `token` to workers, send back two URLs:
1. "location": URL at which the job can be inspected (GET) and updated
(PATCH).
2. "artifact_location": URL at which artifacts should be uploaded to.
The actual URLs remain the same, but a client does not need to stitch
them together manually (except appending the artifact's name).
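So a job handed to a worker now carries something along these lines (paths
are illustrative, only the two field names matter):

    {
        "location": "/jobs/f0f4ef6c-7b31-4f79-a96e-0d4921cc96ac",
        "artifact_location": "/jobs/f0f4ef6c-7b31-4f79-a96e-0d4921cc96ac/artifacts/"
    }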
Unfortunately, the client code generated by `deepmap` does not lend
itself to this style of APIs. Use standard http.Client again, which is a
partial revert of 0962fbd30.
The job token will be deprecated in favor of URLs.
If a key is not set, use a new random UUID. Also, don't overwrite the
options struct with that new key.
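A sketch of the pattern, using github.com/google/uuid; the options struct is
a hypothetical stand-in:

    package example

    import "github.com/google/uuid"

    // Options is a hypothetical stand-in for an upload target's options.
    type Options struct {
        Key string
    }

    // resolveKey returns the key to use without modifying opts: if no key
    // is set, a fresh random UUID is used instead.
    func resolveKey(opts *Options) string {
        if opts.Key != "" {
            return opts.Key
        }
        return uuid.New().String()
    }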
Don't give out job ids to workers, but `tokens`, which serve as an
indirection. This way, restarting composer won't confuse it when a stray
worker returns a result for a job that was still running. Also,
artifacts are only moved to the final location once a job finishes.
This change breaks backwards compatibility, but we're not yet promising
a stable worker API to anyone.
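Conceptually, the indirection is just one extra mapping inside the server; a
simplified sketch (not the actual data structures):

    package example

    import (
        "errors"
        "sync"

        "github.com/google/uuid"
    )

    // tokens maps the opaque token handed to a worker to the job it is
    // running. A result arriving with an unknown token, e.g. from a worker
    // that outlived a composer restart, is simply rejected.
    type tokens struct {
        mu   sync.Mutex
        jobs map[uuid.UUID]uuid.UUID
    }

    func newTokens() *tokens {
        return &tokens{jobs: make(map[uuid.UUID]uuid.UUID)}
    }

    func (t *tokens) issue(jobID uuid.UUID) uuid.UUID {
        t.mu.Lock()
        defer t.mu.Unlock()
        token := uuid.New()
        t.jobs[token] = jobID
        return token
    }

    func (t *tokens) resolve(token uuid.UUID) (uuid.UUID, error) {
        t.mu.Lock()
        defer t.mu.Unlock()
        jobID, ok := t.jobs[token]
        if !ok {
            return uuid.Nil, errors.New("unknown token")
        }
        return jobID, nil
    }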
This drops the transition tests in server_test.go. These don't make much
sense anymore, because there's only one allowed transition, from running
to finished. They heavily relied on job slot ids, which are not easily
accessible with the `TestRoute` API. Overall, adjusting this seemed like
too much work for their benefit.
The code generator uses the `operationID` field to generate server
handlers, client functions, and types. Use simpler names to make the
generated code easier to read.
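For example, a route ends up looking roughly like this in openapi.yml
(illustrative, not the real spec):

    paths:
      /status:
        get:
          operationId: getStatus
          summary: Get the status of this API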
This kind of common base path is better set in the top-level
`server.url` field, so that it can be adjusted.
For now, drop it completely, as we already broke the consistency when
introducing the `/status` route.
This change breaks backwards compatibility, but we're not yet promising
a stable worker API to anyone.
Write an openapi spec for the worker API and use `deepmap/oapi-codegen`
to generate scaffolding for the server-side using the `labstack/echo`
server.
Incidentally, echo by default returns errors in the same format that the
worker API has always used:
{ "message": "..." }
The API itself is unchanged to make this change easier to understand. It
will be changed to better suit our needs in future commits.
Notes:
At the moment this will not run any actual tests, because we first want
to make sure the pipeline configuration is correct.
run_tests() will call the deploy.sh script and then do nothing, because
the "dummy-" prefix doesn't match any actual tests!
We are storing some data in the user's home directory, so let's print
the username so we know which user that is.
In particular, this would tell us which user has been authorized to log
in via ssh.
Signed-off-by: Tom Gundersen <teg@jklm.no>
More specifically, only those that are needed in
/cmd/osbuild-image/tests.
This patch can be merged with the previous one if we want to make sure
every commit can be built, but I'm going to keep it like this for now so
that we can easily see the changes.
We need this for greenboot-status in the RHEL for Edge images. This
updates the generator for x86_64 and aarch64 and updates the test cases
for rhel-edge-commit.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We now have greenboot available, so update the packages and services.
Also add exclude sections, as subscription-manager is for some reason
getting pulled in, which brings in dnf and other deps we explicitly don't
want.
Signed-off-by: Peter Robinson <pbrobinson@gmail.com>
[teg: minor fixes and squash several fixup commits]
Explicitly specify the cluster and the default resource pool
when importing, because the import process creates a temporary VM,
which requires a ResourcePool to provision. The same applies when
provisioning a VM.
Now that we've reduced how much of our PSI quota we are using so the
OpenStack boot tests will work, we need to use AWS for jobs more often.
This should allow test runs to complete a little sooner by freeing up
PSI resources for the jobs that are only able to run there.
Signed-off-by: Major Hayden <major@redhat.com>
Prior to this commit we only had support for username/password authentication
in the koji integration. This wasn't particularly useful because this
auth type isn't used in any production instance.
This commit adds support for GSSAPI/Kerberos authentication.
The implementation uses the kerby library, which is a very lightweight
wrapper around the C GSSAPI library.
Also, the koji unit test and the run-koji-container script were modified
so that GSSAPI auth is fully tested.
In the near future, we will need to communicate with Koji using HTTPS.
This will surely bring the need to ignore bad certificates or to provide
our own self-signed ones. Thus, this commit prepares the Koji integration
by adding a way to accept a custom HTTP transport, which can be used to
customize the TLS settings.
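For example, a caller that needs to trust a self-signed certificate could
build a transport like this and pass it in (the helper name is illustrative;
the types are from the standard library):

    package example

    import (
        "crypto/tls"
        "crypto/x509"
        "net/http"
    )

    // transportWithCA builds an http.Transport that trusts a custom,
    // PEM-encoded CA certificate. Setting InsecureSkipVerify in the
    // tls.Config instead would disable verification entirely, which is
    // only appropriate for testing.
    func transportWithCA(caPEM []byte) *http.Transport {
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        return &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }
    }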
Previously, a Koji instance could be either logged-in or not logged-in.
This change disallows that: now, a Koji instance is created by calling
koji.Login, so it is always logged-in. This should lead to more
robust code.
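The resulting shape of the API is roughly the following; the fields and the
Login signature are simplified assumptions, not the real code:

    package example

    import (
        "errors"
        "net/http"
    )

    // Koji represents a logged-in session; there is no way to construct
    // one without going through Login.
    type Koji struct {
        server    string
        transport http.RoundTripper
    }

    // Login is the only constructor, so every Koji value the rest of the
    // code sees is already authenticated.
    func Login(server, user, password string, transport http.RoundTripper) (*Koji, error) {
        if server == "" {
            return nil, errors.New("no server given")
        }
        // ... perform the actual login call against the hub here ...
        return &Koji{server: server, transport: transport}, nil
    }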
Services in GitHub Actions are cool but have some drawbacks:
1) We want to be able to use the container setup locally, therefore there's
the run-koji-container script, which does exactly the same setup as the one
defined in GitHub Actions. We don't want duplication, though.
2) In the near future, we will need a more complicated setup - generating
certificates before a container is started. This is not possible with
the current GitHub Actions capabilities.
This commit removes the container setup from GitHub Actions and just reuses
the run-koji-container script in the GH Actions environment. This way we
have only one setup, which is also more flexible.
run-koji-container now has two actions, start and stop:
- ./run-koji-container.sh start
- ./run-koji-container.sh stop
The start action starts all containers. When it exits, all containers are
started and running in the background. To stop and remove them, use the stop
action.
This change is needed so we're able to easily use this script also in the CI
environment.
The setup should be container engine agnostic. This change allows the script
to be run on systems which prefer docker over podman (e.g. GitHub Actions).
- inside RunJob() there is a deferred function which will remove
the entire temporary directory in which images are created, including
the streamOptimized file
- inside testBootUsingVMware(), which wants to use this function,
there is already a deferred function which removes the converted
image
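The pattern described in both points above, in miniature (paths and names are
illustrative):

    package example

    import (
        "os"
        "path/filepath"
    )

    // runJob sketches the cleanup in RunJob(): removing the temporary
    // directory also removes the streamOptimized image inside it, so the
    // test must not defer a second removal for files in that directory.
    func runJob() error {
        tmpDir, err := os.MkdirTemp("", "osbuild-job-")
        if err != nil {
            return err
        }
        defer os.RemoveAll(tmpDir)

        imagePath := filepath.Join(tmpDir, "image.vmdk")
        _ = imagePath // the image would be built and converted here
        return nil
    }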
osbuild supports returning metadata about each of the stage/assembler
runs. Parse the results from the rpm stage, which contain the header
fields of the installed RPMs, in particular the MD5 sums of the RPMs in
question. This information needs to be passed as metadata to koji
when uploading images.
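A sketch of how those results might be decoded; the JSON field names and
types below are assumptions for illustration, not the exact osbuild output:

    package example

    import "encoding/json"

    // rpmPackageMetadata sketches the per-package header fields reported
    // by the rpm stage, including the MD5 signature digest.
    type rpmPackageMetadata struct {
        Name    string `json:"name"`
        Version string `json:"version"`
        Release string `json:"release"`
        Arch    string `json:"arch"`
        SigMD5  string `json:"sigmd5"`
    }

    // rpmStageMetadata wraps the package list as it appears in the stage's
    // metadata blob.
    type rpmStageMetadata struct {
        Packages []rpmPackageMetadata `json:"packages"`
    }

    // parseRPMMetadata decodes the raw metadata returned for the rpm stage.
    func parseRPMMetadata(raw json.RawMessage) (*rpmStageMetadata, error) {
        var md rpmStageMetadata
        if err := json.Unmarshal(raw, &md); err != nil {
            return nil, err
        }
        return &md, nil
    }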
Signed-off-by: Tom Gundersen <teg@jklm.no>
In the same way `osbuild.Manifest` is the input to the osbuild API,
`osbuild.Result` is the output. Move it to the `osbuild` package where
it belongs.
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>