Rather than using the arbitrary port 8701, use the standard port 443. The
worker API will remain on a separate port; as long as the two APIs are
exposed by the same binary, it has to stay separate, at 8700.
Move the test instance of koji on localhost from 443 to 4343, to avoid a
conflict.
In a follow-up we should also give this API a prefix, so the cloud API
can share the same port with it.
Signed-off-by: Tom Gundersen <teg@jklm.no>
CHANGE_ID is set for PRs, which is why it worked during the review
process, but in the master branch it is not set. BRANCH_NAME is set for
both PRs and the master branch. In the case of PRs, it is in the form
`PR-<pull request number>`.
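For illustration, a minimal Go sketch of deriving the PR number from BRANCH_NAME; the helper and where it runs are hypothetical, only the `PR-<number>` form comes from above:

```go
// Hypothetical helper: BRANCH_NAME is "PR-<number>" for pull requests and the
// plain branch name (e.g. "master") otherwise.
package main

import (
	"fmt"
	"os"
	"strings"
)

// prNumber returns the pull request number and true if BRANCH_NAME has the
// "PR-<number>" form, otherwise "" and false.
func prNumber() (string, bool) {
	branch := os.Getenv("BRANCH_NAME")
	if strings.HasPrefix(branch, "PR-") {
		return strings.TrimPrefix(branch, "PR-"), true
	}
	return "", false
}

func main() {
	if n, ok := prNumber(); ok {
		fmt.Println("building pull request", n)
	} else {
		fmt.Println("building branch", os.Getenv("BRANCH_NAME"))
	}
}
```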
This sets up containers running koji and supporting infrastructure, and
calls the osbuild-composer-koji API to build an image and push it into
our testing instance.
koji-compose.py and various fixes by Christian Kellner.
Signed-off-by: Tom Gundersen <teg@jklm.no>
In the same way we require authentication for the worker API, require
clients of the koji API to authenticate using SSL client certificates.
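A minimal sketch of requiring client certificates with the standard library; the certificate paths and listen address are assumptions, not the actual composer configuration:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// CA that signed the client certificates; the path is an assumption.
	caPEM, err := os.ReadFile("/etc/osbuild-composer/ca-crt.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":443", // illustrative
		TLSConfig: &tls.Config{
			ClientAuth: tls.RequireAndVerifyClientCert, // reject clients without a valid certificate
			ClientCAs:  caPool,
		},
	}
	log.Fatal(server.ListenAndServeTLS(
		"/etc/osbuild-composer/composer-crt.pem", // server cert/key paths are assumptions
		"/etc/osbuild-composer/composer-key.pem",
	))
}
```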
Signed-off-by: Tom Gundersen <teg@jklm.no>
Contrary to our assumption, we cannot initialize the build with the
link to the task. We can only update the link once the build has
completed.
This seems like a bug in koji, but we keep it like this for now.
The API of kolo/xmlrpc changed after the commit that is shipped in
Fedora. Pin the vendored version to that and adjust the API usage.
This should make the RPM compile on both RHEL and Fedora.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add support for both cancelling and failing a build. This is tested, but
not hooked up, as we need some more architecture work before that makes
sense.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Move to requiring CGInitBuild to be called before CGImport. In the
future we could make the former optional again, but for now we want to
allow the caller to have done CGInitBuild themselves, and for composer to
only do the CGImport using the passed-in build_id and token.
Also rename and document some struct fields in the metadata struct to
make them more specific to our use-case and hopefully easier to read.
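A sketch of the enforced call order, assuming a hypothetical Go wrapper around the Koji hub calls (the method signatures and field names are illustrative):

```go
// Package kojiflow is illustrative only.
package kojiflow

// BuildInfo carries what CGInitBuild returns; field names are assumptions.
type BuildInfo struct {
	BuildID uint64
	Token   string
}

// Koji is a hypothetical interface over composer's Koji wrapper.
type Koji interface {
	CGInitBuild(name, version, release string) (*BuildInfo, error)
	CGImport(metadata interface{}, uploadDir string, buildID uint64, token string) error
}

// importCompose shows the order this commit enforces: CGInitBuild happens
// first (possibly done by the caller), and composer only performs CGImport
// with the build_id and token it was given.
func importCompose(k Koji, metadata interface{}, uploadDir string) error {
	info, err := k.CGInitBuild("example-build", "1", "1")
	if err != nil {
		return err
	}
	return k.CGImport(metadata, uploadDir, info.BuildID, info.Token)
}
```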
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far, composes created by kojiapi didn't have any targets. This commit
adds the koji target to them.
This is the last piece of the puzzle. From now on, osbuild-composer has
a koji API, which is actually able to upload images to Koji! Yay!
Introduce a target for Koji and hook it up in the worker, so that if the
koji target is specified, the image is uploaded to Koji.
[teg: use system kerberos config rather than reading from env]
Add a systemd socket for the Koji API. If enabled when osbuild-composer.service
is started, the service will also listen on the socket and serve the Koji API
there.
Note that the Koji API doesn't upload to Koji yet; this still needs to be hooked
up.
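A minimal sketch of the socket-activation pattern, assuming the go-systemd activation package; the fallback address and handler are illustrative:

```go
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	// Sockets passed in by systemd when the .socket unit is enabled.
	listeners, err := activation.Listeners()
	if err != nil {
		log.Fatal(err)
	}

	var l net.Listener
	if len(listeners) > 0 {
		l = listeners[0] // serve the Koji API on the activated socket
	} else {
		l, err = net.Listen("tcp", "localhost:443") // fallback; address is an assumption
		if err != nil {
			log.Fatal(err)
		}
	}

	log.Fatal(http.Serve(l, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"OK"}`))
	})))
}
```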
Based on a patch from Tom Gundersen, thanks!
This just translates between the OpenAPI spec and our internal
API.
This still lacks tests, but a follow-up commit adds integration tests.
`internal/kojiapi/openapi.gen.go` was automatically generated from
`internal/kojiapi/openapi.yml`. To regenerate use `go generate ./...`.
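A sketch of what the generate directive might look like; the exact oapi-codegen flags are an assumption:

```go
// Package kojiapi: the directive below is a sketch of how openapi.gen.go is
// produced from openapi.yml via `go generate ./...`.
package kojiapi

//go:generate go run github.com/deepmap/oapi-codegen/cmd/oapi-codegen --package kojiapi --generate types,server,spec -o openapi.gen.go openapi.yml
```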
Signed-off-by: Tom Gundersen <teg@jklm.no>
This adds the OpenAPI spec for the new composer-koji API. This API is meant
to expose just the functionality needed to generate images and push them to
koji.
Each compose may consist of several images, each image may have a
different architecture and image type, and the set of repositories with
their contents may be distinct. However, a compose is restricted to one
distro and one koji transaction.
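For illustration, a hypothetical Go shape of a compose request matching the description above; the real field names are defined by openapi.yml:

```go
// Package kojiapi (sketch): hypothetical Go shape of a compose request.
package kojiapi

type Repository struct {
	BaseURL string `json:"baseurl"`
}

type ImageRequest struct {
	Architecture string       `json:"architecture"`
	ImageType    string       `json:"image_type"`
	Repositories []Repository `json:"repositories"`
}

type KojiParams struct {
	Server string `json:"server"`
	TaskID int    `json:"task_id"`
}

type ComposeRequest struct {
	Distribution  string         `json:"distribution"`   // one distro per compose
	Koji          KojiParams     `json:"koji"`            // one koji transaction per compose
	ImageRequests []ImageRequest `json:"image_requests"`  // several images, each with its own arch/type/repos
}
```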
Signed-off-by: Tom Gundersen <teg@jklm.no>
When using random names for artifacts like AWS snapshots, or Azure
images, it becomes hard to clean them up in case of CI failure. See this
issue for more details:
https://github.com/osbuild/osbuild-composer/issues/942
This PR introduces predictable names so that we can easily determine
which artifact belongs to which PR and therefore we can decide to wipe
all resources that are not needed any more.
The worker API returns errors of the form:
{ "message": "..." }
Print the message of those errors when receiving an error on the client.
This adds an `Error` type to openapi.yml and marks all routes as
returning it on 4XX and 5XX.
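A minimal client-side sketch of surfacing that message; the URL is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// apiError mirrors the { "message": "..." } error body.
type apiError struct {
	Message string `json:"message"`
}

func main() {
	resp, err := http.Get("http://localhost:8700/status") // URL is an assumption
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 400 {
		var e apiError
		if err := json.NewDecoder(resp.Body).Decode(&e); err == nil && e.Message != "" {
			log.Fatalf("request failed: %s", e.Message) // print the server's message, not just the status code
		}
		log.Fatalf("request failed with status %d", resp.StatusCode)
	}
	fmt.Println("ok")
}
```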
Instead of sending a `token` to workers, send back two URLs:
1. "location": URL at which the job can be inspected (GET) and updated
(PATCH).
2. "artifact_location": URL to which artifacts should be uploaded.
The actual URLs remain the same, but a client does not need to stitch
them together manually (except appending the artifact's name).
Unfortunately, the client code generated by `deepmap` does not lend
itself to this style of APIs. Use standard http.Client again, which is a
partial revert of 0962fbd30.
The job token will be deprecated in favor of URLs.
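A sketch of the worker-side flow with a plain http.Client; the example URLs, status body, and upload method are assumptions, only the "location"/"artifact_location" fields and the appended artifact name come from above:

```go
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{}

	// In practice these come from the job response; the values are illustrative.
	location := "http://localhost:8700/jobs/some-token"
	artifactLocation := "http://localhost:8700/artifacts/some-token/"

	// Report the result by PATCHing the job's "location" URL.
	req, err := http.NewRequest("PATCH", location, bytes.NewBufferString(`{"status":"FINISHED"}`))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// Upload an artifact by appending its name to "artifact_location".
	artifact := bytes.NewReader([]byte("...")) // stands in for the build output
	upload, err := http.NewRequest("PUT", artifactLocation+"disk.qcow2", artifact)
	if err != nil {
		log.Fatal(err)
	}
	resp, err = client.Do(upload)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```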
If a key is not set, use a new random UUID. Also, don't overwrite the
options struct with that new key.
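A minimal sketch of that behaviour, assuming a hypothetical options struct:

```go
// Package example is illustrative only.
package example

import "github.com/google/uuid"

// uploadOptions stands in for the real options struct.
type uploadOptions struct {
	Key uuid.UUID
}

// effectiveKey returns the key from the options if one is set, otherwise a
// freshly generated random UUID; the options struct itself is never modified.
func effectiveKey(options uploadOptions) uuid.UUID {
	if options.Key != uuid.Nil {
		return options.Key
	}
	return uuid.New()
}
```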
Don't give out job ids to workers, but `tokens`, which serve as an
indirection. This way, restarting composer won't confuse it when a stray
worker returns a result for a job that was still running. Also,
artifacts are only moved to the final location once a job finishes.
This change breaks backwards compatibility, but we're not yet promising
a stable worker API to anyone.
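A sketch of the indirection, with illustrative types and names rather than the actual implementation:

```go
// Package jobqueue (sketch): workers only ever see the token, and composer
// maps it back to the job.
package jobqueue

import (
	"errors"
	"sync"

	"github.com/google/uuid"
)

type Server struct {
	mu     sync.Mutex
	tokens map[uuid.UUID]uuid.UUID // token -> job id; empty again after a restart
}

func NewServer() *Server {
	return &Server{tokens: make(map[uuid.UUID]uuid.UUID)}
}

// DequeueJob hands out a fresh token instead of the job id.
func (s *Server) DequeueJob(jobID uuid.UUID) uuid.UUID {
	token := uuid.New()
	s.mu.Lock()
	s.tokens[token] = jobID
	s.mu.Unlock()
	return token
}

// FinishJob resolves the token; results for unknown tokens (e.g. from a stray
// worker after a restart) are rejected instead of confusing composer.
func (s *Server) FinishJob(token uuid.UUID) (uuid.UUID, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	jobID, ok := s.tokens[token]
	if !ok {
		return uuid.Nil, errors.New("unknown job token")
	}
	delete(s.tokens, token)
	return jobID, nil
}
```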
This drops the transition tests in server_test.go. These don't make much
sense anymore, because there's only one allowed transition, from running
to finished. They heavily relied on job slot ids, which are not easily
accessible with the `TestRoute` API. Overall, adjusting this seemed like
too much work for their benefit.
The code generator uses the `operationID` field to generate server
handlers, client functions, and types. Use simpler names to make the
generated code easier to read.
This kind of common base path is better set in the top-level
`server.url` field, so that it can be adjusted.
For now, drop it completely, as we already broke the consistency when
introducing the `/status` route.
This change breaks backwards compatibility, but we're not yet promising
a stable worker API to anyone.
Write an openapi spec for the worker API and use `deepmap/oapi-codegen`
to generate scaffolding for the server-side using the `labstack/echo`
server.
Incidentally, echo by default returns errors in the same format the
worker API has always used:
{ "message": "..." }
The API itself is unchanged to make this change easier to understand. It
will be changed to better suit our needs in future commits.
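A reduced, hypothetical sketch of how the generated scaffolding is wired up (the real ServerInterface and RegisterHandlers are emitted by oapi-codegen from the spec; the operation names here are illustrative):

```go
package worker

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

// ServerInterface is what oapi-codegen generates from the spec, shown here in
// reduced form with a single illustrative operation.
type ServerInterface interface {
	GetStatus(ctx echo.Context) error
}

// apiServer implements the generated interface with our internal logic.
type apiServer struct{}

func (s *apiServer) GetStatus(ctx echo.Context) error {
	return ctx.JSON(http.StatusOK, map[string]string{"status": "OK"})
}

// RegisterHandlers mirrors the generated registration helper.
func RegisterHandlers(e *echo.Echo, si ServerInterface) {
	e.GET("/status", si.GetStatus)
}

// NewRouter wires the handlers into an echo server, whose default error
// handler already responds with { "message": "..." }.
func NewRouter() *echo.Echo {
	e := echo.New()
	RegisterHandlers(e, &apiServer{})
	return e
}
```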
Notes:
ATM this will not run any actual tests, because we first want to make sure
the pipeline configuration is correct.
run_tests() will call the deploy.sh script and then do nothing, because the
"dummy-" prefix doesn't match any actual tests!
We are storing some data in the user's home directory, so let's print
the username so we know what that is.
In particular, this would tell us which user has been authorized to log
in via ssh.
Signed-off-by: Tom Gundersen <teg@jklm.no>
More specifically, only those that are needed in
/cmd/osbuild-image/tests.
This patch can be merged with the previous one if we want to make sure
every commit can be built, but I'm going to keep it like this for now so
that we can easily see the changes.
We need this for greenboot-status, in the RHEL for Edge images. This
updates the generator for x86_64 and aarch64 and updates the test cases
for rhel-edge-commit.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We now have greenboot available, so update the packages and services.
Also add exclude sections, as subscription-manager is for some reason
getting pulled in, which brings in dnf and other deps we explicitly don't
want.
Signed-off-by: Peter Robinson <pbrobinson@gmail.com>
[teg: minor fixes and squash several fixup commits]
Explicitly specify the cluster and the default resource pool
when importing, because the import process creates a temporary VM,
which requires a ResourcePool to provision. The same applies when
provisioning a VM.
Now that we've reduced how much of our PSI quota we are using so the
OpenStack boot tests will work, we need to use AWS for jobs more often.
This should allow test runs to complete a little sooner by freeing up
PSI resources for the jobs that are only able to run there.
Signed-off-by: Major Hayden <major@redhat.com>
Prior to this commit, we only had support for username/password authentication
in the koji integration. This wasn't particularly useful because this
auth type isn't used in any production instance.
This commit adds support for GSSAPI/Kerberos authentication.
The implementation uses the kerby library, which is a very lightweight wrapper
around the C GSSAPI library.
Also, the koji unit test and the run-koji-container script were modified
so the GSSAPI auth is fully tested.
In the near future, we will need to communicate with Koji using HTTPS.
This will surely bring the need for ignoring bad certificates/providing
our own self-signed ones. Thus, this commit prepares the Koji integration
by adding a way to accept a custom http transport which can be used to
customize the TLS settings.
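A minimal sketch of building such a transport; the helper name and certificate path are assumptions, and the constructor that would receive the transport is not shown:

```go
// Package kojitest is illustrative only.
package kojitest

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"os"
)

// kojiTransport builds an *http.Transport with customized TLS settings:
// either skipping verification (for throwaway test instances) or trusting a
// self-signed CA bundle.
func kojiTransport(caPath string, insecure bool) (*http.Transport, error) {
	tlsConfig := &tls.Config{InsecureSkipVerify: insecure}
	if caPath != "" {
		pem, err := os.ReadFile(caPath)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(pem)
		tlsConfig.RootCAs = pool // trust our own self-signed CA
	}
	return &http.Transport{TLSClientConfig: tlsConfig}, nil
}
```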
Previously, a Koji instance could be either logged-in or not logged-in.
This change disallows that: now, the Koji instance is created by calling
koji.Login, so it is always logged-in. This should lead to more
robust code.