Upload target type is currently not returned from the worker, but
hardcoded in the cloudapi code to always return "aws". This makes testing
of the cloudapi for other cloud providers quite complicated.
Since extending the target status information returned from the
worker is currently in progress, work around the situation for now
by returning an empty string as the upload type.
This allows other types of upload targets to be tested as part of the
cloudapi test case.
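A rough sketch of the workaround (the type and helper names here are
illustrative, not the actual generated cloudapi code):

    // Hypothetical stand-in for the generated upload status model.
    type UploadStatus struct {
        Status string `json:"status"`
        Type   string `json:"type"`
    }

    // uploadStatusForJob builds the status returned by the cloudapi; the
    // type used to be hardcoded to "aws" and is left empty for now.
    func uploadStatusForJob(succeeded bool) UploadStatus {
        status := "failure"
        if succeeded {
            status = "success"
        }
        return UploadStatus{Status: status, Type: ""}
    }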
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add support for GCP as an upload target to the internal API.
Extend the cloudapi to allow GCP as an upload target in the compose
request. Regenerate the cloudapi Go code. Add a GCP-specific upload
result component to the API definition, similar to the AWS one. It is not
yet used, but it will be once returning a target-specific result from the
worker is supported.
Add support for the GCP upload target to the worker job implementation.
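As a rough illustration, the GCP upload target options added to the
compose request and passed down to the worker job could look like the
following; the field set is an assumption modelled on the AWS options,
not the final API:

    // Hypothetical GCP upload target options, mirroring the AWS ones; the
    // real names come from the regenerated cloudapi code.
    type GCPUploadRequestOptions struct {
        Region string `json:"region"`
        Bucket string `json:"bucket"`
        // image name, share settings, etc. would live here as well
    }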
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Optionally allow a package set to be included in the compose request.
The specified packages are added to the base packages before
depsolving. As the base packages differ between image types, the package
customizations may have different results for the different images that
are part of the compose request.
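A minimal sketch of the intended behaviour, with a made-up helper name:
the requested packages are appended to the image type's base set before
depsolving.

    // packagesForDepsolve combines the image type's base packages with the
    // packages requested in the compose request.
    func packagesForDepsolve(base, requested []string) []string {
        merged := make([]string, 0, len(base)+len(requested))
        merged = append(merged, base...)
        return append(merged, requested...)
    }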
Signed-off-by: Tom Gundersen <teg@jklm.no>
Expose a more detailed job status result - specifically, include upload status
alongside image status. Expand openapi.yml accordingly and add an UploadStatus
field to the OSBuildJobResult struct. At the moment, only represent the
"success" and "failure" states of UploadStatus - to differentiate between
"pending" and "running" would involve significant design decisions and should be
addressed in a separate commit.
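A reduced sketch of the result struct after this change (the real struct
carries more fields than shown here; the layout is illustrative):

    type OSBuildJobResult struct {
        Success bool `json:"success"`
        // UploadStatus currently only takes the values "success" and
        // "failure"; "pending" and "running" are left for a later commit.
        UploadStatus string `json:"upload_status"`
    }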
When rpmmd's Depsolve function is called, we need to pass in the image
type's excluded packages. These excluded packages are retrieved when we
get the packages we include from each image type.
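Schematically (a hedged sketch only; the real rpmmd.Depsolve takes more
parameters, such as the repositories to resolve against):

    type PackageSpec struct {
        Name    string
        Version string
    }

    // The exclude set from the image type now travels alongside the
    // include set into the depsolve call.
    type Depsolver interface {
        Depsolve(specs, excludeSpecs []string) ([]PackageSpec, error)
    }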
I thought rand in Go was auto-seeded, but I was wrong; see [1].
This commit adds seed initialization.
[1]: https://golang.org/pkg/math/rand/#Seed
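For reference, the kind of initialization this adds (math/rand is
deterministic unless explicitly seeded):

    package main

    import (
        "math/rand"
        "time"
    )

    func main() {
        // Seed the default source once at startup; without this, the
        // sequence of rand values is the same on every run.
        rand.Seed(time.Now().UnixNano())
    }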
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Imagine this situation: You have a RHEL system booted from an image produced
by osbuild-composer. On this system, you want to use osbuild-composer to
create another image of RHEL.
However, there's currently something funny with partitions:
All RHEL images built by osbuild-composer contain a root xfs partition. The
interesting bit is that they all share the same xfs partition UUID. This might
sound like a good thing for reproducibility but it has a quirk.
The issue appears when osbuild runs the qemu assembler: it needs to mount all
partitions of the future image to copy the OS tree into it.
Imagine that osbuild-composer is running on a system booted from an image
produced by osbuild-composer. This means that its root xfs partition has this
uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
When osbuild-composer builds an image on this system, it runs osbuild that
runs the qemu assembler at some point. As I said previously, it will mount
all partitions of the future image. That means that it will also try to
mount the root xfs partition with this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
Do you remember this one? Yeah, it's the same one as before. However, the xfs
kernel driver doesn't like that: it keeps a global table[1] of all mounted
xfs partitions and refuses to mount two xfs partitions with the same uuid.
I mean... uuids are meant to be unique, right?
This commit changes the way we build RHEL 8.4 images: Each one now has a
unique uuid. It's now literally a unique universally unique identifier. haha
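The fix boils down to generating a fresh UUID for each image build
instead of reusing a hard-coded one; a minimal sketch (the helper name
and the use of github.com/google/uuid are illustrative):

    import "github.com/google/uuid"

    // newRootFilesystemUUID returns a fresh UUID for the root partition,
    // so two images built from the same definition no longer collide.
    func newRootFilesystemUUID() string {
        return uuid.New().String()
    }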
[1]: a349e4c659/fs/xfs/xfs_mount.c (L51)
Previously, baseurl was required in openapi.yaml. In order to add support
for metalink and mirrorlist repos as well, make all three optional, since openapi
does not support mutually exclusive parameters. Instead, enforce this logic
in server.go, and if no repo has been specified, return a 400 bad request error.
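A sketch of the check that moves into server.go (type and function names
are illustrative):

    import "errors"

    // All three source fields are optional in the spec, so the handler
    // has to enforce that at least one of them is present.
    type Repository struct {
        Baseurl    *string `json:"baseurl,omitempty"`
        Metalink   *string `json:"metalink,omitempty"`
        Mirrorlist *string `json:"mirrorlist,omitempty"`
    }

    func validateRepository(repo Repository) error {
        if repo.Baseurl == nil && repo.Metalink == nil && repo.Mirrorlist == nil {
            // the caller turns this into a 400 Bad Request response
            return errors.New("repository needs one of baseurl, metalink or mirrorlist")
        }
        return nil
    }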
The status of a job may depend on the status of its dependencies,
as we do not, for instance, repeat the failed state in each dependent
job.
Also return the list of dependencies so these can be queried too.
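Illustratively (field and function names made up): a caller that wants
the effective state folds in the dependencies' states, which is why the
dependency list is now returned as well.

    type JobStatus struct {
        Finished bool
        Failed   bool
    }

    // effectiveFailed reports whether a job should be considered failed,
    // taking the states of its dependencies into account.
    func effectiveFailed(job JobStatus, deps []JobStatus) bool {
        if job.Failed {
            return true
        }
        for _, dep := range deps {
            if dep.Failed {
                return true
            }
        }
        return false
    }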
Most of the worker API is now untyped, but keep Enqueue() typed to
ensure the job objects match the names in the queue. This means we
must add a version of Enqueue() for each job type we support.
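Schematically (the helper below and the job-type naming are assumptions,
not the real code):

    type OSBuildJob struct {
        // manifest, targets, ... elided
    }

    type Server struct{}

    // enqueue is the shared, untyped path into the job queue (stubbed
    // here; the real implementation serializes args and stores them under
    // the given job type).
    func (s *Server) enqueue(jobType string, args interface{}) (string, error) {
        _ = args
        return "new-job-id", nil
    }

    // EnqueueOSBuild is one of the thin typed wrappers: it pins the
    // OSBuildJob argument type to the "osbuild" job type in the queue.
    func (s *Server) EnqueueOSBuild(job *OSBuildJob) (string, error) {
        return s.enqueue("osbuild", job)
    }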
The worker server was heavily tied to OSBuildJob(Result). Untie it so
that it can deal with different job types in the future.
This necessitates a change in the jobqueue: Dequeue() now returns the
job type, as well as job arguments as json.RawMessage. This is so that
the server can wait on multiple job types with different argument
types.
The weldr, composer, and koji APIs continue to use only "osbuild" jobs.
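A sketch of the changed contract (the exact signature is illustrative;
the real interface may carry more):

    import (
        "context"
        "encoding/json"
    )

    type JobQueue interface {
        // Dequeue blocks until a job whose type is one of jobTypes is
        // available and hands back the type plus the raw JSON arguments,
        // so the caller can unmarshal them into the matching Go type.
        Dequeue(ctx context.Context, jobTypes []string) (id string, jobType string, args json.RawMessage, err error)
    }

The server side then switches on the returned job type and unmarshals
the arguments into the corresponding struct.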
Workers reported status via an `osbuild.Result`, which only includes
osbuild output. Make it report OSBuildJobResult instead, which was meant
to be used for this purpose and is already used as the result type in
the jobqueue.
While at it, add any errors produced by targets into this struct, as
well as an overall success flag.
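Schematically, the worker folds target errors and the overall outcome
into one result (names illustrative; the real struct carries more fields):

    type OSBuildJobResult struct {
        Success      bool     `json:"success"`
        TargetErrors []string `json:"target_errors,omitempty"`
    }

    func buildJobResult(osbuildSucceeded bool, targetErrs []error) OSBuildJobResult {
        var r OSBuildJobResult
        for _, err := range targetErrs {
            r.TargetErrors = append(r.TargetErrors, err.Error())
        }
        r.Success = osbuildSucceeded && len(r.TargetErrors) == 0
        return r
    }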
Note that this breaks older workers returning the result of an osbuild
job to a new composer. I think this is fine in this case, for two
reasons:
1. We don't support running different versions of the worker and
composer in the weldr API, and remote workers aren't widely used yet.
2. Both osbuild.Result and worker.OSBuildJobResult have a top-level
`Success` boolean. Thus, logs are lost in such cases, but the overall
status of the compose is not.
Add "image_name" and "stream_optimized" fields to the osbuild job as
replacement for the local target options. The former signifies the name
of the uploaded artifact and whether an artifact should be uploaded at
all (only weldr API). The latter will be deprecated at some point, when
osbuild itself can make stream-optimized vmdk images.
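A sketch of the two new fields on the job (the rest of the struct is
elided; the Go field names and tags are illustrative):

    type OSBuildJob struct {
        // manifest and upload targets elided

        // ImageName is the name under which the artifact is kept; an
        // empty name means no artifact is reported back.
        ImageName string `json:"image_name,omitempty"`

        // StreamOptimized requests a stream-optimized vmdk and goes away
        // once osbuild can produce those itself.
        StreamOptimized bool `json:"stream_optimized,omitempty"`
    }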
This change separates what have always been two distinct concepts:
artifacts that are reported back to the composer node (in practice
always running on the same machine), and upload targets to clouds and
such. Separating them makes it easier to add job types that only allow
one upload target while keeping artifacts.
Keep the local target around, so that jobs that are scheduled can still
be run after an upgrade.
Don't use common.State anymore, because it has different values from
what's defined in openapi.yml. It makes sense to have these strings
defined in the same package as the spec — ideally, the code generator
would make them for us.
While at it, add a "running" status.
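For illustration only, the shape of what ends up next to the spec (the
exact constant names and value set come from openapi.yml, not from this
sketch):

    // Status strings used by the API, kept in the same package as the
    // spec so they cannot drift from openapi.yml (illustrative subset).
    const (
        StatusPending = "pending"
        StatusRunning = "running"
        StatusSuccess = "success"
        StatusFailure = "failure"
    )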
Fix the api.sh test to use these new statuses. Thanks to Ondřej Budai
for an additional fix there.
This removes the osbuild-composer-cloud package, binary, systemd units,
the (unused) test binary, and the (only-run-on-RHEL) test in aws.sh.
Instead, move the cloud API into the main package, using the same
socket as the koji API, osbuild-composer-api.socket. Expose it next to
the koji API on route `/api/composer/v1`.
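Schematically, both APIs are served from the same socket and
distinguished by path prefix (the koji path and handler wiring below are
assumptions):

    import "net/http"

    // newAPIMux mounts the koji and cloud APIs side by side on one listener.
    func newAPIMux(koji, cloud http.Handler) *http.ServeMux {
        mux := http.NewServeMux()
        mux.Handle("/api/composer-koji/v1/", koji) // assumed koji prefix
        mux.Handle("/api/composer/v1/", cloud)
        return mux
    }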
This is a backwards-incompatible change, but only to the -cloud parts,
which have been marked as subject to change.