Uploads an OCI image to OCI object storage, and generates a
pre-authenticated request for the object, which can be used to import it
into custom images.
The osbuild export is specific to the upload target, and different
targets may require different exports. While osbuild-composer does not
yet support multiple exports for osbuild jobs, this prepares the ground
for such support in the future.
Backward compatibility with older implementations of the composer
and workers is kept at the JSON (un)marshaling level: the JSON
message is always a super-set of the old and new ways of providing the
exports to the osbuild job.
The uploading of artifacts back to the worker server for the on-premise
(Weldr) use case was signaled to the worker by setting the `ImageName`
in the `OSBuildJob` definition. The code also relies on the osbuild
exports being specified in the `OSBuildJob`, instead of in the target
(this is not implemented yet).
Prepare the ground for moving the osbuild export definition from
`OSBuildJob` to `Target` by introducing an explicit "Worker Server"
upload target. This target will signal to the worker that it should
upload the image back to the worker server. The new target is not yet
used by any API implementation.
Extend the worker osbuild job implementation to handle the new upload
target.
The filename of the image as produced by osbuild for a given export is
currently set in each target options type in the `Filename` struct
member. However, the value is not really specific to any target type,
but to the specific export used for the target. For this reason, move the
value from the target type options to the `Target` struct, inside a new
struct `OsbuildArtifact`, under the name `ExportFilename`.
Backward compatibility with older implementations of the composer
and workers is kept at the JSON (un)marshaling level: the JSON
object is always a super-set of the old and new ways of providing the
export filename in the Target.
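As a rough illustration of this super-set approach, here is a minimal sketch
of the unmarshaling side, with simplified `Target` and `OsbuildArtifact`
types; the exact field and JSON tag names in composer may differ:

```go
package target

import "encoding/json"

type OsbuildArtifact struct {
	// Filename of the image produced by osbuild for the export used by this target.
	ExportFilename string `json:"export_filename"`
}

type Target struct {
	Name            string          `json:"name"`
	OsbuildArtifact OsbuildArtifact `json:"osbuild_artifact"`
	Options         json.RawMessage `json:"options"`
}

// UnmarshalJSON accepts the super-set message: when the new
// `osbuild_artifact` object is missing, it falls back to the legacy
// `filename` inside the target-specific options, so jobs written by an
// older composer can still be read.
func (t *Target) UnmarshalJSON(data []byte) error {
	var raw struct {
		Name            string           `json:"name"`
		OsbuildArtifact *OsbuildArtifact `json:"osbuild_artifact"`
		Options         json.RawMessage  `json:"options"`
	}
	if err := json.Unmarshal(data, &raw); err != nil {
		return err
	}
	t.Name = raw.Name
	t.Options = raw.Options
	if raw.OsbuildArtifact != nil {
		t.OsbuildArtifact = *raw.OsbuildArtifact
		return nil
	}
	// Legacy message: the filename still lives in the target options.
	var legacy struct {
		Filename string `json:"filename"`
	}
	if len(raw.Options) > 0 {
		if err := json.Unmarshal(raw.Options, &legacy); err != nil {
			return err
		}
	}
	t.OsbuildArtifact = OsbuildArtifact{ExportFilename: legacy.Filename}
	return nil
}
```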
Completely remove the use of the `local` target from all code where it is
not required for backward compatibility. The target has not been used in
composer for some time already, but some unit tests still used its data
structures. Mark the target as deprecated and adjust all unit tests that
depended on it.
The backward compatibility is kept mostly so that long-running
osbuild-composer instances which were upgraded can still read old jobs
from the store.
While a target with the same intention will be reintroduced, the current
`local` target data structures contain many fields which would not be
relevant for the new target.
In addition, while the "local" target will only ever be used by the Weldr
API, the name would be a bit misleading. Although the worker usually
runs on the same system when using the Weldr API, there is no hard
requirement enforcing this setup. In reality, the worker will be
uploading the image back to the worker server, so there is room for a
better name.
Add a new generic container registry client via a new `container`
package. Use this to create a command line utility as well as a
new upload target for container registries.
The code uses the github.com/containers/* packages to interact with
container registries; these are also used by skopeo, podman et al.
One of the dependencies is `proglottis/gpgme`, which uses cgo to bind
libgpgme, so we have to add the corresponding devel package to the
BuildRequires, as well as install it in CI.
Checks will follow later via an integration test.
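As a rough idea of what pushing an image with these packages looks like
(this is a sketch, not the actual `container` package API; the registry,
repository, local layout path and credentials are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/containers/image/v5/copy"
	"github.com/containers/image/v5/docker"
	"github.com/containers/image/v5/oci/layout"
	"github.com/containers/image/v5/signature"
	"github.com/containers/image/v5/types"
)

func main() {
	ctx := context.Background()

	// Source: an OCI layout directory, e.g. as produced by an osbuild export.
	srcRef, err := layout.NewReference("./output/container", "latest")
	if err != nil {
		log.Fatal(err)
	}

	// Destination: a tagged repository on a registry.
	destRef, err := docker.ParseReference("//registry.example.com/myorg/myimage:latest")
	if err != nil {
		log.Fatal(err)
	}

	policyCtx, err := signature.NewPolicyContext(&signature.Policy{
		Default: signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer policyCtx.Destroy()

	// Copy the image to the registry, authenticating with static credentials.
	_, err = copy.Image(ctx, policyCtx, destRef, srcRef, &copy.Options{
		DestinationCtx: &types.SystemContext{
			DockerAuthConfig: &types.DockerAuthConfig{
				Username: "user",
				Password: "secret",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```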
Use case
--------
If the Endpoint is not set and the Region is - upload to AWS S3
If both the Endpoint and Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (using the configuration)
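Sketched in code, the selection rule above looks roughly like this (the
helper name and return values are illustrative, not the actual
implementation):

```go
package worker

// s3TargetKind is a hypothetical helper mirroring the rules above.
func s3TargetKind(endpoint, region string) string {
	switch {
	case endpoint == "" && region != "":
		return "aws-s3" // plain AWS S3
	case endpoint != "" && region != "":
		return "generic-s3-weldr" // Generic S3, parameters passed via the Weldr API
	case endpoint == "" && region == "":
		return "generic-s3-composer" // Generic S3, parameters from the composer configuration
	default:
		return "unsupported"
	}
}
```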
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
Composer API for Generic S3 requires that all connection parameters be taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
jobimpl-osbuild
---------------
Add GenericS3Creds to struct
Add method to create AWS with Endpoint for Generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to use the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint to the AWS session
Set s3ForcePathStyle to true if endpoint was set
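For illustration, roughly what this looks like with aws-sdk-go v1 (the
helper name and credential handling here are placeholders, not the actual
awscloud API):

```go
package awscloud

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newSessionWithEndpoint is a hypothetical helper: it builds an S3-capable
// session that talks to a generic S3 service when an endpoint is given.
func newSessionWithEndpoint(region, endpoint, accessKey, secretKey string) (*session.Session, error) {
	cfg := &aws.Config{
		Region:      aws.String(region),
		Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
	}
	if endpoint != "" {
		// Generic S3 services are typically addressed by path rather than
		// by virtual-hosted bucket names.
		cfg.Endpoint = aws.String(endpoint)
		cfg.S3ForcePathStyle = aws.Bool(true)
	}
	return session.NewSession(cfg)
}
```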
Target
------
Define a new target type for the GenericS3Target and Options
Handle unmarshaling of the target options and result for the Generic S3
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of the providers' upload settings
main
----
Add a section in the main config for the Generic S3 service for credentials
If provided pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
--------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLab CI - Extend the libvirt test case to include the two new tests
Uploads an artifact to an S3 bucket and returns a presigned URL to allow
the user to download the file.
Although it uses a lot of common code with the AWS AMI upload target,
it's treated as a completely separate target.
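A rough sketch of the upload-and-presign flow with aws-sdk-go v1; the
bucket, key, and expiry below are placeholders:

```go
package main

import (
	"log"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))

	file, err := os.Open("image.qcow2")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	// Upload the artifact.
	uploader := s3manager.NewUploader(sess)
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("image.qcow2"),
		Body:   file,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Presign a download URL for the user.
	req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("image.qcow2"),
	})
	url, err := req.Presign(24 * time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("download URL:", url)
}
```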
This commit adds and implements org.osbuild.azure.image target.
First, let's talk about the already implemented org.osbuild.azure target:
The purpose of this target is to authenticate using the Azure Storage
credentials and upload the image file as a Page Blob. Page Blob is basically
an object in storage and it cannot be directly used to launch a VM. To achieve
that, you need to define an actual Azure Image with the Page Blob attached.
For the cloud API, we would like to create an actual Azure Image that is
immediately available for new VMs. The new target accomplishes this.
To achieve this, it must use a different authentication method: Azure OAuth.
The other important difference is that currently, the credentials are stored
on the worker and not in target options. This should lead to better security
because we don't send the credentials over the network. In the future, we
would like to have a credential-less setup using workers in Azure with the
right IAM policies applied, but this requires more investigation and is not
implemented in this commit.
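Just for the authentication piece, a minimal sketch of Azure OAuth
(client credentials) with go-autorest, using values the worker reads from
its own configuration; the IDs below are placeholders:

```go
package main

import (
	"log"

	"github.com/Azure/go-autorest/autorest/azure/auth"
)

func main() {
	// Client-credentials (OAuth) flow; the worker keeps these values in its
	// own configuration instead of receiving them in target options.
	cfg := auth.NewClientCredentialsConfig("client-id", "client-secret", "tenant-id")
	authorizer, err := cfg.Authorizer()
	if err != nil {
		log.Fatal(err)
	}
	// The authorizer would then be attached to the compute client that
	// creates the Azure Image from the uploaded Page Blob.
	_ = authorizer
}
```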
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add support for GCP as an upload target to the internal API.
Extend the cloudapi to allow GCP as an upload target in the compose
request. Regenerate the cloudapi Go code. Add a GCP-specific upload
result component to the API definition, similar to AWS. It is not yet
used, but it will be once returning a target-specific result from
worker is supported.
Add support for GCP upload target to the worker job implementation.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
New upload target for VMWare, similar to the ones for AWS and Azure,
allowing users to set credentials for their vSphere instance.
The commit also includes the function that performs the actual upload.
Introduce a target for Koji and hook it up in the worker, so that if the
Koji target is specified, the image is uploaded to Koji.
[teg: use system kerberos config rather than reading from env]
Go doesn't really do variants, so we must somehow emulate it. The
JSON objects we use are essentially tagged unions, with a `name`
field in reverse domain name notation identifying the type and a
type-specific 'options' object.
In Go we represent this by having a BarOptions interface, which
declares a private method `isBarOptions()`, making sure that only
types in the same package are able to implement it. Each type FooBar
that should belong to the variant implements the interface and provides
a constructor `NewFooBar(options *FooBarOptions) *Bar` that makes sure
the `name` field is set correctly.
This would be enough to represent our types and marshal them into
JSON, but unmarshaling would not work (the json package does not know
about our tags, so it would not know which concrete types to unmarshal
into). We therefore must also implement the json.Unmarshaler interface
for Bar, to select the right type for the Options field.
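A condensed sketch of the pattern with a single hypothetical AWS variant
(names simplified relative to the real Target/Stage/Assembler code):

```go
package target

import (
	"encoding/json"
	"errors"
)

type TargetOptions interface {
	isTargetOptions() // private method: only this package can implement the variant
}

type AWSTargetOptions struct {
	Region string `json:"region"`
	Bucket string `json:"bucket"`
}

func (AWSTargetOptions) isTargetOptions() {}

type Target struct {
	Name    string        `json:"name"`
	Options TargetOptions `json:"options"`
}

// NewAWSTarget makes sure the `name` tag matches the options type.
func NewAWSTarget(options *AWSTargetOptions) *Target {
	return &Target{Name: "org.osbuild.aws", Options: options}
}

// UnmarshalJSON picks the concrete options type based on the `name` tag.
func (t *Target) UnmarshalJSON(data []byte) error {
	var raw struct {
		Name    string          `json:"name"`
		Options json.RawMessage `json:"options"`
	}
	if err := json.Unmarshal(data, &raw); err != nil {
		return err
	}
	var options TargetOptions
	switch raw.Name {
	case "org.osbuild.aws":
		options = new(AWSTargetOptions)
	default:
		return errors.New("unknown target name: " + raw.Name)
	}
	if err := json.Unmarshal(raw.Options, options); err != nil {
		return err
	}
	t.Name = raw.Name
	t.Options = options
	return nil
}
```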
We implement this logic for Target, Stage and Assembler. A handful
of concrete types are also implemented, matching what osbuild
supports.
Signed-off-by: Tom Gundersen <teg@jklm.no>
For now we will hardcode the org.osbuild.local target, so we might
as well fix this up front.
We do not yet support any target types, but for testing purposes we
claim to support 'tar', and we pass a noop tar pipeline to the worker.
This makes introspecting the job-queue API using curl a bit more
pleasant.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is by no means done, and needs more tests, docs and bugfixes,
but push it early so we have a common base to work on.
Based on work by Martin Sehnoutka.
Signed-off-by: Tom Gundersen <teg@jklm.no>