Introduce a new `ComputeImageInsert()` method for importing images into
GCP. It uses the `compute.Images.Insert()` API [1], which has several
advantages over the CloudBuild API currently used for importing images.
Most importantly, the image is imported as-is and no additional cache
files or VMs are created as part of the import process, so there is no
need to clean up cache files after importing the image.
In addition, the import itself is approximately 30% faster for RHEL
images when using the `Insert()` call.
However, the `Insert()` call accepts only a gzipped tarball containing
a RAW disk, unlike the `Import()` call, which accepts virtually any
virtual disk format.
[1] https://cloud.google.com/compute/docs/reference/rest/v1/images/insert
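A minimal sketch of what the call can look like with the
google.golang.org/api/compute/v1 client; the function signature and the
bucket/object parameters below are illustrative, not the actual
implementation:

    import (
        "context"

        compute "google.golang.org/api/compute/v1"
    )

    // ComputeImageInsert imports a gzipped tarball containing a RAW disk
    // from a Storage bucket as a new image (hypothetical signature).
    func ComputeImageInsert(ctx context.Context, project, name, bucket, object string) error {
        service, err := compute.NewService(ctx)
        if err != nil {
            return err
        }
        image := &compute.Image{
            Name: name,
            RawDisk: &compute.ImageRawDisk{
                Source: "https://storage.googleapis.com/" + bucket + "/" + object,
            },
        }
        _, err = service.Images.Insert(project, image).Context(ctx).Do()
        return err
    }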
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the `gce-rhui` image type intended for Google Compute Engine. The image
uses Google's RHUI infrastructure to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the `gce` image type intended for Google Compute Engine. The image
is BYOS (bring your own subscription) and requires registration in
order to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the RHEL-84 `imageTypeS2` structure to contain a pipelines
generator function. Previously, the `imageTypeS2` implementation
defaulted to a single pipelines generator method used only for the EDGE
image types. The ability to pass a different generator function makes
it possible to add new image types relying on osbuild Manifest v2.
Rename the original pipeline generator method to `edgePipelines`.
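A rough sketch of the idea, with illustrative names and a simplified
signature:

    // pipelinesFunc generates the osbuild Manifest v2 pipelines for an
    // image type (simplified, hypothetical signature).
    type pipelinesFunc func(t *imageTypeS2) ([]osbuild.Pipeline, error)

    type imageTypeS2 struct {
        name      string
        pipelines pipelinesFunc // previously hard-coded to edgePipelines
    }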
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Filter the list of repositories passed in the compose request based on
the `image_type_tags` object member. This is the same approach used by
the Weldr API. If `image_type_tags` is not set, the repository is added
to the list unconditionally. If it is set, the repository is added only
if the image type name is in the tags array.
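A simplified sketch of the filtering, assuming a repository config type
with an `ImageTypeTags` field; the helper name is illustrative:

    // filterRepositories keeps a repository if it has no image type tags,
    // or if one of its tags matches the requested image type name.
    func filterRepositories(repos []rpmmd.RepoConfig, imageType string) []rpmmd.RepoConfig {
        var filtered []rpmmd.RepoConfig
        for _, repo := range repos {
            if len(repo.ImageTypeTags) == 0 {
                filtered = append(filtered, repo)
                continue
            }
            for _, tag := range repo.ImageTypeTags {
                if tag == imageType {
                    filtered = append(filtered, repo)
                    break
                }
            }
        }
        return filtered
    }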
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Introduce the `tarArchivePipeline()` function, which returns a pipeline
that creates a tar archive from another pipeline tree referenced by the
pipeline name.
Replace `tarStage()` with `osbuild.NewTarStage()`.
Use the `tarArchivePipeline()` function in the respective image type
pipelines.
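A rough sketch of the helper; the exact `osbuild.NewTarStage()`
signature and the pipeline fields are assumptions:

    // tarArchivePipeline returns a pipeline that tars up the tree produced
    // by the named input pipeline (hypothetical signature).
    func tarArchivePipeline(name, inputPipeline string, options *osbuild.TarStageOptions) *osbuild.Pipeline {
        p := new(osbuild.Pipeline)
        p.Name = name
        p.AddStage(osbuild.NewTarStage(options, inputPipeline))
        return p
    }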
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `tar` stage options contain three boolean values. All of them
default to `true` in the osbuild stage implementation [1]. However, if
these values were explicitly set to `false`, they would be omitted from
the resulting JSON structure. As a result, it was impossible to use any
non-default values.
Use `*bool` instead of `bool` to ensure that explicitly set `false`
values end up in the JSON structure passed to osbuild.
[1] 8102f20d23/stages/org.osbuild.tar (L39-L53)
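A minimal sketch of the fix; the option names follow the osbuild tar
stage, but the exact Go struct is an assumption:

    type TarStageOptions struct {
        Filename string `json:"filename"`
        // With plain bool and omitempty, an explicit false is dropped from
        // the JSON; *bool keeps it, and nil means "use osbuild's default".
        ACLs    *bool `json:"acls,omitempty"`
        SELinux *bool `json:"selinux,omitempty"`
        Xattrs  *bool `json:"xattrs,omitempty"`
    }

    // helper to take the address of a bool literal
    func boolPtr(b bool) *bool { return &b }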
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The jobs in the finish stage are only meant to report the overall
status of the pipeline; they do not need to download the hundreds of
artifacts from the previous stages.
Add depsolve job error dependency test cases for regular composes and
koji composes. The error furthest up the chain should be returned in
the details field of the job error.
If an osbuild or koji-osbuild job has failed, add a check to see
whether the failure is a result of the build job's dependencies, return
the dependency failure job error furthest up the chain of errors, and
add this error to the details field of the build job error.
Add a helper function to query dependency failures of osbuild and
koji-osbuild jobs. If a build job has a dependency error, the function
checks the job error of the manifest job. If that also has a dependency
error, the function queries the depsolve job for a job error as well.
Add a helper function to check job errors for dependency errors. It
simply returns true if a job error has a dependency error code and
false otherwise.
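A simplified sketch of the two helpers, assuming a `clienterrors`-style
job error type; the type, constant, and function names are
illustrative:

    // hasDependencyError reports whether a job error carries a dependency
    // error code.
    func hasDependencyError(err *clienterrors.Error) bool {
        return err != nil && err.ID == clienterrors.ErrorJobDependency
    }

    // buildDependencyError walks build -> manifest -> depsolve and returns
    // the job error furthest up the chain of dependency failures.
    func buildDependencyError(buildErr, manifestErr, depsolveErr *clienterrors.Error) *clienterrors.Error {
        if !hasDependencyError(buildErr) {
            return buildErr
        }
        if !hasDependencyError(manifestErr) {
            return manifestErr
        }
        return depsolveErr
    }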
Specify a size for the root filesystem in the partition table, which
effectively acts as a minimum size. In practice, all image types
specify a larger image size, so for plain layouts the root filesystem
is enlarged beyond the specified size.
This does not change any existing manifests.
It also prepares for enabling automatic LVM conversion, since in that
case the root filesystem must have a size specified.
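A self-contained sketch of the minimum-size behaviour, with placeholder
types standing in for the real partition table code:

    type filesystem struct {
        Mountpoint string
        Size       uint64 // acts as a minimum size in bytes
    }

    // rootFilesystemSize grows the root filesystem to the requested image
    // size for plain layouts, but never shrinks it below the minimum.
    func rootFilesystemSize(imageSize uint64, root filesystem) uint64 {
        if imageSize > root.Size {
            return imageSize
        }
        return root.Size
    }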
jobimpl-osbuild
---------------
Add GenericS3Creds to the struct
Add a method to create an AWS client configured with an endpoint for generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to use the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint on the AWS session
Set s3ForcePathStyle to true if an endpoint was set (see the sketch after this section)
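A minimal sketch of the endpoint handling with aws-sdk-go v1; the
wrapper name newSessionWithEndpoint is illustrative:

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    func newSessionWithEndpoint(region, endpoint, key, secret string) (*session.Session, error) {
        cfg := aws.Config{
            Region:      aws.String(region),
            Credentials: credentials.NewStaticCredentials(key, secret, ""),
        }
        if endpoint != "" {
            cfg.Endpoint = aws.String(endpoint)
            // Generic S3 services usually do not support virtual-host-style
            // bucket addressing, so force path-style requests.
            cfg.S3ForcePathStyle = aws.Bool(true)
        }
        return session.NewSession(&cfg)
    }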
Target
------
Define a new target type, GenericS3Target, and its options (sketched below)
Handle unmarshaling of the target options and result for the Generic S3
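An illustrative sketch of what the new target options could look like;
all field names are assumptions:

    type GenericS3TargetOptions struct {
        Endpoint        string `json:"endpoint"`
        Region          string `json:"region"`
        AccessKeyID     string `json:"accessKeyID"`
        SecretAccessKey string `json:"secretAccessKey"`
        Bucket          string `json:"bucket"`
        Key             string `json:"key"`
    }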
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of the upload settings in the provider settings
main
----
Add a section in the main config for the Generic S3 service credentials
If provided, pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
--------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLab CI
---------
Extend the libvirt test case to include the two new tests