Add support for importing the GCE image into GCP using the Weldr API.
The credentials to be used can be specified in the upload settings and
will then be used by the worker to authenticate with GCP.
The GCP target credentials are passed to the Weldr API as the base64
encoded content of the GCP credentials JSON file. The reason is that the
JSON file contains many values and its format could change in the
future. This way, the Weldr API does not depend on the format of the
credentials file content in any way.
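A minimal sketch of how a client could prepare the credentials for the
upload settings (file name and printing are illustrative only):
```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Read the service account credentials JSON downloaded from GCP.
	creds, err := os.ReadFile("gcp-credentials.json")
	if err != nil {
		panic(err)
	}
	// The Weldr API receives the file content base64-encoded, so it
	// never needs to parse or understand the JSON structure itself.
	fmt.Println(base64.StdEncoding.EncodeToString(creds))
}
```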
Add a new test case for the GCP upload via Weldr and run it in CI.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Refactor the handling of GCP credentials in the worker to be equivalent
to what is done for AWS. The main idea is that the code decides which
credentials to use when processing each job. This change will allow
preferring credentials passed with the job via the upload
`TargetOptions` over the credentials configured in the worker's
configuration or the default authentication implemented by the Google
library.
Move the loading of GCP credentials into the internal `gcp` library's
new `NewFromFile()` function, which accepts a path to the credentials
file.
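A sketch of the selection order described above; the helper name and
the `gcp.New()` constructor are assumptions based on this change, not
verified signatures:
```go
func resolveGCPCredentials(jobCreds []byte, workerCredsPath string) (*gcp.GCP, error) {
	switch {
	case len(jobCreds) > 0:
		// 1. Prefer credentials passed with the job via TargetOptions.
		return gcp.New(jobCreds)
	case workerCredsPath != "":
		// 2. Fall back to the worker's configured credentials file.
		return gcp.NewFromFile(workerCredsPath)
	default:
		// 3. Let the Google library apply its default authentication
		//    (e.g. Application Default Credentials).
		return gcp.New(nil)
	}
}
```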
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify the Cloud API test case for GCP to use `gcloud` and GCP guest
tools installed in the image to connect to the VM instance over SSH.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Delete all internal `cloud/gcp` APIs related to importing virtual images
into GCP using the Cloud Build API. These are no longer needed.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Introduce a new `ComputeImageInsert()` method for importing images into
GCP. It uses the `compute.Images.Insert()` API [1], which has several
advantages over the current approach of importing images via the
CloudBuild API. Mainly, the image is imported as-is and no additional
cache files or VMs are created as part of the import process, so there
is no need for additional cleanup of cache files after the import.
In addition, the import itself is approximately 30% faster for RHEL
images when using the `Insert()` call.
Nevertheless, the `Insert()` call accepts only a gzipped tarball
containing a RAW disk, unlike the `Import()` call, which accepts
basically any virtual disk format.
[1] https://cloud.google.com/compute/docs/reference/rest/v1/images/insert
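A minimal sketch of an image import via `Images.Insert()` from
`google.golang.org/api/compute/v1`; the actual `ComputeImageInsert()`
signature and error handling in this codebase may differ:
```go
package gcp

import (
	"context"

	compute "google.golang.org/api/compute/v1"
)

func insertImage(ctx context.Context, project, name, tarballURL string) error {
	service, err := compute.NewService(ctx)
	if err != nil {
		return err
	}
	image := &compute.Image{
		Name: name,
		// Insert() accepts only a gzipped tarball with a RAW disk,
		// referenced by its Cloud Storage URL.
		RawDisk: &compute.ImageRawDisk{Source: tarballURL},
	}
	// The returned operation could be polled for completion; omitted here.
	_, err = service.Images.Insert(project, image).Do()
	return err
}
```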
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the `gce-rhui` image type intended for Google Compute Engine. The image
uses Google's RHUI infrastructure to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the `gce` image type intended for Google Compute Engine. The image
is BYOS (bring your own subscription) and requires registering in order
to access Red Hat content.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the RHEL-84 `imageTypeS2` structure to contain a pipelines
generator function. Previously, the `imageTypeS2` implementation
defaulted to a single pipelines generator method used for EDGE image
types. The ability to pass a different generator function implementation
is important to enable the addition of new image types relying on
osbuild Manifest v2.
Rename the original pipeline generator method to `edgePipelines`.
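A rough sketch of the new shape, assuming simplified types; the real
generator signature in the distro code takes more parameters:
```go
// pipelinesFunc generates the osbuild Manifest v2 pipelines for an
// image type.
type pipelinesFunc func(t *imageTypeS2) ([]osbuild.Pipeline, error)

type imageTypeS2 struct {
	name string
	// pipelines is set per image type; EDGE image types set it to the
	// renamed edgePipelines, new GCE types can supply their own.
	pipelines pipelinesFunc
}
```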
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Filter the list of repositories passed in the compose request based on
the `image_type_tags` object member. This is the same approach used by
the Weldr API. If `image_type_tags` does not exist, the repo is added
to the list. If it exists, the repo is added only if the image type
name is in the tags array.
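A sketch of the filtering logic; `RepoConfig` here is a simplified
stand-in for the actual repository type:
```go
type RepoConfig struct {
	Name          string
	ImageTypeTags []string
}

func filterRepos(repos []RepoConfig, imageType string) []RepoConfig {
	var filtered []RepoConfig
	for _, repo := range repos {
		// No tags: the repository applies to all image types.
		if len(repo.ImageTypeTags) == 0 {
			filtered = append(filtered, repo)
			continue
		}
		// Tags present: include the repository only if the image
		// type name is listed in them.
		for _, tag := range repo.ImageTypeTags {
			if tag == imageType {
				filtered = append(filtered, repo)
				break
			}
		}
	}
	return filtered
}
```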
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Introduce the `tarArchivePipeline()` function returning a pipeline,
which creates a tar archive from another pipeline tree referenced by
the pipeline name.
Replace `tarStage()` with `osbuild.NewTarStage()`.
Use the `tarArchivePipeline()` function in the respective image type
pipelines.
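A hedged sketch of the helper; the exact osbuild pipeline types and the
`NewTarStage()` signature are simplified assumptions:
```go
func tarArchivePipeline(name, filename, inputPipeline string) osbuild.Pipeline {
	p := osbuild.Pipeline{Name: name}
	// The tar stage consumes the tree produced by another pipeline,
	// referenced by that pipeline's name.
	p.AddStage(osbuild.NewTarStage(
		&osbuild.TarStageOptions{Filename: filename},
		inputPipeline,
	))
	return p
}
```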
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `tar` stage options contain three boolean values. All of them
default to `true` in the osbuild stage implementation [1]. However, if
these values were explicitly set to `false`, they would be omitted from
the resulting JSON structure, because `false` is the zero value of Go's
`bool` and is dropped by `omitempty`. As a result, it was impossible to
use any non-default values.
Use `*bool` instead of `bool` to ensure that explicitly set `false`
values end up in the JSON structure passed to osbuild.
[1] 8102f20d23/stages/org.osbuild.tar (L39-L53)
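A self-contained illustration of the problem (field names are
illustrative): with `omitempty`, an explicit `false` is
indistinguishable from "unset" and is dropped, while a `*bool` keeps it.
```go
package main

import (
	"encoding/json"
	"fmt"
)

type oldOptions struct {
	ACLs bool `json:"acls,omitempty"` // false is the zero value -> omitted
}

type newOptions struct {
	ACLs *bool `json:"acls,omitempty"` // nil -> omitted, &false -> kept
}

func main() {
	o1, _ := json.Marshal(oldOptions{ACLs: false})
	fmt.Println(string(o1)) // {}

	f := false
	o2, _ := json.Marshal(newOptions{ACLs: &f})
	fmt.Println(string(o2)) // {"acls":false}
}
```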
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The jobs in the finish stage are only meant to report the overall
status of the pipeline; they do not need to download the hundreds of
artifacts from the previous stages.
Add depsolve job error dependency test cases for regular composes and
koji composes. The error furthest up the chain should be returned in
the details field of the job error.
If an osbuild or koji-osbuild job has failed, add a check to see
whether the failure is a result of the build job's dependencies, return
the dependency failure job error furthest up the chain of errors, and
add this error to the details field of the build job error.
Add a helper function to query dependency failures of osbuild and
koji-osbuild jobs. If a build job has a dependency error, the function
checks the job error of the manifest job. If that also has a dependency
error, the function queries the depsolve job for a job error as well.
Add a helper function to check job errors for dependency errors. It
simply returns true if a job error has a dependency error code and
false otherwise.
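A hypothetical sketch of the two helpers described above; `JobError`
and the dependency error code are illustrative stand-ins for the actual
worker types:
```go
type JobError struct {
	Code    string
	Details interface{}
}

const errorJobDependency = "JobDependency"

// hasDependencyError returns true if the job error has a dependency
// error code and false otherwise.
func hasDependencyError(e *JobError) bool {
	return e != nil && e.Code == errorJobDependency
}

// buildJobErrorDetails walks the chain osbuild -> manifest -> depsolve
// and returns the job error furthest up the chain.
func buildJobErrorDetails(osbuildErr, manifestErr, depsolveErr *JobError) *JobError {
	if !hasDependencyError(osbuildErr) {
		return osbuildErr
	}
	if hasDependencyError(manifestErr) && depsolveErr != nil {
		return depsolveErr
	}
	if manifestErr != nil {
		return manifestErr
	}
	return osbuildErr
}
```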
Specify a size for the root filesystem in the partition table, which
effectively acts as a minimum size. In reality, all image types specify
a larger image size, so for plain layouts the root filesystem is
enlarged beyond the specified size.
This does not change any existing manifests.
This also prepares for enabling automatic LVM conversion, which
requires the root filesystem size to be specified.
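An illustrative sketch of the sizing rule, using simplified stand-in
types rather than the actual partition table code:
```go
type Partition struct {
	Mountpoint string
	Size       uint64 // bytes; behaves as a minimum size
}

// rootPartitionSize returns the size the root partition gets for a
// plain layout: the configured minimum, enlarged to fill the image.
func rootPartitionSize(minSize, imageSize uint64) uint64 {
	if imageSize > minSize {
		return imageSize
	}
	return minSize
}
```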