After introducing Go 1.18 to a project, it's required by law to convert at
least one method to a generic one.
Everyone hates IntToPtr, StringToPtr, BoolToPtr and Uint64ToPtr, so let's
consolidate them into the ultimate generic ToPtr.
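A minimal sketch of the resulting helper using Go 1.18 type parameters (the
`common` package name here is an assumption):

```go
package common

// ToPtr returns a pointer to any value, replacing the four
// type-specific *ToPtr helpers with one generic function.
func ToPtr[T any](v T) *T {
	return &v
}
```

Call sites shrink from `StringToPtr("x")` to `ToPtr("x")`, with the type
parameter inferred from the argument.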
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
At the moment we have duplicate logic here; ideally, of course, we would
consolidate (since both codebases are Go, perhaps we could create a tiny
little Go library for "RHEL GCP stuff"?), but for now let's just
cross-link for awareness.
The `api.sh` test case uses a random GCE zone from a random GCE region
whose name starts with the value of the `GCP_REGION` CI environment
variable. Since the used region name is not known to the `cloud-cleaner`,
it has to iterate over all potential GCE regions and their zones. We
cannot simply filter a list of instances by the VM instance name, because
every `instances` API call requires a zone name to be provided.
Add a new internal `cloud/gcp` package method to list existing GCE
regions based on a provided filter.
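A sketch of how such a method could look with the
`cloud.google.com/go/compute/apiv1` client (the function name, shape and
credential handling are assumptions):

```go
package gcp

import (
	"context"

	compute "cloud.google.com/go/compute/apiv1"
	"cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/proto"
)

// computeRegionsList returns the names of GCE regions matching the given
// filter expression, e.g. `name:us-*`.
func computeRegionsList(ctx context.Context, project, filter string) ([]string, error) {
	client, err := compute.NewRegionsRESTClient(ctx)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	it := client.List(ctx, &computepb.ListRegionsRequest{
		Project: project,
		Filter:  proto.String(filter),
	})

	var regions []string
	for {
		region, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, err
		}
		regions = append(regions, region.GetName())
	}
	return regions, nil
}
```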
Delete all internal `cloud/gcp` APIs related to importing virtual images
into GCP using the Cloud Build API. These are no longer needed.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Introduce a new `ComputeImageInsert()` method for importing images into
GCP. It uses the `compute.Images.Insert()` API [1], which has several
advantages over the current approach of importing images via the
Cloud Build API. Chief among them: the image is imported as-is, and no
additional cache files or VMs are created as part of the import process,
so there is no need for extra cleanup of cache files after importing the
image.
In addition, the import itself is approximately 30% faster for RHEL
images when using the `Insert()` call.
Nevertheless, the `Insert()` call accepts only a gzipped tarball
containing a RAW disk, unlike the `Import()` call, which accepts
basically any virtual disk format.
[1] https://cloud.google.com/compute/docs/reference/rest/v1/images/insert
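A minimal sketch of such an import via `images.insert` [1] using the
`cloud.google.com/go/compute/apiv1` client (the real
`ComputeImageInsert()` signature and option plumbing are assumptions):

```go
package gcp

import (
	"context"
	"fmt"

	compute "cloud.google.com/go/compute/apiv1"
	"cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/protobuf/proto"
)

// computeImageInsert imports a gzipped tarball containing a RAW disk
// (disk.raw) from Cloud Storage as a new Compute Engine image.
func computeImageInsert(ctx context.Context, project, bucket, object, imageName string) error {
	client, err := compute.NewImagesRESTClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	op, err := client.Insert(ctx, &computepb.InsertImageRequest{
		Project: project,
		ImageResource: &computepb.Image{
			Name: proto.String(imageName),
			RawDisk: &computepb.RawDisk{
				Source: proto.String(fmt.Sprintf("https://storage.googleapis.com/%s/%s", bucket, object)),
			},
		},
	})
	if err != nil {
		return err
	}
	// Insert is a long-running operation; wait for it to complete.
	return op.Wait(ctx)
}
```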
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The internal GCP package used `pkg.go.dev/google.golang.org/api` [1] to
interact with the Compute Engine API. Modify the package to use the new,
idiomatic `pkg.go.dev/cloud.google.com/go` [2] library for interacting
with the Compute Engine API. The new library was already being used to
interact with the Cloud Build and Storage APIs. It was not used for
Compute Engine from the beginning because, at that time, it did not
support Compute Engine.
Update go.mod and vendored packages.
[1] https://github.com/googleapis/google-api-go-client
[2] https://github.com/googleapis/google-cloud-go
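An illustrative before/after of client construction (function names are
placeholders; the repo's credential plumbing may differ):

```go
package gcp

import (
	"context"

	compute "cloud.google.com/go/compute/apiv1"
	"google.golang.org/api/option"
	computev1 "google.golang.org/api/compute/v1"
)

// Old style [1]: one service handle covering the whole Compute Engine API.
func oldImagesService(ctx context.Context, credsJSON []byte) (*computev1.ImagesService, error) {
	svc, err := computev1.NewService(ctx, option.WithCredentialsJSON(credsJSON))
	if err != nil {
		return nil, err
	}
	return svc.Images, nil
}

// New style [2]: a dedicated client per resource type.
func newImagesClient(ctx context.Context, credsJSON []byte) (*compute.ImagesClient, error) {
	return compute.NewImagesRESTClient(ctx, option.WithCredentialsJSON(credsJSON))
}
```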
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `api.sh` test currently always defaults to the "<REGION>-a" zone when
creating an instance from the built image. The resources in a zone may
get exhausted, and the solution is to use a different zone. Currently,
not even a CI job retry can mitigate such an error during a CI run.
Modify `api.sh` to pick a random GCP zone for the given region when
creating a compute instance. Use only GCP zones which are "UP".
The `cloud-cleaner` relied on `api.sh` always choosing the "<REGION>-a"
zone. Guessing the chosen zone in `cloud-cleaner` is not viable, but
thankfully the instance name is by default unique within the whole GCP
project. Modify `cloud-cleaner` to iterate over all available zones in
the used region and try to delete the specific instance in each of them.
Export the `ComputeZonesInRegion` method from the `internal/cloud/gcp`
package and use it in `cloud-cleaner` to get the list of available zones
in a region, as sketched below.
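A sketch of the resulting cloud-cleaner logic (the module import path,
the `gcp.GCP` receiver and the `ComputeInstanceDelete` helper name are
assumptions based on this series):

```go
package main

import (
	"context"
	"log"

	// Import path assumed for illustration.
	"github.com/osbuild/osbuild-composer/internal/cloud/gcp"
)

// deleteInstanceAnywhereInRegion tries every zone of the region, since
// the zone picked by api.sh is not known. The instance name is unique
// within the project, so at most one delete will actually find it.
func deleteInstanceAnywhereInRegion(ctx context.Context, g *gcp.GCP, region, instanceName string) {
	zones, err := g.ComputeZonesInRegion(ctx, region)
	if err != nil {
		log.Fatalf("failed to list zones in %s: %v", region, err)
	}
	for _, zone := range zones {
		// "not found" errors are expected in all but (at most) one zone.
		if err := g.ComputeInstanceDelete(ctx, zone, instanceName); err != nil {
			log.Printf("could not delete %q in zone %s: %v", instanceName, zone, err)
		}
	}
}
```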
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The GCP image import method currently uses the Cloud Build API with
Google's Daisy workflow. This workflow creates multiple GCE resources
during its execution. Although the desired Region for the imported image
is specified as a workflow argument, this has no effect on the GCE Zone
used by the workflow for the created resources; it appears to default to
the "us-central1-a" Zone. As a result, resources commonly get exhausted
in the default Zone.
Add a method which translates a provided Google Storage Region to the
corresponding GCE Regions; this is needed mainly for multi and dual
Storage Regions.
Add a method which returns the list of available GCE Zones for a given
GCE Region.
Modify the ComputeImageImport() method to translate the provided Google
Storage Region to a list of corresponding GCE Regions. If the provided
Storage Region is not a multi or dual Region, the list contains only a
single item: the provided Region. Pick a random Region from the list,
then get the available GCE Zones within that Region and pick a random
one for the workflow to use. Specify the chosen GCE Zone as a build step
argument, as sketched below.
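A sketch of the selection logic described above (the helper names are
assumptions; the `GCP` receiver type, the package's client wrapper, is
elided):

```go
package gcp

import (
	"context"
	"math/rand"
)

// pickImportZone picks a random usable GCE Zone for the import workflow.
// The two helpers it calls stand in for the methods added by this commit.
func (g *GCP) pickImportZone(ctx context.Context, storageRegion string) (string, error) {
	// Multi and dual Storage Regions translate to several GCE Regions;
	// a plain Region translates to a single-item list.
	regions, err := g.storageRegionToComputeRegions(ctx, storageRegion)
	if err != nil {
		return "", err
	}
	region := regions[rand.Intn(len(regions))]

	zones, err := g.ComputeZonesInRegion(ctx, region)
	if err != nil {
		return "", err
	}
	return zones[rand.Intn(len(zones))], nil
}
```

The chosen Zone is then passed to the Daisy workflow as a build step
argument (the exact argument syntax is not shown here).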
This change should be completely transparent to the API user.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add a method to fetch the Cloud Build job log.
Add a method to parse the Cloud Build job log for created resources. The
parsing is specific to the image import Cloud Build job and its log
format. Add unit tests for the parsing function.
Add a method to clean up all resources (instances, disks, storage
objects) left behind by a Cloud Build job.
Modify the worker osbuild job implementation and the GCP upload CLI tool
to use the new cleanup method CloudbuildBuildCleanup().
Keep the StorageImageImportCleanup() method, because it is still used by
the cloud-cleaner tool. There is no way for the cloud-cleaner to
determine the Cloud Build job ID needed to call CloudbuildBuildCleanup()
instead.
Add methods to delete a Compute instance and disk.
Add a method to get Compute instance information. This is useful for
checking whether the instance has already been deleted or whether it
still exists.
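A sketch of the instance-info lookup with the
`cloud.google.com/go/compute/apiv1` client (the wrapper name is an
assumption; the real method would likely also distinguish a "not found"
error from other failures):

```go
package gcp

import (
	"context"

	compute "cloud.google.com/go/compute/apiv1"
	"cloud.google.com/go/compute/apiv1/computepb"
)

// computeInstanceGet fetches the instance; a "not found" error from the
// API indicates the instance has already been deleted.
func computeInstanceGet(ctx context.Context, project, zone, name string) (*computepb.Instance, error) {
	client, err := compute.NewInstancesRESTClient(ctx)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	return client.Get(ctx, &computepb.GetInstanceRequest{
		Project:  project,
		Zone:     zone,
		Instance: name,
	})
}
```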
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify all relevant methods in the internal GCP library to accept a
context from the caller.
Modify all call sites of the internal GCP library methods to pass the
context.
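The shape of the change, illustrated on a simplified storage helper (the
`GCP` wrapper and its field name here are assumptions):

```go
package gcp

import (
	"context"

	"cloud.google.com/go/storage"
)

// GCP is a simplified stand-in for the package's client wrapper.
type GCP struct {
	storageClient *storage.Client
}

// Before: the method created its own context internally.
func (g *GCP) storageObjectDeleteOld(bucket, object string) error {
	ctx := context.Background()
	return g.storageClient.Bucket(bucket).Object(object).Delete(ctx)
}

// After: the caller supplies the context, so it can carry deadlines and
// cancellation across the whole operation.
func (g *GCP) StorageObjectDelete(ctx context.Context, bucket, object string) error {
	return g.storageClient.Bucket(bucket).Object(object).Delete(ctx)
}
```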
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the internal GCP library to allow deleting a Compute Engine image
and instance. In addition, provide a function to load the service
account credentials file content from the environment.
Change the names used for the GCP image and instance in the `api.sh`
integration test to make them predictable. This is important so that
cloud-cleaner can identify potentially left-over resources and clean
them up. Use the same approach for generating a predictable but
run-specific test ID as in GenerateCIArtifactName() from
internal/test/helpers.go. Use SHA-224 to generate a hash from the
string, because the string can contain characters not allowed by GCP in
resource names (specifically "_", e.g. in "x86_64"). SHA-224 was picked
because it generates short enough output and is future-proof for use in
RHEL (unlike MD5 or SHA-1).
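A sketch of the derivation (the seed composition is an assumption):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// testID hashes a run-specific seed (which may contain characters GCP
// forbids in resource names, such as the "_" in "x86_64") into a safe,
// predictable hex string.
func testID(seed string) string {
	return fmt.Sprintf("%x", sha256.Sum224([]byte(seed)))
}

func main() {
	fmt.Println(testID("rhel-8-x86_64-1234")) // 56 hex characters
}
```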
Refactor cloud-cleaner to clean up GCP resources and to run the cleanup
for each cloud in a separate goroutine, as sketched below.
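A sketch of the per-cloud fan-out (the cleanup functions passed in are
placeholders):

```go
package main

import "sync"

// runCleanups executes each cloud's cleanup in its own goroutine and
// waits for all of them to finish.
func runCleanups(cleanups ...func()) {
	var wg sync.WaitGroup
	for _, cleanup := range cleanups {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			f()
		}(cleanup)
	}
	wg.Wait()
}
```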
Modify run_cloud_cleaner.sh to be able to run in an environment in which
AZURE_CREDS is not defined.
Always run cloud-cleaner after the integration tests for rhel8, rhel84
and cs8, which test GCP.
Define DISTRO_CODE for each integration testing stage in the Jenkinsfile.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Split the GCP library into multiple files:
- compute.go - code interacting mainly with the Compute Engine resources
- storage.go - code interacting mainly with the Cloud Storage resources
- gcp.go - common code (e.g. authentication with GCP)
Signed-off-by: Tomas Hozza <thozza@redhat.com>