Using the group names option only works for the default VPC, but the
workers are not running in the default VPC. For non-default VPCs,
filters should be used instead.
InstanceRequirements is very flaky: the create fleet request fails
almost every time with the same error.
To continue with testing, use a fixed instance type for now. As a
follow-up we can expand the instance type selection logic or figure out
what was wrong with InstanceRequirements.
For non-default VPCs, AWS needs to be told which subnets it can launch
the instance in; otherwise it will try to launch the instance in the
default VPC, even if the supplied security groups are attached to a
non-default VPC. Furthermore, only one subnet can be specified per
availability zone, so query the subnets in the VPC of the host (as the
instance needs to be launched in the same network) and pick one of the
VPC's subnets per AZ.
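Below is a minimal sketch of that per-AZ subnet selection, assuming the
aws-sdk-go-v2 EC2 client; the helper name and the wiring into the
CreateFleet launch overrides are illustrative, not the actual worker
code:

```go
import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
    ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// subnetPerAZ returns at most one subnet ID per availability zone from the
// host's VPC; these can then be used as launch overrides in CreateFleet.
func subnetPerAZ(ctx context.Context, client *ec2.Client, vpcID string) (map[string]string, error) {
    out, err := client.DescribeSubnets(ctx, &ec2.DescribeSubnetsInput{
        Filters: []ec2types.Filter{
            {Name: aws.String("vpc-id"), Values: []string{vpcID}},
        },
    })
    if err != nil {
        return nil, err
    }
    perAZ := map[string]string{}
    for _, subnet := range out.Subnets {
        az := aws.ToString(subnet.AvailabilityZone)
        if _, ok := perAZ[az]; !ok {
            perAZ[az] = aws.ToString(subnet.SubnetId)
        }
    }
    return perAZ, nil
}
```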
SEV-SNP support was only added in RHEL-9.1, so we need to keep the
original Guest OS Feature set when importing RHEL-9.0 images to GCP.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
UBI and the oldest supported Fedora (37) now all have Go 1.19, so we are
cleared to switch.
gofmt now reformats comments in certain cases, which explains the
formatting changes in this commit.
See https://go.dev/doc/go1.19#go-doc
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Some uses of the `cloudbuild` GCP API were left in our internal cloud
API implementation for GCP. We no longer use `cloudbuild` to import GCE
images into GCP.
Do not request the `cloudbuild` authentication scope when getting a new
GCP client.
Update vendored packages accordingly.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
When the AMI is being registered from a snapshot, the caller can
optionally specify the boot mode of the AMI. If no boot mode is
specified, the default behavior is to use the boot mode of the instance
that is launched from the AMI.
The default behavior (no boot mode specified) is preserved after this
change.
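A rough sketch of the registration call, assuming the aws-sdk-go-v2 EC2
client; the helper, device names and defaults are illustrative:

```go
import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
    ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// registerImage registers an AMI from an EBS snapshot. The boot mode is
// optional; leaving it unset keeps the default behavior described above.
func registerImage(ctx context.Context, client *ec2.Client, name, snapshotID string, bootMode ec2types.BootModeValues) (*string, error) {
    input := &ec2.RegisterImageInput{
        Name:               aws.String(name),
        Architecture:       ec2types.ArchitectureValuesX8664,
        VirtualizationType: aws.String("hvm"),
        RootDeviceName:     aws.String("/dev/sda1"),
        BlockDeviceMappings: []ec2types.BlockDeviceMapping{{
            DeviceName: aws.String("/dev/sda1"),
            Ebs:        &ec2types.EbsBlockDevice{SnapshotId: aws.String(snapshotID)},
        }},
    }
    if bootMode != "" {
        input.BootMode = bootMode // e.g. ec2types.BootModeValuesUefi
    }
    out, err := client.RegisterImage(ctx, input)
    if err != nil {
        return nil, err
    }
    return out.ImageId, nil
}
```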
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are simple name changes; the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns []os.DirEntry, but the IsDir and Name methods work the
same. The difference is that the FileInfo must be retrieved with the
Info() method, which can also return an error.
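For illustration, a small example of the os.ReadDir change (the
directory walk itself is made up):

```go
import (
    "fmt"
    "os"
)

func printFileSizes(dir string) error {
    entries, err := os.ReadDir(dir) // []os.DirEntry instead of []os.FileInfo
    if err != nil {
        return err
    }
    for _, e := range entries {
        if e.IsDir() { // IsDir and Name work as before
            continue
        }
        info, err := e.Info() // FileInfo now needs an extra call that can fail
        if err != nil {
            return err
        }
        fmt.Printf("%s: %d bytes\n", e.Name(), info.Size())
    }
    return nil
}
```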
These were identified by running:
golangci-lint run --build-tags=integration ./...
After introducing Go 1.18 to a project, it's required by law to convert at
least one method to a generic one.
Everyone hates IntToPtr, StringToPtr, BoolToPtr and Uint64ToPtr, so let's
convert them to the ultimate generic ToPtr one.
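The generic helper boils down to roughly this (exact package placement
aside):

```go
// ToPtr returns a pointer to any value, replacing the old IntToPtr,
// StringToPtr, BoolToPtr and Uint64ToPtr helpers.
func ToPtr[T any](v T) *T {
    return &v
}

// Usage:
//   name := ToPtr("my-image")
//   size := ToPtr(uint64(2048))
```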
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
At the moment we have duplicate logic here; ideally of course
we consolidate (since both codebases are Go, perhaps we could
create a tiny little Go library for "RHEL GCP stuff"?) but
for now let's just cross-link for awareness.
Setting the object's ACL to "public-read" allows anyone to download the
object even without authenticating with AWS.
The osbuild-upload-generic-s3 command got a new -public argument that
uses this new feature.
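A minimal sketch of the upload with the new flag, assuming the
aws-sdk-go (v1) s3manager uploader; the helper is illustrative, not the
actual osbuild-upload-generic-s3 code:

```go
import (
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadPublic uploads a file and, when public is true, marks the object
// as "public-read" so it can be downloaded without AWS credentials.
func uploadPublic(sess *session.Session, bucket, key, path string, public bool) (string, error) {
    f, err := os.Open(path)
    if err != nil {
        return "", err
    }
    defer f.Close()

    input := &s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   f,
    }
    if public {
        input.ACL = aws.String("public-read")
    }
    out, err := s3manager.NewUploader(sess).Upload(input)
    if err != nil {
        return "", err
    }
    return out.Location, nil
}
```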
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
API
---
Allow the user to pass the CA public certificate or skip the verification
AWSCloud
--------
Restore the old version of newAwsFromCreds for access to AWS
Create a new method newAwsFromCredsWithEndpoint for Generic S3 which sets the endpoint and optionally overrides the CA bundle or skips the SSL certificate verification (a sketch of the TLS handling follows at the end of this message)
jobimpl-osbuild
---------------
Update with the new parameters
osbuild-upload-generic-s3
-------------------------
Add ca-bundle and skip-ssl-verification flags
tests
-----
Split the tests into HTTP, HTTPS with a certificate, and HTTPS with the certificate check skipped
Create a new base test for S3 over HTTPS, covering both secure and insecure connections
Move the generic S3 test to tools so it can be reused for secure and insecure connections
All S3 tests now use the aws cli tool
Update the libvirt test to be able to download over HTTPS
Update the RPM spec
Kill container with sudo
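A sketch of the TLS handling behind the new options, using only the
standard library; the helper name and argument names are illustrative:

```go
import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net/http"
    "os"
)

// httpClientForS3 builds the HTTP client handed to the AWS session:
// trust an extra CA bundle, or skip certificate verification entirely.
func httpClientForS3(caBundle string, skipSSLVerification bool) (*http.Client, error) {
    tlsConfig := &tls.Config{InsecureSkipVerify: skipSSLVerification}
    if caBundle != "" {
        pem, err := os.ReadFile(caBundle)
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pem) {
            return nil, fmt.Errorf("no certificates found in %s", caBundle)
        }
        tlsConfig.RootCAs = pool
    }
    return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}, nil
}
```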
The `api.sh` test case uses a random GCE zone from a random GCE region
whose name starts with the value of the `GCP_REGION` CI environment
variable. Since the used region name is not known to the
`cloud-cleaner`, it has to iterate over all potential GCE regions and
their zones. We cannot simply filter a list of instances by the VM
instance name, because any `instances` API call requires a zone name to
be provided.
Add a new internal `cloud/gcp` package method to list existing GCE
regions based on a provided filter.
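A minimal sketch of such a listing method, assuming the
cloud.google.com/go/compute/apiv1 client already used by the internal
GCP package (the function name, filter syntax and computepb import path
depend on the library version and are illustrative):

```go
import (
    "context"

    compute "cloud.google.com/go/compute/apiv1"
    "cloud.google.com/go/compute/apiv1/computepb"
    "google.golang.org/api/iterator"
    "google.golang.org/protobuf/proto"
)

// listRegions returns the names of GCE regions matching the given filter,
// e.g. `name = "us-east*"`. An empty filter returns all regions.
func listRegions(ctx context.Context, project, filter string) ([]string, error) {
    client, err := compute.NewRegionsRESTClient(ctx)
    if err != nil {
        return nil, err
    }
    defer client.Close()

    req := &computepb.ListRegionsRequest{Project: project}
    if filter != "" {
        req.Filter = proto.String(filter)
    }

    var regions []string
    it := client.List(ctx, req)
    for {
        region, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return nil, err
        }
        regions = append(regions, region.GetName())
    }
    return regions, nil
}
```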
Refactor the handling of GCP credentials in the worker to be equivalent
to what is done for AWS. The main idea is that the code decides which
credentials to use when processing each job. This change will allow
preferring credentials passed with the job via the upload
`TargetOptions` over the credentials configured in the worker's
configuration or the default way of authenticating implemented by the
Google library.
Move the loading of GCP credentials into the internal `gcp` library,
into a `NewFromFile()` function accepting the path to the credentials
file.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Delete all internal `cloud/gcp` API related to importing virtual images
to GCP using the Cloud Build API. This API is no longer needed.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Introduce a new `ComputeImageInsert()` method for importing images into
GCP. It uses the `compute.Images.Insert()` API [1], which has many
advantages over the currently used way of importing images using the
CloudBuild API. The advantages are mainly that the image is imported as
is and no additional cache files or VMs are created as part of the
import process. Therefore there is no need to do additional cleanup of
cache files after importing the image.
In addition, the import itself is approximately 30% faster for RHEL
images when using the `Insert()` call.
Nevertheless, the `Insert()` call accepts only a gzipped tarball with a
RAW disk, unlike the `Import()` call, which accepts basically any
virtual disk format.
[1] https://cloud.google.com/compute/docs/reference/rest/v1/images/insert
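For reference, the core of the new import roughly looks like this with
the cloud.google.com/go/compute/apiv1 client (names and the Storage URL
format are illustrative, not the actual ComputeImageInsert() code):

```go
import (
    "context"
    "fmt"

    compute "cloud.google.com/go/compute/apiv1"
    "cloud.google.com/go/compute/apiv1/computepb"
    "google.golang.org/protobuf/proto"
)

// computeImageInsert creates a GCE image directly from a gzipped tarball
// with a RAW disk that was previously uploaded to a Storage bucket.
func computeImageInsert(ctx context.Context, project, name, bucket, object string) error {
    client, err := compute.NewImagesRESTClient(ctx)
    if err != nil {
        return err
    }
    defer client.Close()

    op, err := client.Insert(ctx, &computepb.InsertImageRequest{
        Project: project,
        ImageResource: &computepb.Image{
            Name: proto.String(name),
            RawDisk: &computepb.RawDisk{
                Source: proto.String(fmt.Sprintf("https://storage.googleapis.com/%s/%s", bucket, object)),
            },
        },
    })
    if err != nil {
        return err
    }
    // the image is imported as-is; no helper VMs or cache files are created
    return op.Wait(ctx)
}
```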
Signed-off-by: Tomas Hozza <thozza@redhat.com>
jobimpl-osbuild
---------------
Add GenericS3Creds to struct
Add method to create AWS with Endpoint for Generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint to the AWS session
Set s3ForcePathStyle to true if an endpoint was set (a sketch of the session setup follows at the end of this message)
Target
------
Define a new target type for the GenericS3Target and Options
Handle unmarshaling of the target options and result for the Generic S3
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of the upload settings in the provider settings
main
----
Add a section in the main config for the Generic S3 service credentials
If provided, pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLabCI - Extend the libvirt test case to include the two new tests
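A sketch of the awscloud endpoint wiring, assuming the aws-sdk-go (v1)
session API (helper and argument names are illustrative):

```go
import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
)

// newS3Session targets a generic S3-compatible service: the endpoint is set
// explicitly and path-style addressing is forced, because bucket-as-subdomain
// addressing usually does not work outside AWS.
func newS3Session(accessKey, secretKey, region, endpoint string) (*session.Session, error) {
    return session.NewSession(&aws.Config{
        Region:           aws.String(region),
        Endpoint:         aws.String(endpoint),
        Credentials:      credentials.NewStaticCredentials(accessKey, secretKey, ""),
        S3ForcePathStyle: aws.Bool(true),
    })
}
```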
Defaults according to https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config:
Defaults to a chain of credential providers to search for credentials in
environment variables, shared credential file, and EC2 Instance Roles.
If nothing is specified, fall back to whatever instance role is available.
The internal GCP package used `pkg.go.dev/google.golang.org/api` [1] to
interact with the Compute Engine API. Modify the package to use the new
and idiomatic `pkg.go.dev/cloud.google.com/go` [2] library for
interacting with the Compute Engine API. The new library has already
been used to interact with the Cloudbuild and Storage APIs. It was not
used for Compute Engine from the beginning because, at that time, it
did not support Compute Engine.
Update go.mod and vendored packages.
[1] https://github.com/googleapis/google-api-go-client
[2] https://github.com/googleapis/google-cloud-go
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `api.sh` test currently always defaults to the "<REGION>-a" zone
when creating an instance from the built image. The resources in a zone
may get exhausted and the solution is to use a different zone.
Currently, even a CI job retry won't help mitigate such an error during
a CI run.
Modify `api.sh` to pick a random GCP zone for a given region when
creating a compute instance. Use only GCP zones which are "UP".
The `cloud-cleaner` relied on the behavior of `api.sh` to always choose
the "<REGION>-a" zone. Guessing the chosen zone in `cloud-cleaner` is
not viable, but thankfully the instance name is by default unique for
the whole GCP project. Modify `cloud-cleaner` to iterate over all
available zones in the used region and try to delete the specific
instance in each of them.
Export the `ComputeZonesInRegion` method from the `internal/cloud/gcp`
package and use it in `cloud-cleaner` to get the list of available
zones in a region.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The `cloudbuildResourcesFromBuildLog()` function from the internal GCP
package could cause a panic while parsing the log of a Build job which
failed early and didn't create any Compute Engine resources. The
function relied on the `Regexp.FindStringSubmatch()` method always
returning a match when used on the build log. Indexing into the
resulting nil slice would cause a panic in `osbuild-worker`, such as:
Stack trace of thread 185316:
#0 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#1 0x0000564e5391fa1e runtime.sigfwdgo (osbuild-worker)
#2 0x0000564e5391e354 runtime.sigtrampgo (osbuild-worker)
#3 0x0000564e5393b953 runtime.sigtramp (osbuild-worker)
#4 0x00007f37e98e3b20 __restore_rt (libpthread.so.0)
#5 0x0000564e5393b5e1 runtime.raise (osbuild-worker)
#6 0x0000564e5391f5ea runtime.crash (osbuild-worker)
#7 0x0000564e53909306 runtime.fatalpanic (osbuild-worker)
#8 0x0000564e53908ca1 runtime.gopanic (osbuild-worker)
#9 0x0000564e53906b65 runtime.goPanicIndex (osbuild-worker)
#10 0x0000564e5420b36e github.com/osbuild/osbuild-composer/internal/cloud/gcp.cloudbuildResourcesFromBuildLog (osbuild-worker)
#11 0x0000564e54209ebb github.com/osbuild/osbuild-composer/internal/cloud/gcp.(*GCP).CloudbuildBuildCleanup (osbuild-worker)
#12 0x0000564e54b05a9b main.(*OSBuildJobImpl).Run (osbuild-worker)
#13 0x0000564e54b08854 main.main (osbuild-worker)
#14 0x0000564e5390b722 runtime.main (osbuild-worker)
#15 0x0000564e53939a11 runtime.goexit (osbuild-worker)
Add a unit test covering this scenario.
Make the `cloudbuildResourcesFromBuildLog()` function more robust so
that it does not blindly expect to find matches in the build log. As a
result, the `cloudbuildBuildResources` struct instance returned from
the function may be empty. Subsequently, make sure that the
`CloudbuildBuildCleanup()` method handles an empty
`cloudbuildBuildResources` instance correctly. Specifically, the
`storageCacheDir.bucket` may be an empty string and thus the bucket
won't exist. Ensure that this does not result in an infinite loop by
checking for `storage.ErrBucketNotExist` while iterating over the
bucket objects.
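Roughly, the fix amounts to checking the FindStringSubmatch result
before indexing into it; the pattern and helper below are illustrative,
not the actual parsing code:

```go
import (
    "regexp"
    "strings"
)

// hypothetical pattern matching `Creating disk "..."` lines in the build log
var diskRe = regexp.MustCompile(`Creating disk "([^"]+)"`)

func disksFromBuildLog(buildLog string) []string {
    var disks []string
    for _, line := range strings.Split(buildLog, "\n") {
        match := diskRe.FindStringSubmatch(line)
        if match == nil {
            // the build may have failed before creating any resources;
            // without this check, match[1] panics with an index out of range
            continue
        }
        disks = append(disks, match[1])
    }
    return disks
}
```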
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The GCP image import method currently uses the Cloud Build API with
Google's Daisy workflow. This workflow creates multiple GCE resources
during its execution. Although the desired Region for the imported
image is specified as a workflow argument, this has no effect on the
GCE Zone used by the workflow for created resources, which seems to
default to the "us-central1-a" Zone. As a result, there are common
cases of resources being exhausted in the default zone.
Add a method which translates the provided Google Storage Region to a
GCE Region, which is needed mainly for multi and dual Storage Regions.
Add a method which returns a list of available GCE Zones for a given
GCE Region.
Modify the ComputeImageImport() method to translate the provided Google
Storage Region to a list of corresponding GCE Regions. If the provided
Storage Region is not a multi or dual Region, then the list contains only
a single item, the provided Region. Then pick a random Region from the
list. Subsequently get available GCE Zones within the Region and pick a
random one for use by the workflow. Specify the GCE Zone to use as a
build step argument.
This change should be completely transparent to the API user.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add method to fetch Cloudbuild job log.
Add method to parse Cloudbuild job log for created resources. Parsing is
specific to the image import Cloudbuild job and its log format. Add
unit tests for the parsing function.
Add method to clean up all resources (instances, disks, storage objects)
after a Cloudbuild job.
Modify the worker osbuild job implementation and also the GCP upload CLI
tool to use the new cleanup method CloudbuildBuildCleanup().
Keep the StorageImageImportCleanup() method, because it is still used by
the cloud-cleaner tool. There is no way for the cloud-cleaner to figure
out the Cloudbuild job ID to be able to call CloudbuildBuildCleanup()
instead.
Add methods to delete Compute instance and disk.
Add method to get Compute instance information. This is useful for
checking whether the instance has already been deleted or whether it
still exists.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify all relevant methods in the internal GCP library to accept
context from the caller.
Modify all places which call the internal GCP library methods to pass
the context.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add StorageListObjectsByMetadata() to the internal GCP library. The
method allows searching a specific Storage bucket for objects based on
the provided metadata.
Extend cloud-cleaner to search for any Storage objects related to the
image import, using custom metadata set on the object. Delete all found
objects.
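A minimal sketch of the lookup with the cloud.google.com/go/storage
client (the function name and metadata key are illustrative):

```go
import (
    "context"

    "cloud.google.com/go/storage"
    "google.golang.org/api/iterator"
)

// listObjectsByMetadata returns the attributes of all objects in the bucket
// whose custom metadata contains the given key/value pair.
func listObjectsByMetadata(ctx context.Context, client *storage.Client, bucket, key, value string) ([]*storage.ObjectAttrs, error) {
    var found []*storage.ObjectAttrs
    it := client.Bucket(bucket).Objects(ctx, nil)
    for {
        attrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return nil, err
        }
        if attrs.Metadata[key] == value {
            found = append(found, attrs)
        }
    }
    return found, nil
}
```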
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend StorageObjectUpload() to allow setting custom metadata on the
uploaded object.
Modify the worker's osbuild job implementation and the GCP CLI upload
tool to set the chosen image name as custom metadata on the uploaded
object.
This will make it possible to connect Storage objects to specific
images.
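A sketch of setting the metadata at upload time with the
cloud.google.com/go/storage writer (the "image-name" key is
illustrative):

```go
import (
    "context"
    "io"
    "os"

    "cloud.google.com/go/storage"
)

// uploadWithMetadata uploads a local file and tags the resulting object
// with the name of the image it belongs to.
func uploadWithMetadata(ctx context.Context, client *storage.Client, bucket, object, path, imageName string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    w := client.Bucket(bucket).Object(object).NewWriter(ctx)
    w.ObjectAttrs.Metadata = map[string]string{"image-name": imageName}
    if _, err := io.Copy(w, f); err != nil {
        w.Close()
        return err
    }
    return w.Close()
}
```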
Add News entry about image name being added as metadata to uploaded GCP
Storage object as part of worker job.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the internal GCP library to allow deleting a Compute Engine
image and instance. In addition, provide a function to load the service
account credentials file content from the environment.
Change the names used for the GCP image and instance in the `api.sh`
integration test to make them predictable. This is important so that
cloud-cleaner can identify potentially left-over resources and clean
them up. Use the same approach for generating a predictable, but
run-specific, test ID as in GenerateCIArtifactName() from
internal/test/helpers.go. Use SHA-224 to generate a hash from the
string, because the string can contain characters not allowed by GCP in
resource names (specifically "_", e.g. in "x86_64"). SHA-224 was picked
because it generates a short enough output and is future-proof for use
in RHEL (unlike MD5 or SHA-1).
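The hashing itself is just this (the function name is illustrative):

```go
import (
    "crypto/sha256"
    "encoding/hex"
)

// testID turns an arbitrary CI identifier, which may contain characters GCP
// does not allow in resource names (e.g. "_" in "x86_64"), into a short,
// predictable token usable in image and instance names.
func testID(s string) string {
    sum := sha256.Sum224([]byte(s))
    return hex.EncodeToString(sum[:])
}
```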
Refactor cloud-cleaner to clean up GCP resources and also to run cleanup
for each cloud in a separate goroutine.
Modify run_cloud_cleaner.sh to be able to run in an environment in which
AZURE_CREDS is not defined.
Always run cloud-cleaner after integration tests for rhel8, rhel84 and
cs8, which test GCP.
Define DISTRO_CODE for each integration testing stage in Jenkinsfile.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Split the GCP library into multiple files:
- compute.go - code interacting mainly with the Compute Engine resources
- storage.go - code interacting mainly with the Cloud Storage resources
- gcp.go - common code (e.g. authentication with GCP)
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The internal GCP library was originally placed in the `internal/upload`
directory, since its purpose was mainly to upload and import built
images to GCP.
The functionality of other cloud-provider-specific libraries is
broader, but scattered around the `internal/` directory based on
purpose (e.g. in `internal/boot` and `internal/upload`). Since all
parts of a provider-specific library usually share some common pieces
(e.g. authentication), it makes sense to consolidate them into a single
package (e.g. in `internal/cloud/<provider>`).
Create `internal/cloud` directory, where all cloud-provider-specific
internal libraries should be consolidated. Start with GCP.
Signed-off-by: Tomas Hozza <thozza@redhat.com>