The `api.sh` test currently always defaults to the "<REGION>-a" zone
when creating an instance from the built image. Resources in a zone may
get exhausted and the only solution is to use a different zone;
currently, not even a CI job retry mitigates such an error during a CI
run.
Modify `api.sh` to pick a random GCP zone for the given region when
creating a compute instance. Use only GCP zones which are "UP".
The `cloud-cleaner` relied on the behavior of `api.sh` to always choose
the "<REGION>-a" zone. Guessing the chosen zone in `cloud-cleaner` is
not viable, but thankfully the instance name is by default unique for
the whole GCP project. Modify `cloud-cleaner` to iterate over all
available zones in the used region and try to delete the specific
instance in each of them.
Make the `ComputeZonesInRegion` method from the `internal/cloud/gcp`
package exported and use it in `cloud-cleaner` to get the list of
available zones in a region.
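For illustration, a minimal sketch of how the random zone selection
could look, assuming the standard `google.golang.org/api/compute/v1`
client; the `pickRandomZone` helper and its parameters are illustrative,
not the actual `api.sh` or library code:

    package main

    import (
        "context"
        "fmt"
        "math/rand"
        "strings"

        compute "google.golang.org/api/compute/v1"
    )

    // pickRandomZone returns a random zone in the given region whose
    // status is "UP". Hypothetical helper for illustration only.
    func pickRandomZone(ctx context.Context, project, region string) (string, error) {
        svc, err := compute.NewService(ctx)
        if err != nil {
            return "", err
        }
        zones, err := svc.Zones.List(project).Context(ctx).Do()
        if err != nil {
            return "", err
        }
        var up []string
        for _, z := range zones.Items {
            // z.Region is a full URL, so match the region on its suffix.
            if z.Status == "UP" && strings.HasSuffix(z.Region, "/"+region) {
                up = append(up, z.Name)
            }
        }
        if len(up) == 0 {
            return "", fmt.Errorf("no zone in region %q is UP", region)
        }
        return up[rand.Intn(len(up))], nil
    }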
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the OSBuildStagesToRPMs function, associated test, and RPM type
from the worker into the rpmmd subpackage. We will use this function in
the cloud API to compile the NEVRAs for the new metadata endpoint.
If a user uses a temporary access key for login, a session token is also
needed.
This commit adds support for it to the internal aws library and also
to the osbuild-upload-aws helper. Note that this doesn't affect the main
osbuild-composer executable or the worker: there, everything works as
before and session tokens are still not supported. Something for a
follow-up if anyone needs it.
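For illustration, a minimal sketch of the credentials handling, assuming
the aws-sdk-go v1 API; the `newSession` helper name is illustrative:

    package main

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    // newSession creates an AWS session. sessionToken may be empty when
    // permanent credentials are used. Illustrative helper only.
    func newSession(region, accessKeyID, secretAccessKey, sessionToken string) (*session.Session, error) {
        return session.NewSession(&aws.Config{
            Region: aws.String(region),
            // Static credentials take the session token as the third
            // argument; an empty string means no token.
            Credentials: credentials.NewStaticCredentials(accessKeyID, secretAccessKey, sessionToken),
        })
    }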
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add a new CLI option to `osbuild-image-tests` called
`-skip-selinux-ctx-check` to work around the limitation of `setfiles` on
RHEL-8 [1]. If the option is passed to the binary, the
'selinux/context-mismatch' part is removed from both the "expected" and
"actual" image-info reports before the two are compared.
Modify `image_tests.sh` to run `osbuild-image-tests` with
`-skip-selinux-ctx-check` when run on RHEL-8.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1973754
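A sketch of the comparison change, assuming the image-info reports are
handled as plain JSON maps; the function name and report layout here are
illustrative, not the actual test code:

    package main

    import "reflect"

    // compareImageInfo compares the expected and actual image-info
    // reports, optionally dropping the part affected by the RHEL-8
    // setfiles limitation. Illustrative sketch only.
    func compareImageInfo(expected, actual map[string]interface{}, skipSELinuxCtxCheck bool) bool {
        if skipSELinuxCtxCheck {
            delete(expected, "selinux/context-mismatch")
            delete(actual, "selinux/context-mismatch")
        }
        return reflect.DeepEqual(expected, actual)
    }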
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Koji image request handling now reads the exports defined by each image
type; with this, all APIs support reading the defined exports. The
worker still falls back to "assembler" in case the call comes from an
older version of composer.
Uploads an artifact to an S3 bucket and returns a presigned URL to allow
the user to download the file.
Although it uses a lot of common code with the AWS AMI upload target,
it's treated as a completely separate target.
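A minimal sketch of the flow, assuming the aws-sdk-go v1 S3 API; the
`uploadAndPresign` helper name is illustrative:

    package main

    import (
        "os"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    // uploadAndPresign uploads a local file to an S3 bucket and returns a
    // presigned GET URL for it. Illustrative helper only.
    func uploadAndPresign(sess *session.Session, filename, bucket, key string, expiry time.Duration) (string, error) {
        f, err := os.Open(filename)
        if err != nil {
            return "", err
        }
        defer f.Close()

        // Upload the artifact.
        uploader := s3manager.NewUploader(sess)
        if _, err := uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
            Body:   f,
        }); err != nil {
            return "", err
        }

        // Generate a presigned URL so the user can download the file.
        req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
        })
        return req.Presign(expiry)
    }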
Modify composer to use RepoRegistry, instead of loading the host
repositories, when initializing WeldrAPI.
Modify WeldrAPI to use RepoRegistry, instead of a map of repository
definitions. Make sure that the RepoRegistry method specific to the
image type is used in Weldr where appropriate, specifically when
depsolving a Blueprint that is used to build a specific image type.
Update the Weldr API unit tests to reflect the change.
Add a new method to RepoRegistry that returns the list of repositories
which should be used for building an image for a given architecture,
without specifying the exact image type. Add relevant unit tests.
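A hedged sketch of what such a method could look like; the type layout
and signature are hypothetical and only illustrate resolving
repositories by distro and architecture without an image type:

    package reporegistry

    import (
        "fmt"

        "github.com/osbuild/osbuild-composer/internal/rpmmd"
    )

    // RepoRegistry is sketched here as distro name -> arch name -> repos.
    // Hypothetical layout for illustration only.
    type RepoRegistry struct {
        repos map[string]map[string][]rpmmd.RepoConfig
    }

    // ReposByArch returns the repositories that should be used for
    // building any image for the given distro and architecture, without
    // specifying the exact image type. Hypothetical signature.
    func (r *RepoRegistry) ReposByArch(distroName, arch string) ([]rpmmd.RepoConfig, error) {
        archRepos, found := r.repos[distroName][arch]
        if !found {
            return nil, fmt.Errorf("no repositories for distro %q and arch %q", distroName, arch)
        }
        return archRepos, nil
    }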
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Composer does not have a 1:1 mapping between the possible host distro
names and the names of the supported distributions held in the
distroregistry. The host distro's `Name()` method, as passed to the
Weldr API, does not return the same name as the one used as the distro
name for repository definitions. This makes it hard to use
`distro.Distro` and `distro.Arch` directly and to rely on the names they
return.
Add `New*HostDistro()` to all distro definitions, accepting the name
that should be returned by the distro's `Name()` method. This is useful
mainly if the host distro is a Beta or Stream variant of the distro.
Change distroregistry.Registry to contain the host distro as a separate
value, set when creating the registry using the `New()` function. This
value is returned by the `Registry.FromHost()` method. Determining the
host distro is handled by the `NewDefault()` function. Move the distro
name mangling to the distroregistry package. Add relevant unit tests.
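A rough sketch of the resulting registry shape; the exact types and
signatures in distroregistry may differ, this only illustrates keeping
the host distro as a separate value:

    package distroregistry

    import "github.com/osbuild/osbuild-composer/internal/distro"

    // Registry holds all supported distros plus, optionally, the host
    // distro. Simplified sketch; the real package also handles name
    // mangling and error cases.
    type Registry struct {
        distros    map[string]distro.Distro
        hostDistro distro.Distro
    }

    // New creates a registry from the given distros and remembers the
    // host distro separately, so FromHost() does not have to guess it.
    func New(hostDistro distro.Distro, distros ...distro.Distro) *Registry {
        reg := &Registry{
            distros:    map[string]distro.Distro{},
            hostDistro: hostDistro,
        }
        for _, d := range distros {
            reg.distros[d.Name()] = d
        }
        return reg
    }

    // FromHost returns the distro describing the host, as set by New().
    func (r *Registry) FromHost() distro.Distro {
        return r.hostDistro
    }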
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add a method to fetch the Cloudbuild job log.
Add a method to parse the Cloudbuild job log for created resources. The
parsing is specific to the image import Cloudbuild job and its log
format. Add unit tests for the parsing function.
Add a method to clean up all resources (instances, disks, storage
objects) after a Cloudbuild job.
Modify the worker osbuild job implementation and also the GCP upload CLI
tool to use the new cleanup method CloudbuildBuildCleanup().
Keep the StorageImageImportCleanup() method, because it is still used by
the cloud-cleaner tool. There is no way for the cloud-cleaner to figure
out the Cloudbuild job ID to be able to call CloudbuildBuildCleanup()
instead.
Add methods to delete a Compute instance and disk.
Add a method to get Compute instance information. This is useful for
checking whether the instance has already been deleted or whether it
still exists.
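A sketch of the new deletion and lookup helpers, assuming the
`compute/v1` client; the wrapper names are illustrative, the real
methods hang off the library's GCP client type:

    package main

    import (
        "context"

        compute "google.golang.org/api/compute/v1"
        "google.golang.org/api/googleapi"
    )

    // deleteInstanceAndDisk deletes a Compute instance and a disk.
    // Illustrative wrapper only.
    func deleteInstanceAndDisk(ctx context.Context, project, zone, instance, disk string) error {
        svc, err := compute.NewService(ctx)
        if err != nil {
            return err
        }
        if _, err := svc.Instances.Delete(project, zone, instance).Context(ctx).Do(); err != nil {
            return err
        }
        _, err = svc.Disks.Delete(project, zone, disk).Context(ctx).Do()
        return err
    }

    // instanceExists checks whether a Compute instance still exists, e.g.
    // to verify that it has already been deleted.
    func instanceExists(ctx context.Context, project, zone, instance string) (bool, error) {
        svc, err := compute.NewService(ctx)
        if err != nil {
            return false, err
        }
        if _, err := svc.Instances.Get(project, zone, instance).Context(ctx).Do(); err != nil {
            // A 404 means the instance is gone; anything else is a real error.
            if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
                return false, nil
            }
            return false, err
        }
        return true, nil
    }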
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify all relevant methods in the internal GCP library to accept
context from the caller.
Modify all places which call the internal GCP library methods to pass
the context.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The previous version constructed multiple temporary variables and then
created the job result from them. This was needed when we had multiple
upload targets, but now that we have only one, it is just a fragile
version of what can be done in a simplified way.
This PR removes the temporary variables and assigns errors and success
states right after the upload or build has finished.
Multiple upload targets are not supported by osbuild-composer any more.
Dropping support for this in worker therefore doesn't change anything
from the user's perspective, but it allows us to simplify the code a
bit.
Replace calls to "continue" with "return nil" because the job finished
correctly even though it failed to perform the task. But the failure was
reported to osbuild-composer for further processing so there is no need
to duplicate and report the same error in worker process
Drop support for LocalTarget; it has not been used in a long time,
and we don't really need to stay compatible across many releases
(just as long as we don't get problems with having to deploy in
lock-step), at least not yet.
Also drop support for KojiTarget, this has been replaced by the
osbuild-koji job type.
The previous implementation exited before reporting back to the worker
API in a few branches. This left the compose status in the RUNNING state
even though the worker was no longer working on the job. Refactoring the
API call into a deferred call makes sure it gets called every time.
This commit only moves bits of the code around so that the status gets
back to osbuild-composer, but it still doesn't contain any useful
information in case osbuild fails etc. This will be introduced in
subsequent commits.
When shelling out for a CLI test, the error returned from Start() only
prints the exit code, which is not very informative. Capturing and
printing stderr is a lot more useful.
osbuild now supports the `--export` flag (which can be passed multiple
times) to request the export of one or more artifacts. Omitting it
causes the build job to export nothing.
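A sketch of how a caller could pass the exports through, assuming plain
os/exec; the helper name and argument plumbing are illustrative, not the
worker's actual implementation:

    package main

    import "os/exec"

    // runOSBuild runs osbuild with one --export flag per requested
    // export. Passing no exports means the build exports nothing.
    // Illustrative sketch only.
    func runOSBuild(manifestPath, store, outputDir string, exports []string) error {
        args := []string{
            "--store", store,
            "--output-directory", outputDir,
        }
        for _, e := range exports {
            args = append(args, "--export", e)
        }
        args = append(args, manifestPath)
        return exec.Command("osbuild", args...).Run()
    }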
The Koji API doesn't support the new image types (yet) so it simply uses
the "assembler" name, which is the final stage of the old (v1)
Manifests.
Fix a bug in the worker job implementation and the GCP CLI upload tool,
which caused the code to report the wrong error instance in case the
image import failed for some reason.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add StorageListObjectsByMetadata() to the internal GCP library. The
method allows one to search a specific Storage bucket for objects based
on the provided metadata.
Extend cloud-cleaner to search for any Storage objects related to the
image import, using custom metadata set on the object. Delete all found
objects.
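A sketch of the metadata-based search, assuming the
`cloud.google.com/go/storage` client; the helper name is illustrative.
Object metadata cannot be filtered server-side, so the filtering happens
while iterating over the bucket:

    package main

    import (
        "context"

        "cloud.google.com/go/storage"
        "google.golang.org/api/iterator"
    )

    // listObjectsByMetadata returns the names of objects in a bucket
    // whose custom metadata contains all of the given key/value pairs.
    // Illustrative helper only.
    func listObjectsByMetadata(ctx context.Context, client *storage.Client, bucket string, metadata map[string]string) ([]string, error) {
        var names []string
        it := client.Bucket(bucket).Objects(ctx, nil)
        for {
            attrs, err := it.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                return nil, err
            }
            match := true
            for k, v := range metadata {
                if attrs.Metadata[k] != v {
                    match = false
                    break
                }
            }
            if match {
                names = append(names, attrs.Name)
            }
        }
        return names, nil
    }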
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend StorageObjectUpload() to allow setting custom metadata on the
uploaded object.
Modify the worker's osbuild job implementation and the GCP CLI upload
tool to set the chosen image name as custom metadata on the uploaded
object.
This will make it possible to connect Storage objects to specific
images.
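A sketch of setting the metadata during upload with the
`cloud.google.com/go/storage` writer; the helper name and the metadata
key are illustrative:

    package main

    import (
        "context"
        "io"
        "os"

        "cloud.google.com/go/storage"
    )

    // uploadWithMetadata uploads a local file to a Storage bucket and
    // sets the chosen image name as custom metadata on the object.
    // Illustrative helper; "image-name" is a hypothetical metadata key.
    func uploadWithMetadata(ctx context.Context, client *storage.Client, filename, bucket, object, imageName string) error {
        f, err := os.Open(filename)
        if err != nil {
            return err
        }
        defer f.Close()

        wr := client.Bucket(bucket).Object(object).NewWriter(ctx)
        // Custom metadata must be set on the writer before writing data.
        wr.Metadata = map[string]string{"image-name": imageName}
        if _, err := io.Copy(wr, f); err != nil {
            wr.Close()
            return err
        }
        return wr.Close()
    }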
Add a News entry about the image name being added as metadata to the
uploaded GCP Storage object as part of the worker job.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend the internal GCP library to allow deleting a Compute image and
instance. In addition, provide a function to load the service account
credentials file content from the environment.
Change the names used for the GCP image and instance in the `api.sh`
integration test to make them predictable. This is important so that
cloud-cleaner can identify potentially left-over resources and clean
them up. Use the same approach for generating a predictable, but
run-specific, test ID as in GenerateCIArtifactName() from
internal/test/helpers.go. Use SHA-224 to generate a hash from the
string, because the string can contain characters not allowed by GCP in
resource names (specifically "_", e.g. in "x86_64"). SHA-224 was picked
because it generates a short enough output and is future-proof for use
in RHEL (unlike MD5 or SHA-1).
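A minimal sketch of deriving a GCP-safe, run-specific identifier,
assuming the standard library's crypto/sha256 package (which also
provides SHA-224); the helper name is illustrative:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // gcpSafeID hashes a test ID that may contain characters not allowed
    // in GCP resource names (e.g. "_" in "x86_64") into a predictable hex
    // string. Illustrative helper only.
    func gcpSafeID(testID string) string {
        return fmt.Sprintf("%x", sha256.Sum224([]byte(testID)))
    }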
Refactor cloud-cleaner to clean up GCP resources and also to run cleanup
for each cloud in a separate goroutine.
Modify run_cloud_cleaner.sh to be able to run in an environment in which
AZURE_CREDS is not defined.
Always run cloud-cleaner after integration tests for rhel8, rhel84 and
cs8, which test GCP.
Define DISTRO_CODE for each integration testing stage in Jenkinsfile.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The internal GCP library was originally placed into `internal/upload`
directory, since its purpose was mainly to upload and import built
images to GCP.
The functionality of other cloud-provider-specific libraries is
broader, but it is scattered around the `internal/` directory based on
purpose (e.g. in `internal/boot` and `internal/upload`). Since all parts
of a provider-specific library usually share some common pieces (e.g.
authentication), it makes sense to consolidate them into a single
package (e.g. `internal/cloud/<provider>`).
Create `internal/cloud` directory, where all cloud-provider-specific
internal libraries should be consolidated. Start with GCP.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Most images share a build root; checkpoint the build root to speed up
tests at the potential cost of some disk space (in the cases where the
build roots are not the same).
Make the handling of GCP credentials more consistent with what is being
done e.g. for Azure. Make the GCP section in the worker's configuration
a pointer, so that it does not show up in the printed worker
configuration during start-up if it was not specified in the actual
configuration file.
Load the GCP credentials file, if provided, during worker start-up to
prevent a failure later on while processing a job with a GCP upload
target. Pass the loaded GCP credentials as []byte to the OSBuildJobImpl.
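A rough sketch of the configuration shape and the start-up credential
loading; the struct and field names are illustrative, not the worker's
actual configuration:

    package main

    import (
        "io/ioutil"
        "log"
    )

    // gcpConfig is optional in the worker configuration; a nil pointer
    // means it was not present in the file and is not printed at
    // start-up. Illustrative types only.
    type gcpConfig struct {
        Credentials string `toml:"credentials"`
    }

    type workerConfig struct {
        GCP *gcpConfig `toml:"gcp"`
    }

    // loadGCPCredentials reads the credentials file at start-up, so a
    // missing or unreadable file fails early instead of during a job.
    func loadGCPCredentials(cfg *workerConfig) []byte {
        if cfg.GCP == nil || cfg.GCP.Credentials == "" {
            return nil
        }
        creds, err := ioutil.ReadFile(cfg.GCP.Credentials)
        if err != nil {
            log.Fatalf("cannot read GCP credentials file: %v", err)
        }
        return creds
    }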
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify the worker's job implementation to try to share the GCP image
only if the provided list of accounts is not empty.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Originally, the internal GCP library in `internal/upload/gcp` was
logging various information and errors. Refactor the code to move all
logging to callers of the library. As a result, some methods now return
additional information to preserve the same amount of information being
logged for GCP.
Refactor methods to have only a single purpose and not do any extra
work, such as storage cleanup. Methods which create new resources now
don't do any cleanup at all. The caller is responsible for checking for
any errors and performing any necessary cleanup. The methods needed to
perform the cleanup are provided.
Modify worker's job implementation and GCP CLI tool to explicitly do all
necessary cleanup, including in case of errors.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The same test is run in distro/distro_test.go. The redundancy was
probably caused by bitrot across several commits.
I decided to remove the test from the distro implementations to reduce
the amount of duplicated code.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This commit adds a NewDefault() method to distroregistry that returns a
slice with all distributions supported by osbuild-composer. This way,
there's only one place where a distribution needs to be defined when its
support is being added to composer.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>