Defaults according to https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config:
the SDK defaults to a chain of credential providers that searches for
credentials in environment variables, the shared credentials file, and EC2
Instance Roles. If nothing is specified, fall back to the instance role.
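For illustration, a minimal sketch of the caller side with aws-sdk-go v1
(the region value is a placeholder):

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    func newDefaultSession() (*session.Session, error) {
        // With no explicit Credentials in the Config, the SDK falls
        // through its default provider chain: environment variables,
        // the shared credentials file, and the EC2 instance role.
        return session.NewSession(&aws.Config{
            Region: aws.String("us-east-1"), // placeholder region
        })
    }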
Implement the structured errors as defined by the worker client.
Every job type now returns structured errors, each with a reason and a
specific error code. This makes it possible to differentiate between
4xx errors and 5xx errors.
This commit refactors the way errors are implemented in the workers,
but maintains backwards compatibility in composer by checking for
both kinds of errors.
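A minimal sketch of the shape this implies; the exact type and field
names in the worker client may differ:

    import "fmt"

    // WorkerClientError is illustrative, not the actual type name.
    type WorkerClientError struct {
        Code   int    `json:"code"`   // stable, machine-readable code
        Reason string `json:"reason"` // human-readable explanation
    }

    func (e *WorkerClientError) Error() string {
        return fmt.Sprintf("error %d: %s", e.Code, e.Reason)
    }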
Running the job in this case is basically undefined, so let's just skip it
in order to not break anything.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Pipeline names are added to each job before adding to the queue. When a
job is finished, the names are copied to the Result object as well. This
is done for both OSBuild and Koji jobs.
The pipeline names in the result are primarily used to separate package
lists into build and payload/image packages in two cases:
1. Koji builds: for reporting the build root and image package lists to
Koji (in Koji finalize).
2. Cloud API (v1 and v2): for reporting the payload packages in the
metadata request.
The pipeline names are also used to print the system log output in the
order in which pipelines are executed. This is not yet used when
printing the OSBuild Result (osbuild2.Result.Write()), where we still
rely on sorting by pipeline name
(see https://github.com/osbuild/osbuild-composer/pull/1330).
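As a sketch of the data flow (the field names are illustrative, not
necessarily the exact ones used):

    // Pipeline names travel with the job and are copied into the
    // result, split so that package lists can be attributed either to
    // the build root or to the payload/image.
    type PipelineNames struct {
        Build   []string `json:"build"`
        Payload []string `json:"payload"`
    }

    type OSBuildJobResult struct {
        Success       bool           `json:"success"`
        PipelineNames *PipelineNames `json:"pipeline_names,omitempty"`
    }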
- koji-finalize:
Use v2 result type to collect RPM metadata.
The separation between the "build" pipeline and the rest is based on the
pipeline name, which isn't completely reliable since pipeline names can
be arbitrary.
Koji will fail a build if it specifies duplicate packages, so the RPM
lists are deduplicated. The "build" pipeline package list is also
deduplicated in case there are multiple build stages in the same
pipeline (see the sketch after this list).
- osbuild:
Use v2 result type for printing build result to log.
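The deduplication mentioned under koji-finalize can be sketched as
follows, assuming an RPM type with the usual NVRA fields (all names
are illustrative):

    import "fmt"

    type RPM struct {
        Name, Version, Release, Arch string
    }

    // dedupRPMs drops duplicates, keyed on the full NVRA, so the lists
    // handed to Koji never contain the same package twice.
    func dedupRPMs(rpms []RPM) []RPM {
        seen := make(map[string]bool)
        deduped := make([]RPM, 0, len(rpms))
        for _, rpm := range rpms {
            key := fmt.Sprintf("%s-%s-%s.%s", rpm.Name, rpm.Version, rpm.Release, rpm.Arch)
            if !seen[key] {
                seen[key] = true
                deduped = append(deduped, rpm)
            }
        }
        return deduped
    }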
Signed-off-by: Achilleas Koutsou <achilleas@koutsou.net>
To help with debugging, this commit makes the worker print the status
of the different stages: a one-liner for each successful stage and a
detailed message for failed ones.
Sample output:
Jul 23[..]: Build stages results:
Jul 23[..]: org.osbuild.rpm success
Jul 23[..]: org.osbuild.selinux success
Jul 23[..]: Stages results:
Jul 23[..]: org.osbuild.rpm success
Jul 23[..]: org.osbuild.fix-bls success
Jul 23[..]: org.osbuild.fstab success
Jul 23[..]: org.osbuild.grub2 success
Jul 23[..]: org.osbuild.locale success
Jul 23[..]: org.osbuild.timezone success
Jul 23[..]: org.osbuild.users failure:
Jul 23[..]: [/usr/lib/tmpfiles.d/journal-nocow.conf:26] Failed to resolve specifier: uninitialized /etc detected, skipping
Jul 23[..]: All rules containing unresolvable specifiers will be skipped.
Jul 23[..]: Failed to create file /sys/fs/selinux/checkreqprot: Read-only file system
Jul 23[..]: useradd: group 'toto' does not exist
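A sketch of the reporting loop behind this output, assuming a stage
result type carrying a name, a success flag, and the captured output
(names are illustrative):

    import "log"

    type StageResult struct {
        Name    string
        Success bool
        Output  string
    }

    func logStageResults(stages []StageResult) {
        for _, stage := range stages {
            if stage.Success {
                // one-liner for successful stages
                log.Printf("%s success", stage.Name)
            } else {
                // full output for failed stages
                log.Printf("%s failure:\n%s", stage.Name, stage.Output)
            }
        }
    }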
Fixes #1584
Move the OSBuildStagesToRPMs function, associated test, and RPM type
from the worker into the rpmmd subpackage. We will use this function in
the cloud API to compile the NEVRAs for the new metadata endpoint.
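For reference, a sketch of the NEVRA formatting such a function
produces; by convention the epoch is omitted when it is unset:

    import "fmt"

    func nevra(name string, epoch *uint64, version, release, arch string) string {
        if epoch != nil {
            return fmt.Sprintf("%s-%d:%s-%s.%s", name, *epoch, version, release, arch)
        }
        return fmt.Sprintf("%s-%s-%s.%s", name, version, release, arch)
    }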
If a user uses a temporary access key for login, a session token is also
needed.
This commit adds support for it to the internal aws library and also
to the osbuild-upload-aws helper. Note that this affects neither the
main osbuild-composer executable nor the worker; both should work as
before, without session token support. Something for a follow-up if
anyone needs it.
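A minimal sketch with aws-sdk-go v1, where static credentials take the
session token as their third argument (an empty string means no token):

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    func newSession(region, keyID, secret, sessionToken string) (*session.Session, error) {
        return session.NewSession(&aws.Config{
            Region:      aws.String(region),
            Credentials: credentials.NewStaticCredentials(keyID, secret, sessionToken),
        })
    }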
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Uploads an artifact to an S3 bucket and returns a presigned URL to allow
the user to download the file.
Although it uses a lot of common code with the AWS AMI upload target,
it's treated as a completely separate target.
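A sketch of the upload-then-presign flow with aws-sdk-go v1; the
bucket, key, and link lifetime are placeholders:

    import (
        "io"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func uploadAndPresign(sess *session.Session, bucket, key string, body io.Reader) (string, error) {
        uploader := s3manager.NewUploader(sess)
        if _, err := uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
            Body:   body,
        }); err != nil {
            return "", err
        }
        // presign a GET request so the user can download the artifact
        req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
        })
        return req.Presign(24 * time.Hour) // placeholder validity
    }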
Add method to fetch Cloudbuild job log.
Add method to parse Cloudbuild job log for created resources. Parsing is
specific to the image import Cloudbuild job and its log format. Add
unit tests for the parsing function.
Add method to clean up all resources (instances, disks, storage objects)
after a Cloudbuild job.
Modify the worker osbuild job implementation and also the GCP upload CLI
tool to use the new cleanup method CloudbuildBuildCleanup().
Keep the StorageImageImportCleanup() method, because it is still used by
the cloud-cleaner tool. There is no way for the cloud-cleaner to figure
out the Cloudbuild job ID to be able to call CloudbuildBuildCleanup()
instead.
Add methods to delete Compute instance and disk.
Add method to get Compute instance information. This is useful for
checking whether the instance still exists or has already been
deleted.
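A sketch with google.golang.org/api/compute/v1; a 404 from
Instances.Get is how "already deleted" shows up:

    import (
        "context"
        "net/http"

        "google.golang.org/api/compute/v1"
        "google.golang.org/api/googleapi"
    )

    func instanceExists(ctx context.Context, project, zone, name string) (bool, error) {
        svc, err := compute.NewService(ctx)
        if err != nil {
            return false, err
        }
        _, err = svc.Instances.Get(project, zone, name).Context(ctx).Do()
        if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == http.StatusNotFound {
            return false, nil // the instance has already been deleted
        }
        return err == nil, err
    }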
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify all relevant methods in the internal GCP library to accept
context from the caller.
Modify all places which call the internal GCP library methods to pass
the context.
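A sketch of what this enables on the caller side; the interface stands
in for the library handle and the signature is simplified:

    import (
        "context"
        "time"
    )

    // gcpUploader stands in for the internal GCP library handle; the
    // real method signature may differ.
    type gcpUploader interface {
        StorageObjectUpload(ctx context.Context, bucket, object string) error
    }

    // Callers now own the context, so they can bound or cancel
    // long-running cloud operations.
    func uploadWithTimeout(g gcpUploader, bucket, object string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
        defer cancel()
        return g.StorageObjectUpload(ctx, bucket, object)
    }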
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The previous version constructed multiple temporary variables and then
created the job result from them. This was needed when we had multiple
upload targets, but now that we have only one, it is just a fragile
version of what can be done more simply.
This PR removes the temporary variables and assigns errors and success
states right after the upload or build has finished.
Multiple upload targets are not supported by osbuild-composer any more.
Dropping support for this in worker therefore doesn't change anything
from the user's perspective, but it allows us to simplify the code a
bit.
Replace calls to "continue" with "return nil" because the job finished
correctly even though it failed to perform the task. But the failure was
reported to osbuild-composer for further processing so there is no need
to duplicate and report the same error in worker process
Drop support for LocalTarget, this has not been used in a long time,
and we don't really need to stay compatible across many releases
(just as long as we don't get problems with having to deploy in
lock-step), at least not yet.
Also drop support for KojiTarget, this has been replaced by the
osbuild-koji job type.
The previous implementation exited before reporting back to the worker
API in a few branches. This left the compose status in the RUNNING
state even though the worker was no longer working on the job. Moving
the API call into a `defer` makes sure it gets called every time.
This commit only moves bits of the code around so that the status gets
back to osbuild-composer, but it still doesn't contain any useful
information in case osbuild fails etc. This will be introduced in
subsequent commits.
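A sketch of the pattern; the names are illustrative, not the exact
worker API:

    type jobResult struct {
        Success bool
    }

    type job interface {
        FinishWithResult(result *jobResult) error
    }

    func doBuild() error { return nil } // stand-in for running osbuild

    func runJob(j job) error {
        result := &jobResult{}
        // Deferring the report guarantees osbuild-composer hears back
        // on every return path, so a compose cannot stay in RUNNING.
        defer j.FinishWithResult(result)

        if err := doBuild(); err != nil {
            return nil // the failure travels in the result, not the error
        }
        result.Success = true
        return nil
    }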
osbuild now supports using the `--export` flag (can be invoked multiple
times) to request the exporting of one or more artefacts. Omitting it
causes the build job to export nothing.
The Koji API doesn't support the new image types (yet) so it simply uses
the "assembler" name, which is the final stage of the old (v1)
Manifests.
Fix a bug in the worker job implementation and GCP CLI upload tool
which caused the code to report the wrong error instance in case the
image import failed for some reason.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend StorageObjectUpload() to allow setting custom metadata on the
uploaded object.
Modify worker's osbuild job implementation and GCP CLI upload tool to
set the chosen image name as custom metadata on the uploaded object.
This will make it possible to connect Storage objects to specific
images.
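With cloud.google.com/go/storage this amounts to setting Metadata on
the object writer before uploading; the metadata key here is a
placeholder:

    import (
        "context"
        "io"

        "cloud.google.com/go/storage"
    )

    func uploadWithImageName(ctx context.Context, client *storage.Client, bucket, object, imageName string, r io.Reader) error {
        w := client.Bucket(bucket).Object(object).NewWriter(ctx)
        w.Metadata = map[string]string{"image-name": imageName} // placeholder key
        if _, err := io.Copy(w, r); err != nil {
            w.Close()
            return err
        }
        return w.Close()
    }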
Add News entry about image name being added as metadata to uploaded GCP
Storage object as part of worker job.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The internal GCP library was originally placed into `internal/upload`
directory, since its purpose was mainly to upload and import built
images to GCP.
Functionality for other cloud-provider-specific libraries is broader,
but scattered around the `internal/` directory based on purpose (e.g. in
`internal/boot` and `internal/upload`). Since all parts of provider-specific
library usually share some common pieces (e.g. authentication), it makes
sense to consolidate them into a single package (e.g. in
`internal/cloud/<provider>`).
Create `internal/cloud` directory, where all cloud-provider-specific
internal libraries should be consolidated. Start with GCP.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Make the handling of GCP credentials more consistent with what is being
done e.g. for Azure. Make the GCP section in the worker's configuration
a pointer so that it does not show up in the printed worker
configuration during startup if it was not specified in the actual
configuration file.
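A sketch of the configuration shape this implies; the field names are
illustrative, and nil simply means "not configured":

    type gcpConfig struct {
        Credentials string `toml:"credentials"`
    }

    type workerConfig struct {
        // A pointer keeps an unconfigured provider out of the
        // configuration printed at startup: nil is skipped, while an
        // empty struct would still be shown.
        GCP *gcpConfig `toml:"gcp"`
    }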
Load the GCP credentials file, if provided, during worker startup to
prevent a failure later on while processing a job with a GCP upload
target.
Pass the loaded GCP credentials as []byte to the OSBuildJobImpl.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify the worker's job implementation to try to share the GCP image
only if the
provided list of accounts is not empty.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Originally, the internal GCP library in `internal/upload/gcp` was
logging various information and errors. Refactor the code to move all
logging to callers of the library. As a result, some methods now return
additional information to preserve the same amount of information being
logged for GCP.
Refactor methods to have only a single purpose and not do any extra
work, such as storage cleanup. Methods which create new resources now
don't do any cleanup at all. The caller is responsible for checking
for errors and performing any necessary cleanup; the methods needed
for that cleanup are provided.
Modify worker's job implementation and GCP CLI tool to explicitly do all
necessary cleanup, including in case of errors.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This commit adds and implements org.osbuild.azure.image target.
Let's first talk about the already implemented org.osbuild.azure target:
The purpose of this target is to authenticate using the Azure Storage
credentials and upload the image file as a Page Blob. Page Blob is basically
an object in storage and it cannot be directly used to launch a VM. To achieve
that, you need to define an actual Azure Image with the Page Blob attached.
For the cloud API, we would like to create an actual Azure Image that is
immediately available for new VMs. The new target accomplishes this.
To achieve this, it must use a different authentication method: Azure OAuth.
The other important difference is that currently, the credentials are stored
on the worker and not in target options. This should lead to better security
because we don't send the credentials over network. In the future, we would
like to have credential-less setup using workers in Azure with the right
IAM policies applied but this requires more investigation and is not
implemented in this commit.
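A minimal sketch of the OAuth side with go-autorest, assuming
service-principal credentials stored on the worker (the API version in
the import path is illustrative):

    import (
        "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-07-01/compute"
        "github.com/Azure/go-autorest/autorest/azure/auth"
    )

    func newImagesClient(tenantID, clientID, clientSecret, subscriptionID string) (compute.ImagesClient, error) {
        // The credentials live on the worker; only the resulting OAuth
        // token is used against the API, so nothing sensitive has to be
        // sent in the target options.
        config := auth.NewClientCredentialsConfig(clientID, clientSecret, tenantID)
        authorizer, err := config.Authorizer()
        if err != nil {
            return compute.ImagesClient{}, err
        }
        client := compute.NewImagesClient(subscriptionID)
        client.Authorizer = authorizer
        return client, nil
    }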
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The UploadImage method doesn't actually create an image. It creates a
Page Blob. A blob is roughly the Azure equivalent of an S3 object; a
page blob is a blob optimized for random access, and it's the only blob
type that can be used to create images.
This commit cleans up the terminology so it's less confusing.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Return GCP-specific target results from the worker, similar to what is
done for AWS.
Extend Cloud API to allow GCP-specific upload Options.
Modify Cloud API to return UploadOptions as part of the UploadStatus.
Modify Cloud API integration test to check returned upload Options and
upload Type.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the TargetResult struct to OSBuildJobResult. Include the 'options'
interface on TargetResult to contain target-specific information,
for example amiID and region from AWS. Expose 'options' on a status
call as an UploadStatus field. Add logic to support AWS within this
format, which can be used as a template for other targets.
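As a sketch of the shape (field names are illustrative):

    type TargetResult struct {
        Name    string      `json:"name"`              // e.g. "org.osbuild.aws"
        Options interface{} `json:"options,omitempty"` // target-specific payload
    }

    // The AWS flavor of the options, exposed as an UploadStatus field.
    type AWSTargetResultOptions struct {
        AMI    string `json:"ami"`
        Region string `json:"region"`
    }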
Add support for GCP as an upload target to the internal API.
Extend the cloudapi to allow GCP as an upload target in the compose
request. Regenerate the cloudapi Go code. Add a GCP-specific upload
result component in the API definition, similar to AWS. It is not yet
used, but it will be once returning a target-specific result from the
worker is supported.
Add support for GCP upload target to the worker job implementation.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Let's keep this on the same filesystem as the osbuild store, and
in particular stay away from /var/tmp and its scary semantics.
We are not aware of any issues caused by /var/tmp, but getting
rid of it means we don't have to think about that when debugging,
if nothing else.
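Concretely, a sketch of the idea: create the job's temporary directory
next to the store, so both live on the same filesystem (the path is
illustrative):

    import (
        "io/ioutil"
        "os"
    )

    func withTempDir(run func(dir string) error) error {
        // same filesystem as the osbuild store, away from /var/tmp
        tmp, err := ioutil.TempDir("/var/cache/osbuild-composer", "job-")
        if err != nil {
            return err
        }
        defer os.RemoveAll(tmp)
        return run(tmp)
    }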
Signed-off-by: Tom Gundersen <teg@jklm.no>