Two configurations for the listeners are now possible:
- enableJWT=false with client ssl auth
- enableJWT=true with https
Actual verification of the tokens is handled by
https://github.com/openshift-online/ocm-sdk-go.
An authentication handler is run as the top level handler, before any
routing is done. Routes which do not require authentication should be
listed as exceptions.
Authentication can be restricted using an ACL file which allows
filtering based on JWT claims. For more information see the inline
comments in ocm-sdk/authentication.
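Roughly, the wiring looks like this. This is a minimal sketch using only the standard library; the verifyJWT helper and the route list are placeholders for the verification and ACL filtering that ocm-sdk-go actually performs:

```go
// Sketch: an authentication handler wraps the router, with exceptions for
// routes that do not require authentication. verifyJWT is a hypothetical
// stand-in for the ocm-sdk-go verification.
package main

import (
	"log"
	"net/http"
)

var publicRoutes = map[string]bool{
	"/openapi.json": true, // example of a route exempt from authentication
}

func verifyJWT(r *http.Request) bool {
	// Placeholder: the real check (signature, claims, ACL filtering) is
	// delegated to the ocm-sdk-go authentication handler.
	return r.Header.Get("Authorization") != ""
}

func authMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !publicRoutes[r.URL.Path] && !verifyJWT(r) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/openapi.json", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("{}"))
	})
	log.Fatal(http.ListenAndServe(":8080", authMiddleware(mux)))
}
```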
As an added quirk, the `-v` flag for the osbuild-composer executable was
changed to `-verbose` to avoid a flag collision with glog, which declares
the `-v` flag in its package `init()` function. The ocm-sdk depends on
glog and pulls it in.
To help with debugging, this commit makes the worker print the status of
the different stages: a one-liner for each successful stage and a detailed
message for failed ones.
Sample output:
Jul 23[..]: Build stages results:
Jul 23[..]: org.osbuild.rpm success
Jul 23[..]: org.osbuild.selinux success
Jul 23[..]: Stages results:
Jul 23[..]: org.osbuild.rpm success
Jul 23[..]: org.osbuild.fix-bls success
Jul 23[..]: org.osbuild.fstab success
Jul 23[..]: org.osbuild.grub2 success
Jul 23[..]: org.osbuild.locale success
Jul 23[..]: org.osbuild.timezone success
Jul 23[..]: org.osbuild.users failure:
Jul 23[..]: [/usr/lib/tmpfiles.d/journal-nocow.conf:26] Failed to resolve specifier: uninitialized /etc detected, skipping
Jul 23[..]: All rules containing unresolvable specifiers will be skipped.
Jul 23[..]: Failed to create file /sys/fs/selinux/checkreqprot: Read-only file system
Jul 23[..]: useradd: group 'toto' does not exist
Fixes #1584
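A rough sketch of the per-stage logging; the StageResult type stands in for the structure the worker actually decodes from the osbuild output, and its field names are assumptions:

```go
// Sketch: one line per successful stage, full output for failed stages.
package main

import "log"

type StageResult struct {
	Name    string
	Success bool
	Output  string
}

func logStageResults(header string, stages []StageResult) {
	log.Println(header)
	for _, stage := range stages {
		if stage.Success {
			log.Printf("  %s success", stage.Name)
		} else {
			// Failed stages get their full output for easier debugging.
			log.Printf("  %s failure:\n%s", stage.Name, stage.Output)
		}
	}
}

func main() {
	logStageResults("Build stages results:", []StageResult{
		{Name: "org.osbuild.rpm", Success: true},
		{Name: "org.osbuild.selinux", Success: true},
	})
}
```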
Move the OSBuildStagesToRPMs function, associated test, and RPM type
from the worker into the rpmmd subpackage. We will use this function in
the cloud API to compile the NEVRAs for the new metadata endpoint.
If a user uses a temporary access key for login, a session token is also
needed.
This commit adds support for it to the internal aws library and also
to the osbuild-upload-aws helper. Note that this doesn't affect the main
osbuild-composer executable or the worker: everything there works as
before and session tokens are not supported. That's something for a
follow-up if anyone needs it.
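A minimal sketch of constructing such a session with aws-sdk-go; the helper name and wiring are illustrative, not the actual internal library:

```go
// Sketch: temporary credentials need the session token alongside the access
// key and secret key. An empty token keeps the old behaviour.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

func newSession(region, accessKeyID, secretAccessKey, sessionToken string) (*session.Session, error) {
	creds := credentials.NewStaticCredentials(accessKeyID, secretAccessKey, sessionToken)
	return session.NewSession(&aws.Config{
		Region:      aws.String(region),
		Credentials: creds,
	})
}

func main() {
	if _, err := newSession("us-east-1", "AKIA-example", "secret", "token"); err != nil {
		log.Fatal(err)
	}
}
```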
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Koji image request handling now reads the exports defined by each image
type. All APIs now support reading the exports defined by each image
type. The worker still falls back to "assembler" in case the call comes
from an older version of composer.
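A small sketch of the fallback; the function name is illustrative, not the actual worker code:

```go
// Sketch: use the exports declared by the image type when the request
// carries them, otherwise fall back to the old "assembler" export used by
// older composers with v1 manifests.
package main

import "fmt"

func exportsForJob(requested []string) []string {
	if len(requested) > 0 {
		return requested
	}
	return []string{"assembler"}
}

func main() {
	fmt.Println(exportsForJob(nil))               // [assembler]
	fmt.Println(exportsForJob([]string{"image"})) // [image]
}
```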
Uploads an artifact to an S3 bucket and returns a presigned URL to allow
the user to download the file.
Although it uses a lot of common code with the AWS AMI upload target,
it's treated as a completely separate target.
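Roughly, the flow could look like this with aws-sdk-go; the bucket, key, and expiration period are illustrative, not the actual target implementation:

```go
// Sketch: upload the artifact, then presign a GET request so the user can
// download the file without credentials.
package main

import (
	"log"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func uploadAndPresign(sess *session.Session, bucket, key, filename string) (string, error) {
	file, err := os.Open(filename)
	if err != nil {
		return "", err
	}
	defer file.Close()

	// Upload the artifact to the bucket.
	uploader := s3manager.NewUploader(sess)
	if _, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   file,
	}); err != nil {
		return "", err
	}

	// Generate a presigned URL so the user can download the file.
	req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	return req.Presign(24 * time.Hour)
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	url, err := uploadAndPresign(sess, "my-bucket", "image.qcow2", "/tmp/image.qcow2")
	if err != nil {
		log.Fatal(err)
	}
	log.Println(url)
}
```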
Add method to fetch Cloudbuild job log.
Add method to parse Cloudbuild job log for created resources. Parsing is
specific to the image import Cloudbuild job and its log format. Add
unit tests for the parsing function.
Add method to clean up all resources (instances, disks, storage objects)
after a Cloudbuild job.
Modify the worker osbuild job implementation and also the GCP upload CLI
tool to use the new cleanup method CloudbuildBuildCleanup().
Keep the StorageImageImportCleanup() method, because it is still used by
the cloud-cleaner tool. There is no way for the cloud-cleaner to figure
out the Cloudbuild job ID to be able to call CloudbuildBuildCleanup()
instead.
Add methods to delete Compute instance and disk.
Add method to get Compute instance information. This is useful for
checking whether the instance has already been deleted or still exists.
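A hypothetical sketch of the caller-side flow; only the CloudbuildBuildCleanup name comes from this change, while the interface and signature are stand-ins:

```go
// Sketch: after the image import Cloudbuild job finishes (successfully or
// not), the caller invokes a single cleanup entry point for the instances,
// disks and storage objects left behind.
package main

import (
	"context"
	"fmt"
	"log"
)

type Cleaner interface {
	CloudbuildBuildCleanup(ctx context.Context, buildID string) error
}

func importAndCleanUp(ctx context.Context, g Cleaner, buildID string) error {
	// ... trigger the image import Cloudbuild job and wait for it here ...

	// Clean up regardless of whether the import succeeded.
	if err := g.CloudbuildBuildCleanup(ctx, buildID); err != nil {
		return fmt.Errorf("cleanup after Cloudbuild job %s failed: %w", buildID, err)
	}
	return nil
}

type noopCleaner struct{}

func (noopCleaner) CloudbuildBuildCleanup(ctx context.Context, buildID string) error { return nil }

func main() {
	if err := importAndCleanUp(context.Background(), noopCleaner{}, "build-1234"); err != nil {
		log.Fatal(err)
	}
}
```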
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify all relevant methods in the internal GCP library to accept
context from the caller.
Modify all places which call the internal GCP library methods to pass
the context.
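A minimal illustration of the signature change; the method and type names are illustrative, not the actual internal GCP library:

```go
// Sketch: library methods take a context.Context from the caller instead of
// creating their own, so cancellation and timeouts propagate.
package main

import (
	"context"
	"fmt"
	"time"
)

type client struct{}

// Before: func (c *client) StorageObjectDelete(bucket, object string) error
// After: the caller's context flows into every API call.
func (c *client) StorageObjectDelete(ctx context.Context, bucket, object string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		// ... call the GCP API with ctx here ...
		return nil
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Println((&client{}).StorageObjectDelete(ctx, "bucket", "object"))
}
```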
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The previous version constructed multiple temporary variables and then
created the job result from them. This was needed because we had multiple
upload targets, but now that we have only one, it is just a fragile
version of what can be done in a simpler way.
This PR removes the temporary variables and assigns errors and success
states right after the upload or build has finished.
Multiple upload targets are not supported by osbuild-composer any more.
Dropping support for this in worker therefore doesn't change anything
from the user's perspective, but it allows us to simplify the code a
bit.
Replace calls to "continue" with "return nil", because the job finished
correctly even though it failed to perform the task. The failure was
reported to osbuild-composer for further processing, so there is no need
to duplicate and report the same error in the worker process.
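Roughly, the pattern now looks like this; the types and field names are illustrative:

```go
// Sketch: the upload failure is recorded in the job result reported back to
// osbuild-composer, and the worker returns nil because the job itself
// completed. Previously this was a "continue" in a loop over targets.
package main

import "fmt"

type OSBuildJobResult struct {
	Success      bool
	UploadStatus string
}

func runUpload(result *OSBuildJobResult) error {
	if err := upload(); err != nil {
		// Report the failure through the job result instead of returning an
		// error from the worker's job handler.
		result.UploadStatus = "failure"
		return nil
	}
	result.Success = true
	result.UploadStatus = "success"
	return nil
}

func upload() error { return fmt.Errorf("upload failed") }

func main() {
	var result OSBuildJobResult
	_ = runUpload(&result)
	fmt.Println(result.UploadStatus)
}
```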
Drop support for LocalTarget, this has not been used in a long time,
and we don't really need to stay compatible across many releases
(just as long as we don't get problems with having to deploy in
lock-step), at least not yet.
Also drop support for KojiTarget, this has been replaced by the
osbuild-koji job type.
The previous implementation exited before reporting back to the worker
API in a few branches. This left the compose status in the RUNNING state
even though the worker was no longer working on the job. Refactoring the
API call into the `deref` part makes sure it gets called every time.
This commit only moves bits of the code around so that the status gets
back to osbuild-composer, but it still doesn't contain any useful
information in case osbuild fails etc. This will be introduced in
subsequent commits.
osbuild now supports using the `--export` flag (can be invoked multiple
times) to request the exporting of one or more artifacts. Omitting it
causes the build job to export nothing.
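A small sketch of how the exports could be passed on the osbuild command line; the paths and the exact argument list are simplified compared to the real worker invocation:

```go
// Sketch: the --export flag is repeated once per requested artifact.
package main

import (
	"fmt"
	"os/exec"
)

func osbuildCommand(store, outputDir, manifestPath string, exports []string) *exec.Cmd {
	args := []string{"--store", store, "--output-directory", outputDir, "--json"}
	for _, export := range exports {
		// Omitting --export entirely makes the build export nothing.
		args = append(args, "--export", export)
	}
	args = append(args, manifestPath)
	return exec.Command("osbuild", args...)
}

func main() {
	cmd := osbuildCommand("/var/cache/osbuild-worker/store", "/tmp/output", "/tmp/manifest.json", []string{"assembler"})
	fmt.Println(cmd.Args)
}
```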
The Koji API doesn't support the new image types (yet) so it simply uses
the "assembler" name, which is the final stage of the old (v1)
Manifests.
Fix a bug in the worker job implementation and GCP CLI upload tool,
which caused the code to report the wrong error instance in case the
image import failed for some reason.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Extend StorageObjectUpload() to allow setting custom metadata on the
uploaded object.
Modify worker's osbuild job implementation and GCP CLI upload tool to
set the chosen image name as custom metadata on the uploaded object.
This will make it possible to connect Storage objects to specific
images.
Add a News entry about the image name being added as metadata to the
uploaded GCP Storage object as part of the worker job.
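A sketch of the extended upload, assuming the cloud.google.com/go/storage client; the metadata key is illustrative:

```go
// Sketch: the chosen image name is attached as custom metadata on the
// Storage object so it can later be connected to a specific image.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

func uploadWithImageName(ctx context.Context, bucket, object, filename, imageName string) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	f, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer f.Close()

	w := client.Bucket(bucket).Object(object).NewWriter(ctx)
	// Custom metadata connecting the Storage object to the image.
	w.Metadata = map[string]string{"osbuild-composer-image-name": imageName}
	if _, err := io.Copy(w, f); err != nil {
		w.Close()
		return err
	}
	return w.Close()
}

func main() {
	if err := uploadWithImageName(context.Background(), "my-bucket", "image.tar.gz", "/tmp/image.tar.gz", "my-image"); err != nil {
		log.Fatal(err)
	}
}
```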
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The internal GCP library was originally placed into `internal/upload`
directory, since its purpose was mainly to upload and import built
images to GCP.
Functionality for other cloud-provider-specific libraries is broader, but
it is scattered around the `internal/` directory based on purpose (e.g. in
`internal/boot` and `internal/upload`). Since all parts of a provider-specific
library usually share some common pieces (e.g. authentication), it makes
sense to consolidate them into a single package (e.g. in
`internal/cloud/<provider>`).
Create `internal/cloud` directory, where all cloud-provider-specific
internal libraries should be consolidated. Start with GCP.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Make the handling of GCP credentials more consistent with what is being
done e.g. for Azure. Make the GCP section in the worker's configuration a
pointer so that it does not show up in the printed worker configuration
during start-up if it was not specified in the actual configuration file.
Load the GCP credentials file, if provided, during worker start-up to
prevent failure later on while processing a job with a GCP upload target.
Pass the loaded GCP credentials as []byte to the OSBuildJobImpl.
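A rough sketch of the configuration handling; the struct and field names are assumptions, not the actual worker configuration:

```go
// Sketch: the GCP section is a pointer so an absent section stays nil (and
// out of the printed configuration), and the credentials file is read once
// at start-up rather than while processing a job.
package main

import (
	"fmt"
	"log"
	"os"
)

type gcpConfiguration struct {
	Credentials string `toml:"credentials"`
}

type workerConfig struct {
	GCP *gcpConfiguration `toml:"gcp"`
}

func loadGCPCredentials(cfg *workerConfig) ([]byte, error) {
	if cfg.GCP == nil || cfg.GCP.Credentials == "" {
		return nil, nil // no GCP section configured
	}
	// Fail now rather than later, while processing a job with a GCP target.
	return os.ReadFile(cfg.GCP.Credentials)
}

func main() {
	cfg := workerConfig{GCP: &gcpConfiguration{Credentials: "/etc/osbuild-worker/gcp-credentials.json"}}
	creds, err := loadGCPCredentials(&cfg)
	if err != nil {
		log.Fatalf("failed to load GCP credentials: %v", err)
	}
	fmt.Printf("loaded %d bytes of credentials\n", len(creds))
}
```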
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Modify the worker's job implementation to try to share the GCP image only
if the provided list of accounts is not empty.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Originally, the internal GCP library in `internal/upload/gcp` was
logging various information and errors. Refactor the code to move all
logging to callers of the library. As a result, some methods now return
additional information to preserve the same amount of information being
logged for GCP.
Refactor methods to have only a single purpose and not do any extra work,
such as storage cleanup. Methods which create new resources now don't do
any cleanup at all. The caller is responsible for checking for errors and
performing any necessary cleanup; the methods needed to perform that
cleanup are provided.
Modify worker's job implementation and GCP CLI tool to explicitly do all
necessary cleanup, including in case of errors.
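An illustrative caller-side flow after the refactor; the interface and method names are stand-ins for the internal GCP library:

```go
// Sketch: the library method only creates the resource, and the caller both
// logs and schedules the cleanup, including on error paths.
package main

import (
	"context"
	"log"
)

type importer interface {
	StorageObjectUpload(ctx context.Context, filename, bucket, object string) error
	StorageObjectDelete(ctx context.Context, bucket, object string) error
	ComputeImageImport(ctx context.Context, bucket, object, imageName string) error
}

func importImage(ctx context.Context, g importer, filename, bucket, object, imageName string) error {
	if err := g.StorageObjectUpload(ctx, filename, bucket, object); err != nil {
		return err
	}
	// The library no longer cleans up behind itself; the caller removes the
	// intermediate Storage object whether the import succeeds or fails.
	defer func() {
		if err := g.StorageObjectDelete(ctx, bucket, object); err != nil {
			log.Printf("failed to delete %s/%s: %v", bucket, object, err)
		}
	}()

	return g.ComputeImageImport(ctx, bucket, object, imageName)
}

func main() {
	// Wiring a real client is out of scope for this sketch.
	_ = importImage
}
```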
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This commit adds and implements the org.osbuild.azure.image target.
Let's talk about the already implemented org.osbuild.azure target first:
The purpose of this target is to authenticate using the Azure Storage
credentials and upload the image file as a Page Blob. Page Blob is basically
an object in storage and it cannot be directly used to launch a VM. To achieve
that, you need to define an actual Azure Image with the Page Blob attached.
For the cloud API, we would like to create an actual Azure Image that is
immediately available for new VMs. The new target accomplishes this.
To achieve this, it must use a different authentication method: Azure OAuth.
The other important difference is that currently, the credentials are stored
on the worker and not in target options. This should lead to better security
because we don't send the credentials over the network. In the future, we
would like to have a credential-less setup using workers in Azure with the
right IAM policies applied, but this requires more investigation and is not
implemented in this commit.
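A very rough sketch of the difference between the two targets; all struct and field names are illustrative, not the actual types:

```go
// Sketch: the old Storage target carries credentials in the target options,
// while the new image target only carries the resource location and the
// worker holds the Azure OAuth credentials in its own configuration.
package main

import "fmt"

// Old org.osbuild.azure target: Storage credentials travel with the job.
type azureStorageTargetOptions struct {
	StorageAccount   string
	StorageAccessKey string // credential sent over the network
	Container        string
}

// New org.osbuild.azure.image target: no credentials in the options.
type azureImageTargetOptions struct {
	TenantID       string
	SubscriptionID string
	ResourceGroup  string
	Location       string
}

// Worker-side configuration holding the OAuth credentials instead.
type azureWorkerCredentials struct {
	ClientID     string
	ClientSecret string
}

func main() {
	fmt.Println(azureImageTargetOptions{TenantID: "tenant-id", SubscriptionID: "sub-id", ResourceGroup: "rg", Location: "eastus"})
	_ = azureStorageTargetOptions{}
	_ = azureWorkerCredentials{}
}
```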
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The UploadImage method doesn't actually create an image; it creates a Page
Blob. A blob is Azure's equivalent of an S3 object. A Page Blob is optimized
for random access and is the only blob type that can be used to create
images.
This commit cleans up the terminology so it's less confusing.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Return GCP-specific target results from the worker, similar to how it is
done for AWS.
Extend Cloud API to allow GCP-specific upload Options.
Modify Cloud API to return UploadOptions as part of the UploadStatus.
Modify Cloud API integration test to check returned upload Options and
upload Type.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Add the TargetResult struct to OSBuildJobResult. Include the 'options'
interface on TargetResult to contain target-specific information,
for example amiID and region from AWS. Expose 'options' on a status
call as an UploadStatus field. Add logic to support AWS within this
format, which can be used as a template for other targets.
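A sketch of the result shape; the field names follow the description above and may differ from the actual structs:

```go
// Sketch: a TargetResult carries target-specific details (e.g. amiID and
// region for AWS) in an interface-typed options field, which is exposed on
// the status call as an UploadStatus field.
package main

import (
	"encoding/json"
	"fmt"
)

type TargetResult struct {
	Name    string      `json:"name"`
	Options interface{} `json:"options,omitempty"` // target-specific details
}

type AWSTargetResultOptions struct {
	AMI    string `json:"ami"`
	Region string `json:"region"`
}

type OSBuildJobResult struct {
	Success       bool           `json:"success"`
	TargetResults []TargetResult `json:"target_results,omitempty"`
}

func main() {
	result := OSBuildJobResult{
		Success: true,
		TargetResults: []TargetResult{
			{Name: "org.osbuild.aws", Options: AWSTargetResultOptions{AMI: "ami-0123456789abcdef0", Region: "us-east-1"}},
		},
	}
	out, _ := json.MarshalIndent(result, "", "  ")
	fmt.Println(string(out))
}
```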
Add support for GCP as an upload target to the internal API.
Extend the cloudapi to allow GCP as an upload target in the compose
request. Regenerate the cloudapi Go code. Add a GCP-specific upload
result component to the API definition, similar to AWS. It is not yet
used, but it will be once returning a target-specific result from the
worker is supported.
Add support for GCP upload target to the worker job implementation.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Let's keep this on the same filesystem as the osbuild store, and
in particular stay away from /var/tmp and its scary semantics.
We are not aware of any issues caused by /var/tmp, but getting
rid of it means we don't have to think about that when debugging,
if nothing else.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The VMDK image has the default name 'disk.vmdk' and there is no option to
change the name when uploading to vSphere, so a symlink is used so that the
uploaded image has the name the user specified instead of the default one.
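A minimal sketch of the workaround; paths and names are illustrative:

```go
// Sketch: link the default 'disk.vmdk' under the user-chosen name and upload
// the link, so the artifact in vSphere gets the user-specified name.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func linkWithUserName(outputDir, userImageName string) (string, error) {
	target := filepath.Join(outputDir, "disk.vmdk")
	link := filepath.Join(outputDir, userImageName+".vmdk")
	if err := os.Symlink(target, link); err != nil {
		return "", err
	}
	// Upload `link` instead of `target`.
	return link, nil
}

func main() {
	if _, err := linkWithUserName(os.TempDir(), "my-image"); err != nil {
		log.Println(err)
	}
}
```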
New upload target for VMWare, similar to the ones for AWS and Azure,
allowing users to set credentials for their vSphere instance.
The commit also includes the function that performs the actual upload.
Expose a more detailed job status result - specifically, include upload status
alongside image status. Expand openapi.yml accordingly and add an UploadStatus
field to the OSBuildJobResult struct. At the moment, only represent the
"success" and "failure" states of UploadStatus - to differentiate between
"pending" and "running" would involve significant design decisions and should be
addressed in a separate commit.
Previously, the checks that dependencies were successful were all over the
Run() method. This led to issue #1101 (lovely binary number btw).
This commit rewrites the Run() method (sketched below) to:
1) Extract dynamic args. Return an error if they cannot be unmarshalled.
2) Check if dependencies were successful. If not, call kojiFail, update the
job and return.
3) Create the CGImport metadata and call kojiImport.
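A condensed sketch of the restructured Run() method; kojiFail and kojiImport come from the description above, while their signatures and the surrounding types are simplified stand-ins:

```go
// Sketch: the three steps of the rewritten Run() method for the
// koji-finalize job.
package main

import (
	"errors"
	"fmt"
)

type job struct {
	dynamicArgsOK bool
	depsOK        bool
}

func (j *job) Run() error {
	// 1) Extract dynamic args; return an error if they cannot be unmarshalled.
	if !j.dynamicArgsOK {
		return errors.New("cannot unmarshal dynamic args")
	}

	// 2) If any dependency failed, mark the Koji build as failed, update the
	//    job result and return without importing anything.
	if !j.depsOK {
		j.kojiFail()
		return nil
	}

	// 3) Build the content-generator import metadata and import into Koji.
	return j.kojiImport()
}

func (j *job) kojiFail()         { fmt.Println("koji build failed") }
func (j *job) kojiImport() error { fmt.Println("koji import"); return nil }

func main() {
	_ = (&job{dynamicArgsOK: true, depsOK: true}).Run()
}
```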
Fixes #1101
Signed-off-by: Ondřej Budai <ondrej@budai.cz>