Commit graph

107 commits

Author SHA1 Message Date
Sanne Raymaekers
8a8607cdf6 internal/vmware: add support for the GOVC_FOLDER option
When importing the OVA, a VM is also created, and users don't always have
permission to register it in the default folder.
2023-05-25 10:14:32 +02:00
Sanne Raymaekers
967306bc47 internal/upload: add import.ova support to vmware 2023-05-25 10:14:32 +02:00
Ondřej Budai
bd7f0741b2 upload/koji: always upload in the overwriting mode
We sometimes see the following error in the logs:
Fault(1000): upload path exists: /mnt/koji/work/osbuild-cg/osbuild-composer-koji-082e1c88/Fedora-IoT-38.raw.xz.

I think this happens when we retry the upload call of the first chunk due to
random network issues. The solution is to always upload in the overwriting
mode, which ignores the already existing file.
See https://pagure.io/koji/blob/175ecb5e8f3d45a1d244b227eb889321e5dd0a29/f/kojihub/kojihub.py#_15522

This is safe because:
1) We use UUIDs in the filename, which means that there should never be a real
   conflict.
2) The overwriting mode is actually the default mode in koji, see
   https://pagure.io/koji/blob/175ecb5e8f3d45a1d244b227eb889321e5dd0a29/f/koji/__init__.py#_3342

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2023-05-18 09:25:22 +02:00
Ondřej Budai
fdc4f54be8 upload/koji: add a retrying mechanism for CGImport
CGImport quite often fails with the following error:
Fault(1000): File size 735051776 for Fedora-IoT-38.raw.xz (expected 738785372)
doesn't match. Corrupted upload?

When I inspect the file manually, everything seems fine, though.
I believe this is because of NFS inconsistency when multiple DNS-balanced
kojihubs are used in the setup (which is what Fedora uses). The added
loop implements a retrying mechanism for the CGImport call to try again
whenever we see this issue.

Note that this isn't caught by the other HTTP retrying mechanisms because a failed
XMLRPC call returns code 200.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2023-05-18 09:25:22 +02:00
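A minimal Go sketch of the retry described in the commit above; the `cgImporter` interface, its method signature, and the error-text matching are assumptions for illustration, not the actual osbuild-composer code.

```go
package koji

import (
	"fmt"
	"strings"
	"time"
)

// cgImporter stands in for the koji client; name and signature are assumptions.
type cgImporter interface {
	CGImport(metadata, directory string) error
}

// cgImportWithRetry retries CGImport when kojihub reports the transient
// size-mismatch fault described above.
func cgImportWithRetry(k cgImporter, metadata, directory string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		err := k.CGImport(metadata, directory)
		if err == nil {
			return nil
		}
		// A failed XML-RPC call still returns HTTP 200, so the generic HTTP
		// retrier never sees it; retry only on the size-mismatch fault text.
		if !strings.Contains(err.Error(), "doesn't match") {
			return err
		}
		lastErr = err
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("CGImport failed after %d attempts: %w", attempts, lastErr)
}
```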
Ondřej Budai
943ead790e upload/azure: skip uploading empty pages
The size of the page blob is defined on creation and the blob is
zero-initialized. Therefore, we can just skip all the pages that contain
only zeros. This should save a lot of bandwidth when used on sparse files such as
operating system images. (:
2023-04-04 09:09:43 +02:00
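A sketch of the zero-skipping idea from the commit above, assuming the image size is already 512-byte aligned; the 4 MiB chunk size and the `uploadPages` callback (standing in for the SDK's page-upload call) are assumptions.

```go
package azure

import "io"

const chunkSize = 4 * 1024 * 1024 // upload granularity; an assumption for this sketch

// isAllZero reports whether a chunk contains only zero bytes.
func isAllZero(chunk []byte) bool {
	for _, b := range chunk {
		if b != 0 {
			return false
		}
	}
	return true
}

// uploadNonEmptyPages reads the image in chunks and calls uploadPages only for
// chunks that contain data. Because the page blob is zero-initialized at
// creation, all-zero chunks can simply be skipped.
func uploadNonEmptyPages(r io.Reader, uploadPages func(offset int64, data []byte) error) error {
	buf := make([]byte, chunkSize)
	var offset int64
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 && !isAllZero(buf[:n]) {
			if upErr := uploadPages(offset, buf[:n]); upErr != nil {
				return upErr
			}
		}
		offset += int64(n)
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```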
Ondřej Budai
abe6ccfb50 upload/azure: migrate from azure-storage-blob-go to azure-sdk-for-go
https://github.com/Azure/azure-storage-blob-go/ is deprecated; the main SDK
should now be used instead. Let's migrate the code. There should be no
functional changes.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2023-04-04 09:09:43 +02:00
Ondřej Budai
9beddf626f upload/azure: remove the MD5 sum check
It doesn't actually make any sense. For Page Blobs, Azure doesn't compute any
hashes. The MD5 sum is basically just a property, which we set with one call and
get with another.

See
https://stackoverflow.com/questions/42229153/how-to-check-azure-storage-blob-file-uploaded-correctly/69319211#69319211

for more info.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2023-04-04 09:09:43 +02:00
Brian C. Lane
7a4bb863dd Update deprecated io/ioutil functions
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:

ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp

All of the above are simple name changes; the function arguments and
results are exactly the same as before.

ioutil.ReadDir -> os.ReadDir

now returns an os.DirEntry, but the IsDir and Name functions work the
same. The difference is that the FileInfo must be retrieved with the
Info() function, which can also return an error.

These were identified by running:
golangci-lint run --build-tags=integration ./...
2023-03-07 09:22:23 -08:00
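A small runnable illustration of the one non-trivial substitution mentioned above: os.ReadDir returns os.DirEntry values, and the FileInfo now comes from Info(), which can itself return an error.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	entries, err := os.ReadDir(".")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// Previously the FileInfo was part of ioutil.ReadDir's result.
		info, err := e.Info()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(e.Name(), e.IsDir(), info.Size())
	}
}
```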
Diaa Sami
19f9ab7f58 koji: log unsuccessful requests only once 2023-03-02 15:48:12 +01:00
Diaa Sami
20c6fad7c2 osbuild-worker/koji: Add logging for koji requests/responses 2023-02-08 11:40:34 +01:00
Tomáš Hozza
4df3b0ca03 internal/upload/azure: make location optional in various methods
Make the `location` argument optional (it can now be an empty string "") in
`RegisterImage()` and `CreateStorageAccount()` methods.

If the provided `location` argument is an empty string, then the location
is determined from the provided Resource Group instead.

Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-10-27 19:33:43 +02:00
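A sketch of the fallback described above; the method name follows the "getting resource group location" commit below, but the interface and signatures here are assumptions.

```go
package azure

import (
	"context"
	"fmt"
)

// locationResolver stands in for the Azure client; the signature is an assumption.
type locationResolver interface {
	GetResourceGroupLocation(ctx context.Context, resourceGroup string) (string, error)
}

// resolveLocation falls back to the resource group's location when the caller
// passes an empty string.
func resolveLocation(ctx context.Context, c locationResolver, resourceGroup, location string) (string, error) {
	if location != "" {
		return location, nil
	}
	loc, err := c.GetResourceGroupLocation(ctx, resourceGroup)
	if err != nil {
		return "", fmt.Errorf("retrieving resource group location failed: %w", err)
	}
	return loc, nil
}
```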
Tomáš Hozza
641f7a7d29 internal/upload/azure: add method for getting resource group location
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
2022-10-27 19:33:43 +02:00
Brian C. Lane
561bbbbdf3 azure: storageErr is already azblob.StorageError type 2022-09-15 03:57:40 -07:00
Tomas Hozza
95e2e75851 worker/osbuild: stop handling VMDK stream-optimized conversion
Backward compatibility code handling the conversion of VMDK images to the
stream-optimized sub-format has been kept in the implementation since
PR#2529 [1], merged on May 4th 2022. Since that change, no API
implementation submits jobs that would hit this conversion code, because
VMDK images are already being produced in the desired sub-format.

On-premise deployments are expected to use the same composer and worker
versions. There are no composer / worker instances in production that are
not running the modified code.

Delete the backward compatibility code.

[1] https://github.com/osbuild/osbuild-composer/pull/2529
2022-07-01 18:55:01 +01:00
Ondřej Budai
caadee87ec azure: add an option to tag page blobs
We want to start tagging page blobs, so this commit adds a small tagging method
to our azure library and exposes it in the osbuild-upload-azure helper.

Example:

go run ./cmd/osbuild-upload-azure/ \
  -container azure-container \
  -image ./sample.vhd \
  -storage-access-key KEY \
  -storage-account account \
  -tag key:value \
  -tag hello:world \
  -tag bird:toucan

This commit also has to downgrade the azblob library version to 0.13 so the
API for blob tags is the same as the one currently shipped to Fedora.
This is suboptimal but it should unblock us for now.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2022-06-13 21:06:01 +02:00
Ondřej Budai
f71ca8f0ca azure: move the .vhd extension logic to the callers
It always felt wrong that the method uploaded the blob under a different name
than the one specified in the blob metadata.

This commit moves the responsibility of specifying the right extension to
the callers. The azure.EnsureVHDExtension helper was added to simplify this.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2022-06-13 21:06:01 +02:00
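A sketch of what a helper like azure.EnsureVHDExtension might look like: callers append the .vhd extension expected by Azure if it is missing. The exact implementation in the repo may differ.

```go
package azure

import "strings"

// EnsureVHDExtension appends the .vhd extension if the name doesn't already
// carry it, so the blob name matches what Azure expects for images.
func EnsureVHDExtension(name string) string {
	if strings.HasSuffix(strings.ToLower(name), ".vhd") {
		return name
	}
	return name + ".vhd"
}
```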
Diaa Sami
e773d4896b koji: fix excessive logging & monitoring
Update koji init & finalize to use custom leveled logging.
This mainly affects logging, but it also changes functionality slightly:
since init & finalize are now using the customCheckRetry, they are able
to retry the "TLS timeout" error.
2022-04-05 23:48:30 +02:00
Diaa Sami
7c4d74481a koji: fix excessive logging & monitoring
Update koji upload to use custom leveled logging; this only affects
logging.
Since uploading uses a different connection to send the chunks, it is
done separately in this commit.
2022-04-05 23:48:30 +02:00
Diaa Sami
ed5cd56c5a koji: promote relevant logs to Info for monitoring
Add support for promoting certain `Debug` log messages to `Info` so we
can monitor them while the logging level is set to `Info`; having it set
to `Debug` is far too noisy.
2022-04-05 23:48:30 +02:00
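A sketch of the promotion idea above: a logger satisfying the go-retryablehttp LeveledLogger method set that bumps a configured set of Debug messages up to Info. The promoted-message map and logrus wiring are assumptions for illustration.

```go
package koji

import "github.com/sirupsen/logrus"

// promotedLogger promotes selected Debug messages to Info so they stay
// visible when the global log level is Info.
type promotedLogger struct {
	promote map[string]bool // messages to promote; contents are an assumption
}

func (l *promotedLogger) Error(msg string, kv ...interface{}) { logrus.WithField("kv", kv).Error(msg) }
func (l *promotedLogger) Warn(msg string, kv ...interface{})  { logrus.WithField("kv", kv).Warn(msg) }
func (l *promotedLogger) Info(msg string, kv ...interface{})  { logrus.WithField("kv", kv).Info(msg) }

func (l *promotedLogger) Debug(msg string, kv ...interface{}) {
	if l.promote[msg] {
		// Promoted: logged at Info so monitoring can see it.
		logrus.WithField("kv", kv).Info(msg)
		return
	}
	logrus.WithField("kv", kv).Debug(msg)
}
```

Such a logger would then be handed to the retrying HTTP client as its leveled logger.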
Diaa Sami
e6475d0e0e koji: make function non-member
follow-up to PR-2397
2022-04-05 23:48:30 +02:00
Diaa Sami
68639b4bf9 koji: increment retry counter only when retrying 2022-03-26 09:33:36 +01:00
Diaa Sami
6b08b8ed63 koji: don't decrement retry counter on the first call
After examining the logic of the retryablehttp library, we found that the callback does not happen for the first HTTP call, so there is no need to decrement when counting.
2022-03-26 09:33:36 +01:00
Diaa Sami
3496efe70d koji: initialize retryable client properly
The previously used client had MaxRetries set to zero, so it was not
effectively retrying.
Fixes COMPOSER-1420
2022-03-26 09:33:36 +01:00
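A sketch of initializing go-retryablehttp so that retries actually happen: RetryMax must be non-zero. The concrete values below are illustrative, not the ones osbuild-composer ships.

```go
package koji

import (
	"net/http"
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

// newRetryingClient returns a standard *http.Client backed by a retrying
// round-tripper with a non-zero retry budget.
func newRetryingClient() *http.Client {
	rc := retryablehttp.NewClient()
	rc.RetryMax = 4 // zero here means no retries at all
	rc.RetryWaitMin = 1 * time.Second
	rc.RetryWaitMax = 30 * time.Second
	rc.Logger = nil // or a leveled logger, as in the logging commits above
	return rc.StandardClient()
}
```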
Diaa Sami
3ab2725042 koji: Reduce excessive logging by retryablehttp
Use LeveledLogger
Fixes COMPOSER-1394
2022-03-09 23:18:25 +00:00
Diaa Sami
e15998ced7 koji: add HTTP retries for uploads & init/finalize
and log the number of retries for trackability
Fixes #2335
2022-03-06 11:04:37 +01:00
Diaa Sami
c1ae5b0881 Relax TCP timeouts for koji connections
See COMPOSER-1354 and linked tickets
2022-02-10 14:58:10 +01:00
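A sketch of what relaxing the TCP/TLS timeouts for koji connections could look like using the standard library transport; the specific durations are assumptions, not the values from COMPOSER-1354.

```go
package koji

import (
	"net"
	"net/http"
	"time"
)

// relaxedTransport builds an HTTP transport with looser connect and
// handshake timeouts for slow koji hubs.
func relaxedTransport() *http.Transport {
	return &http.Transport{
		DialContext: (&net.Dialer{
			Timeout:   60 * time.Second, // TCP connect timeout
			KeepAlive: 30 * time.Second,
		}).DialContext,
		TLSHandshakeTimeout:   60 * time.Second,
		ResponseHeaderTimeout: 5 * time.Minute,
	}
}
```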
Roy Golan
bee932e222 Add support for OCI upload provider
Signed-off-by: Roy Golan <rgolan@redhat.com>
2022-01-28 15:16:47 +01:00
Juan Abia
c8cf835db3 gosec: G401, G501 - Weak cryptographic primitive
azure, koji and gcp use md5 hashes. Gosec is not happy with it, so we
create exceptions for them (G401, G501).
2021-12-13 12:17:30 +02:00
sanne
c43ad2b22a osbuild-service-maintenance: Clean up expired images 2021-12-03 00:14:09 +00:00
Thomas Lavocat
010a1f5022 worker: Configure AWS credentials in the worker 2021-10-14 02:10:54 +01:00
Ondřej Budai
1e2ba4da64 upload/azure: use cheaper storage accounts
Previously, we used RAGRS, which means that all our data was always replicated
to at least two regions for increased safety. This is cool but expensive, so this PR
switches the API to use LRS, which uses just one region.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-08-17 17:51:23 +02:00
Ondřej Budai
385648223d spec: drop hacks for Fedora 32
These are not needed anymore, yay!

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-07-05 11:16:08 +02:00
Achilleas Koutsou
6b3920783f rpmmd: move RPM metadata tooling to internal pkg
Move the OSBuildStagesToRPMs function, associated test, and RPM type
from the worker into the rpmmd subpackage. We will use this function in
the cloud API to compile the NEVRAs for the new metadata endpoint.
2021-06-29 09:33:05 +01:00
Ondřej Budai
579a5df698 upload/aws: add support for session tokens
If a user uses a temporary access key for login, a session token is also
needed.

This commit adds support for it to the internal aws library and also
to the osbuild-upload-aws helper. Note that this doesn't affect the main
osbuild-composer executable or the worker; there, everything should work
as before, and session tokens are still not supported. Something for a
follow-up if anyone needs it.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-06-28 13:14:19 +03:00
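A sketch of how a session token is passed through to the AWS SDK for Go v1: static credentials accept it as the third argument, and an empty string keeps the previous behaviour. The function name and region handling are assumptions.

```go
package awsupload

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newSession builds an AWS session from static credentials, optionally
// including a session token for temporary access keys.
func newSession(region, accessKeyID, secretAccessKey, sessionToken string) (*session.Session, error) {
	return session.NewSession(&aws.Config{
		Region:      aws.String(region),
		Credentials: credentials.NewStaticCredentials(accessKeyID, secretAccessKey, sessionToken),
	})
}
```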
Achilleas Koutsou
e5b28c0bb3 New upload target: AWS S3
Uploads an artifact to an S3 bucket and returns a presigned URL to allow
the user to download the file.

Although it uses a lot of common code with the AWS AMI upload target,
it's treated as a completely separate target.
2021-06-18 14:02:09 +01:00
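A sketch of the S3 target described above using the AWS SDK for Go v1: upload the artifact and hand back a presigned GET URL. The function name, key handling, and the 7-day expiry are assumptions for illustration.

```go
package awsupload

import (
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadAndPresign uploads a file to S3 and returns a presigned download URL.
func uploadAndPresign(sess *session.Session, bucket, key, filename string) (string, error) {
	f, err := os.Open(filename)
	if err != nil {
		return "", err
	}
	defer f.Close()

	uploader := s3manager.NewUploader(sess)
	if _, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   f,
	}); err != nil {
		return "", err
	}

	// Presign a GET so the user can download the artifact directly.
	req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	return req.Presign(7 * 24 * time.Hour)
}
```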
Tomas Hozza
075373a51e internal: Move GCP library to internal/cloud
The internal GCP library was originally placed into the `internal/upload`
directory, since its purpose was mainly to upload and import built
images to GCP.

The functionality of other cloud-provider-specific libraries is broader,
but scattered around the `internal/` directory based on purpose (e.g. in
`internal/boot` and `internal/upload`). Since all parts of a provider-specific
library usually share some common pieces (e.g. authentication), it makes
sense to consolidate them into a single package (e.g.
`internal/cloud/<provider>`).

Create `internal/cloud` directory, where all cloud-provider-specific
internal libraries should be consolidated. Start with GCP.

Signed-off-by: Tomas Hozza <thozza@redhat.com>
2021-03-15 16:48:40 +00:00
Tomas Hozza
53cde684d3 GCP: simplify calls to Compute Node API
Reduce the code related to Compute Node v1 API calls in a similar way to
how it is done in the API usage examples.

Signed-off-by: Tomas Hozza <thozza@redhat.com>
2021-03-12 12:17:02 +01:00
Tomas Hozza
7de2011beb GCP: refactor logging and storage cleanup
Originally, the internal GCP library in `internal/upload/gcp` was
logging various information and errors. Refactor the code to move all
logging to callers of the library. As a result, some methods now return
additional information to preserve the same amount of information being
logged for GCP.

Refactor methods to have only a single purpose and not do any extra work,
such as storage cleanup. Methods which create new resources now don't do
any cleanup at all. The caller is responsible for checking for any errors
and performing any necessary cleanup. The methods needed to perform the
cleanup are provided.

Modify worker's job implementation and GCP CLI tool to explicitly do all
necessary cleanup, including in case of errors.

Signed-off-by: Tomas Hozza <thozza@redhat.com>
2021-03-12 12:17:02 +01:00
Ondřej Budai
2e39d629a9 worker: add azure image upload target
This commit adds and implements org.osbuild.azure.image target.

Let's talk about the already implemented org.osbuild.azure target first:
The purpose of this target is to authenticate using the Azure Storage
credentials and upload the image file as a Page Blob. A Page Blob is basically
an object in storage and it cannot be directly used to launch a VM. To achieve
that, you need to define an actual Azure Image with the Page Blob attached.

For the cloud API, we would like to create an actual Azure Image that is
immediately available for new VMs. The new target accomplishes this.
To achieve it, the target must use a different authentication method: Azure OAuth.
The other important difference is that currently, the credentials are stored
on the worker and not in the target options. This should lead to better security
because we don't send the credentials over the network. In the future, we would
like to have a credential-less setup using workers in Azure with the right
IAM policies applied, but this requires more investigation and is not
implemented in this commit.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-03-06 15:40:48 +00:00
Ondřej Budai
4b031a4692 upload/azure: rename azure.go to azurestorage.go
This file contains a client for the Azure Storage API. As we will soon introduce a
client for the Azure API, we need a distinction here.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-03-06 15:40:48 +00:00
Ondřej Budai
4f66ab5d7c upload/azure: rename Image to PageBlob
The UploadImage method doesn't actually create an image. It creates a Page
Blob. A blob is something like an S3 object, but in Azure terminology. A Page
Blob is optimized for random access, and it's the only blob type
that can be used to create images.

This commit cleans up the terminology so it's less confusing.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-03-06 15:40:48 +00:00
Ondřej Budai
478f69e092 upload/azure: move UploadImage under a new StorageClient struct
We will soon introduce new methods to the storage client.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-03-06 15:40:48 +00:00
Ondřej Budai
f67ca8b616 azure: return an early error if unaligned
If the image size isn't aligned to 512 bytes, the Azure API returns a very
hard-to-understand error message. Let's do this check ourselves early so we can
return a sane error.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-03-06 15:40:48 +00:00
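A minimal sketch of the early alignment check described above; the function name and error wording are assumptions.

```go
package azure

import "fmt"

// ensureAligned rejects image sizes that aren't a multiple of 512 bytes, so
// the caller gets a sane error instead of an opaque Azure API response.
func ensureAligned(size int64) error {
	if size%512 != 0 {
		return fmt.Errorf("image size %d is not aligned to 512 bytes, cannot upload it as a page blob", size)
	}
	return nil
}
```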
Tomas Hozza
ff95059748 internal/upload: Add support for upload to GCP and CLI tool using it
Add new internal upload target for Google Cloud Platform and
osbuild-upload-gcp CLI tool which uses the API.

Supported features are:
- Authenticate with GCP using an explicitly provided JSON credentials
  file, or let the authentication be handled automatically by the
  Google cloud client library. The latter is useful e.g. when the worker
  is running in a GCP VM instance, which has permissions associated with
  it.
- Upload an existing image file into existing Storage bucket.
- Verify MD5 checksum of the uploaded image file against the local
  file's checksum.
- Import the uploaded image file into Compute Node as an Image.
- Delete the uploaded image file after a successful image import.
- Delete all cache files from storage created as part of the image
  import build job.
- Share the imported image with a list of specified accounts.

A GCP-specific image type is not yet added, since GCP supports importing
VMDK and VHD images, which osbuild-composer already supports.

Update go.mod, vendor/ content and SPEC file with new dependencies.

Signed-off-by: Tomas Hozza <thozza@redhat.com>
2021-02-25 18:44:21 +00:00
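A sketch of the upload step described in the commit above: if a JSON credentials file is given it is used explicitly, otherwise the client library resolves credentials on its own (e.g. from the GCE metadata service). The function and parameter names are assumptions, not the library's actual API surface.

```go
package gcp

import (
	"context"
	"fmt"
	"io"
	"os"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

// StorageObjectUpload uploads a local file into an existing Storage bucket.
func StorageObjectUpload(ctx context.Context, credsFile, bucket, object, filename string) error {
	var opts []option.ClientOption
	if credsFile != "" {
		opts = append(opts, option.WithCredentialsFile(credsFile))
	}
	client, err := storage.NewClient(ctx, opts...)
	if err != nil {
		return fmt.Errorf("creating storage client failed: %w", err)
	}
	defer client.Close()

	f, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer f.Close()

	w := client.Bucket(bucket).Object(object).NewWriter(ctx)
	if _, err := io.Copy(w, f); err != nil {
		return err
	}
	// Close flushes the upload; any remaining error surfaces here.
	return w.Close()
}
```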
Jozef Mikovic
0597ac48a7 upload/vmware: document uploadImage function 2021-02-16 19:06:01 +00:00
Jozef Mikovic
1a81489ef1 osbuild-worker: add target for upload to vmware
New upload target for VMware, similar to the ones for AWS and Azure,
allowing users to set credentials for their vSphere instance.
The commit also includes the function that performs the actual upload.
2021-02-16 19:06:01 +00:00
Major Hayden
2618e11bfe Apply tags to registered AMI
Adding the tag called `Name` to the AMI ensures that the name appears in
the *Name* column inside AWS' web console.

Fixes #1171.

Signed-off-by: Major Hayden <major@redhat.com>
2021-01-25 15:47:02 +01:00
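A sketch of attaching the Name tag to a freshly registered AMI with the AWS SDK for Go v1, so the name shows up in the console's Name column; the function name is an assumption.

```go
package awsupload

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// tagAMI applies the Name tag to a registered AMI.
func tagAMI(sess *session.Session, imageID, name string) error {
	_, err := ec2.New(sess).CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{aws.String(imageID)},
		Tags: []*ec2.Tag{
			{Key: aws.String("Name"), Value: aws.String(name)},
		},
	})
	return err
}
```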
Ondřej Budai
1b05192298 upload/azure: use the new azure/azblob API on Fedora 33+ & RHEL
Fedora 33 and rawhide got an updated version of the azblob library. Sadly, it
introduced an incompatible API change. This commit does the same thing as
a67baf5a did for kolo/xmlrpc:

We now have two wrappers around the affected part of the API. Fedora 32 uses
the wrapper around the old API, whereas Fedora 33 and 34 (and RHEL with its
vendored deps) use the wrapper around the new API. The switch is implemented
using go build flags and spec file magic.

See a67baf5a for more thoughts.

Also, there's v0.11.1-0.20201209121048-6df5d9af221d in go.mod, why?

The maintainers of azblob probably tagged the wrong commit as v0.12.0, which
breaks Go. The long v0.11.1-.* version is basically the proper v0.12.0 commit.
See https://github.com/Azure/azure-storage-blob-go/issues/236 for more
information.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2021-01-06 16:31:28 +01:00
Ondřej Budai
4548923a09 upload/aws: fix architecture for aarch64 images
Previously, composer wrongly set x86_64 architecture even for aarch64 images.
This commit fixes it.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>
2020-12-01 08:27:44 +01:00
Sanne Raymaekers
22c9f6af61 cloudapi: Share an ec2 snapshot/ami with an account 2020-11-26 13:08:18 +00:00