Drop the `common.CurrentArch()` implementation and use
`arch.Current().String()` from osbuild/images instead.
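A minimal sketch of the call-site change (the import path for the arch package is assumed):

    package main

    import (
        "fmt"

        "github.com/osbuild/images/pkg/arch" // assumed import path
    )

    func main() {
        // common.CurrentArch() returned the host architecture as a string;
        // arch.Current() returns a typed value, so String() yields the same form.
        fmt.Println(arch.Current().String())
    }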
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are simple renames; the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns []os.DirEntry, but the IsDir and Name methods work the
same. The difference is that the FileInfo must be retrieved with the
Info() method, which can also return an error.
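A short sketch of the difference (the directory path is just an example):

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // os.ReadDir returns []os.DirEntry instead of []os.FileInfo.
        entries, err := os.ReadDir(".")
        if err != nil {
            log.Fatal(err)
        }
        for _, entry := range entries {
            // Name() and IsDir() behave as before.
            fmt.Println(entry.Name(), entry.IsDir())
            // The full FileInfo now needs an extra call, which can fail.
            info, err := entry.Info()
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(info.ModTime())
        }
    }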
These were identified by running:
golangci-lint run --build-tags=integration ./...
The default number of threads (16) is fine for the general use case. However,
RH IT has asked us to lower the number of threads when
uploading the image to Azure through a proxy server.
Make the number of threads configurable in the worker configuration and
default to the currently used value if it is not provided.
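A hypothetical sketch of the fallback; the field and function names are made up for illustration, only the default of 16 comes from the text above:

    package worker

    // defaultAzureUploadThreads matches the previously hard-coded value.
    const defaultAzureUploadThreads = 16

    // AzureConfiguration stands in for the worker's Azure upload settings.
    type AzureConfiguration struct {
        UploadThreads int
    }

    // uploadThreads returns the configured thread count, or the default when
    // the configuration does not provide one.
    func uploadThreads(cfg AzureConfiguration) int {
        if cfg.UploadThreads > 0 {
            return cfg.UploadThreads
        }
        return defaultAzureUploadThreads
    }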
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend the worker's configuration to allow setting GCP Bucket to use
when uploading images to GCP. The value from the configuration is used
only if not provided in the TargetOptions of the job.
In GCP, the region of the bucket does not limit importing of the image
to a particular region, so it is entirely possible to use a single
bucket to import images into any and all regions.
Return an error if no bucket name was set in either the job or the
worker configuration.
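A hedged sketch of the resolution order described above (function and parameter names are illustrative):

    package worker

    import "fmt"

    // resolveGCPBucket prefers the bucket from the job's TargetOptions, falls
    // back to the worker configuration, and errors out when neither is set.
    func resolveGCPBucket(jobBucket, configBucket string) (string, error) {
        if jobBucket != "" {
            return jobBucket, nil
        }
        if configBucket != "" {
            return configBucket, nil
        }
        return "", fmt.Errorf("no GCP bucket set in the job target options or in the worker configuration")
    }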
Previously, the internal `OSBuildJobImpl` structure defined only a
`GCPCreds` member. This is not practical once there is more
than one GCP-related variable.
Define a new `GCPConfiguration` structure, move the credentials variable
into it and use it in `OSBuildJobImpl` instead.
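Roughly, the refactoring amounts to the following (a sketch; the actual field set and names may differ):

    package worker

    // Before: the credentials lived directly on the job implementation.
    //
    //     type OSBuildJobImpl struct {
    //         GCPCreds string
    //     }

    // GCPConfiguration groups the GCP-related worker settings so that further
    // variables (e.g. a bucket) have an obvious place to live.
    type GCPConfiguration struct {
        Creds string
    }

    type OSBuildJobImpl struct {
        GCPConfig GCPConfiguration
    }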
Add a new cloud API test that builds an edge-container,
uploads it to the GitLab CI registry, fetches it from there,
runs it and verifies that the OSTree commit contained in it
is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
If an `AuthFilePath` was configured, which should point to secrets
for accessing container registries, we set it on the `Client` so
that the secrets can be used during registry access.
Worker
------
Add configuration for the default container registry.
Use the default container registry if one is not provided as part
of the image name (see the sketch below).
When using the default registry, use the configured values.
Return the image URL as part of the result.
Composer Worker API
-------------------
Add `ContainerTargetResultOptions` to return the image URL
Composer API
------------
Add UploadOptions to allow setting the image name and tag
Add UploadStatus to return the URL of the uploaded image
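As a rough illustration of the default-registry lookup in the worker; the function and the heuristic for detecting an explicit registry host are made up for this sketch:

    package worker

    import "strings"

    // resolveImageName prepends the configured default registry when the image
    // name does not already name a registry host.
    func resolveImageName(name, defaultRegistry string) string {
        // Treat the first path element as a registry host if it contains a '.'
        // or ':' (e.g. "quay.io/org/image" or "localhost:5000/image").
        first := strings.SplitN(name, "/", 2)[0]
        if strings.ContainsAny(first, ".:") || first == "localhost" {
            return name
        }
        return defaultRegistry + "/" + name
    }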
Co-Developed-By: Christian Kellner <christian@kellner.me>
The Koji API removed by the previous commit was the last user of the osbuild-koji job.
Let's remove it since nothing uses it. This also removes all of the
compatibility code in the Cloud API; see the concerns below:
Compatibility concerns:
- the internal deployment was moved to a completely different composer
instance, thus there are no old jobs
- Fedora deployment is still unused in prod, thus we don't care about keeping
backward compatibility of the old jobs
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The struct is factored out 1:1. The only functional change in this commit is
that the worker now logs when the config is missing (in which case it just
loads the defaults).
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This commit moves the field to the koji struct where it actually belongs.
Also, it renames it to relax_timeout_factor for the sake of consistency.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The TOML library translates the field names 1:1, so now you have to use:
[Composer]
proxy = "abcd"
This is not idiomatic, though, so let's add a toml tag to make it [composer].
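A minimal sketch of the change (the surrounding struct is illustrative):

    package worker

    // Without tags the decoder matches the exported Go field names, forcing the
    // unidiomatic [Composer] header; the toml tags make [composer] and proxy work.
    type Config struct {
        Composer struct {
            Proxy string `toml:"proxy"`
        } `toml:"composer"`
    }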
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Define supported job type names as constants and use them in all places,
instead of string literals.
There are multiple benefits to this approach. Using constants removes
the room for typos in the string literals, one can use IDE autocompletion
for job types, and it is easier to find all references and thus all
places that handle a specific job type.
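A hedged sketch of the idea; the exact constant names and the full list live in the worker package:

    package worker

    // Job type names used when enqueuing, dequeuing and reporting jobs, instead
    // of scattering string literals across the code base.
    const (
        JobTypeOSBuild  = "osbuild"
        JobTypeDepsolve = "depsolve"
    )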
Use case
--------
If Endpoint is not set and Region is - upload to AWS S3
If both Endpoint and Region are set - upload to Generic S3 via the Weldr API
If neither Endpoint nor Region is set - upload to Generic S3 via the Composer API (use the configuration), as sketched below
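In code, the routing could look roughly like this (a sketch only; the real implementation dispatches on the target and upload options rather than returning a string):

    package worker

    // s3UploadKind captures the rules above: a region alone means AWS S3, an
    // endpoint plus a region means a generic S3 service configured by the
    // request (Weldr API), and neither means the generic S3 connection
    // parameters come from the worker configuration (Composer API).
    func s3UploadKind(endpoint, region string) string {
        switch {
        case endpoint == "" && region != "":
            return "aws-s3"
        case endpoint != "" && region != "":
            return "generic-s3-from-request"
        default:
            return "generic-s3-from-config"
        }
    }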
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from the Weldr or Composer API to either AWS or Generic S3
The Weldr API for Generic S3 requires that all connection parameters except the credentials be passed in the API call
The Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Before, the autoscaling group discovery was failing with the error:
Error getting the Autoscaling instances: MissingRegion MissingRegion:
could not find region configuration
Before, the instance was vulnerable to an OTA update while processing a
request. Because there is no way to retrigger a job in Composer, it
is better to avoid this situation.
We do this by setting the `protected` flag on the
instance when a job is being processed. This way the AWS scheduler
hopefully does not shut down the machine at the wrong time.
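A minimal sketch of setting scale-in protection with the AWS SDK for Go v1; the group name and instance ID are placeholders:

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/autoscaling"
    )

    func main() {
        sess := session.Must(session.NewSession())
        svc := autoscaling.New(sess)

        // Mark this instance as protected while a job is in flight so the
        // autoscaling group does not pick it for scale-in.
        _, err := svc.SetInstanceProtection(&autoscaling.SetInstanceProtectionInput{
            AutoScalingGroupName: aws.String("composer-workers"),               // placeholder
            InstanceIds:          []*string{aws.String("i-0123456789abcdef0")}, // placeholder
            ProtectedFromScaleIn: aws.Bool(true),
        })
        if err != nil {
            log.Fatal(err)
        }
    }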
Main caveats of this solution:
* Starvation: If a worker keeps accepting new jobs, then it might not be
updated.
* Inconsistency: There exists a window between job acceptance and
protection during which the worker can be shut down without having time
to protect itself.
In the internal deployment, we want to talk to composer over an http/https
proxy. This commit adds a new composer.proxy field to the worker config that
causes the worker to connect to composer and the OAuth server through
the specified proxy.
NB: The proxy is not supported when connecting to composer via unix sockets.
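A minimal sketch of pointing a Go HTTP client at such a proxy (the URL is illustrative):

    package main

    import (
        "log"
        "net/http"
        "net/url"
    )

    func main() {
        // An illustrative value of the new composer.proxy worker config field.
        proxyURL, err := url.Parse("http://proxy.example.com:3128")
        if err != nil {
            log.Fatal(err)
        }
        // Requests to composer and to the OAuth server go through the proxy.
        client := &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
        }
        _ = client
    }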
For testing this, I added a small HTTP proxy implementation; please don't
use it in production, it's just good enough for tests.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The address is always required, so not passing one is a clear error; let's
return exit code 2, which Go itself returns when bad arguments are passed in.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Refactor the handling of GCP credentials in the worker to be equivalent
to what is done for AWS. The main idea is that the code decides which
credentials to use when processing each job. This change allows
preferring credentials passed via the upload `TargetOptions` of the job
over the credentials configured in the worker's configuration, or falling
back to the default way of authenticating implemented by the Google library.
Move the loading of GCP credentials into a `NewFromFile()` function in the
internal `gcp` library, accepting the path to the file with the credentials.
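A hedged sketch of the precedence; the helper name is made up, only `NewFromFile()` comes from the text above:

    package worker

    // gcpCredentials picks the credentials for a job: credentials passed with
    // the job's TargetOptions win over those from the worker configuration,
    // and an empty result means the Google library's default authentication
    // is used.
    func gcpCredentials(jobCreds, configCreds string) string {
        if jobCreds != "" {
            return jobCreds
        }
        return configCreds
    }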
Signed-off-by: Tomas Hozza <thozza@redhat.com>
jobimpl-osbuild
---------------
Add GenericS3Creds to struct
Add method to create AWS with Endpoint for Generic S3 (with its own credentials file)
Move uploading to S3 and result handling to a separate method (along with the special VMDK handling)
Adjust the AWS S3 case to the new method
Implement a new case for uploading to a generic S3 service
awscloud
--------
Add wrapper methods for endpoint support
Set the endpoint on the AWS session
Set s3ForcePathStyle to true if an endpoint was set (see the sketch below)
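With the AWS SDK for Go v1, the endpoint and path-style settings look roughly like this (a sketch; the awscloud wrapper has its own shape, and the endpoint URL is illustrative):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // A custom endpoint is what makes a generic (non-AWS) S3 service usable;
        // path-style addressing avoids bucket-name DNS, which such services
        // typically do not provide.
        sess, err := session.NewSession(&aws.Config{
            Region:           aws.String("us-east-1"),              // illustrative
            Endpoint:         aws.String("https://s3.example.com"), // illustrative
            S3ForcePathStyle: aws.Bool(true),
        })
        if err != nil {
            log.Fatal(err)
        }
        _ = s3.New(sess)
    }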
Target
------
Define a new target type for the GenericS3Target and Options
Handle unmarshaling of the target options and result for the Generic S3
Weldr
-----
Add support for only uploading to AWS S3
Define new structures for AWS S3 and Generic S3 (based on AWS S3)
Handle unmarshaling of the providers settings' upload settings
main
----
Add a section in the main config for the Generic S3 service credentials
If provided, pass the credentials file name to the osbuild job implementation
Upload Utility
--------------
Add upload-generic-s3 utility
Makefile
--------
Do not fail if the bin directory already exists
Tests
-----
Add test cases for both AWS and a generic S3 server
Add a generic s3_test.sh file for both test cases and add it to the tests RPM spec
Adjust the libvirt test case script to support already created images
GitLab CI
---------
Extend the libvirt test case to include the two new tests
This will allow us to use the service accounts which work against
identity.api.openshift.com. These are much easier to manage, especially
with the new multi-tenancy, as there's a single page to create/expire
them across an account.
They also have the added benefit of not expiring automatically when
they're not used (unlike offline tokens), and of immediate expiration
when desired.
The format is now always 'JOB_ID (JOB_TYPE)'. This means that we also know
the job type when a job finishes or when it fails.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This is backwards compatible as long as the timeout is 0 (never
time out), which is the default.
In the case of the dbjobqueue, the underlying timeout manifests as
context.Canceled, context.DeadlineExceeded, or a net.Error with Timeout()
returning true. For the fsjobqueue only the first two are considered.
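A sketch of that classification (the helper is illustrative):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "net"
    )

    // isTimeout mirrors the dbjobqueue classification: context cancellation, an
    // exceeded deadline, or a net.Error that reports a timeout.
    func isTimeout(err error) bool {
        if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
            return true
        }
        var netErr net.Error
        return errors.As(err, &netErr) && netErr.Timeout()
    }

    func main() {
        fmt.Println(isTimeout(context.DeadlineExceeded)) // true
    }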