It turned out that the upload target was never adopted by the service,
so we are removing it as part of the upload code consolidation.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
With the upload code consolidation into osbuild/images, we need to make
sure to configure the logger used by the library, so that
osbuild-composer keeps logging the same (or similar) messages.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Delete the `internal/upload/koji` package and replace it with the
`pkg/upload/koji` package provided by `osbuild/images`.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This cleans up the linting results by adding checks for
integer underflow/overflow in several places, suppressing the warning
where the conversion has already been checked, and fixing the types
where possible.
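A minimal sketch of the kind of change involved (the names are
illustrative, and the suppression comment assumes a gosec-style overflow
check, rule number included only as an example):

    package worker

    import (
        "fmt"
        "math"
    )

    // toInt32 narrows an int to int32 only after an explicit range check,
    // which is the shape of fix the lint findings asked for.
    func toInt32(count int) (int32, error) {
        if count < 0 || count > math.MaxInt32 {
            return 0, fmt.Errorf("count %d does not fit into int32", count)
        }
        return int32(count), nil // #nosec G115 -- range checked above
    }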
This is similar to the depsolve job, and it shares the solver (which
supports locking, as does DNF itself). This will allow searching for
specific package names, names with globs, or names as substrings of
other names using * as the wildcard.
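To illustrate the three kinds of search terms (exact name, glob,
substring), here is a small standalone sketch using Go's path.Match; the
actual matching is done by the solver/DNF, not by code like this:

    package main

    import (
        "fmt"
        "path"
    )

    func main() {
        // "osbuild" is an exact name, "osbuild*" a glob, and "*compose*"
        // matches names that merely contain "compose".
        names := []string{"osbuild", "osbuild-composer", "vim-enhanced"}
        for _, pattern := range []string{"osbuild", "osbuild*", "*compose*"} {
            for _, name := range names {
                if ok, _ := path.Match(pattern, name); ok {
                    fmt.Printf("%-10s matches %s\n", pattern, name)
                }
            }
        }
    }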
Related: RHEL-60136
This is a bit of an RFC commit. I noticed that when we discussed
a crash from the worker, we looked at individual messages from
syslog/journald for the stacktrace details. I was wondering if
having a more direct crash report would be more useful? We could
of course also add more logrus features to flag those with tags
like "crash" or something (I did not do that in this PR, I don't
know much about the operational side, sorry).
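As a rough illustration of the idea (not something this PR implements),
the worker could recover from a panic and emit a single structured crash
entry via logrus, tagged so it is easy to search for; the field names
below are only examples:

    package main

    import (
        "runtime/debug"

        "github.com/sirupsen/logrus"
    )

    // runJob is a stand-in for the worker's job handling.
    func runJob(job func()) {
        defer func() {
            if r := recover(); r != nil {
                logrus.WithFields(logrus.Fields{
                    "event":      "crash",
                    "panic":      r,
                    "stacktrace": string(debug.Stack()),
                }).Error("worker job crashed")
            }
        }()
        job()
    }

    func main() {
        runJob(func() { panic("boom") })
    }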
We need the ability to use a different CloudWatch group for the
osbuild-executor on Fedora workers in the staging and production
environments.
Extend the worker configuration to allow configuring the CloudWatch group
name used by the osbuild-executor. Extend the secure instance code to
instruct cloud-init via user data to create a /tmp/cloud_init_vars file
with the CloudWatch group name on the osbuild-executor instance, making
it possible for the executor to configure its logging differently based
on that value.
Cover new changes by unit tests.
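As a rough sketch of the mechanism (the variable name written to
/tmp/cloud_init_vars is illustrative, not necessarily the exact one
used):

    package main

    import "fmt"

    // buildUserData sketches how the secure-instance user data could tell
    // cloud-init to record the CloudWatch group for the osbuild-executor.
    func buildUserData(cloudWatchGroup string) string {
        return "#!/bin/bash\n" +
            fmt.Sprintf("echo OSBUILD_EXECUTOR_CLOUDWATCH_GROUP=%q >> /tmp/cloud_init_vars\n", cloudWatchGroup)
    }

    func main() {
        fmt.Print(buildUserData("example-cloudwatch-group"))
    }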
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Drop `common.CurrentArch()` implementation and use
`arch.Current().String()` from the osbuild/images instead.
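For reference, the call site change looks roughly like this (assuming
the package is the one shipped under pkg/arch in osbuild/images):

    package main

    import (
        "fmt"

        "github.com/osbuild/images/pkg/arch"
    )

    func main() {
        // Previously: common.CurrentArch()
        fmt.Println(arch.Current().String()) // e.g. "x86_64"
    }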
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are a simple name change; the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns os.DirEntry values, but the IsDir and Name functions work
the same. The difference is that the FileInfo must be retrieved with the
Info() function, which can also return an error.
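For example, call sites that previously used the returned FileInfo
directly now have to go through Info():

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        entries, err := os.ReadDir(".") // was: ioutil.ReadDir(".")
        if err != nil {
            log.Fatal(err)
        }
        for _, entry := range entries {
            // Name() and IsDir() behave as before; the FileInfo must now
            // be fetched explicitly and can fail.
            info, err := entry.Info()
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(entry.Name(), entry.IsDir(), info.Size())
        }
    }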
These were identified by running:
golangci-lint run --build-tags=integration ./...
The default number of threads (16) is OK for the general use case.
However, RH IT asked us to lower the number of threads when uploading
images to Azure through a proxy server.
Make the number of threads configurable in the worker configuration and
default to the currently used value if it is not provided.
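A minimal sketch of the fallback (the names are illustrative, not the
actual config fields):

    package worker

    // azureUploadThreads returns the configured thread count, falling
    // back to the value that was hard-coded before this change.
    func azureUploadThreads(configured int) int {
        if configured <= 0 {
            return 16 // previous default
        }
        return configured
    }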
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend the worker's configuration to allow setting GCP Bucket to use
when uploading images to GCP. The value from the configuration is used
only if not provided in the TargetOptions of the job.
In GCP, the region of a bucket does not restrict which regions an image
can be imported into, so it is completely possible to use a single
bucket to import images into any and all regions.
Return an error in case no bucket name was set in the job nor in the
worker configuration.
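A sketch of the resulting lookup order (the names are illustrative):

    package worker

    import "errors"

    // pickGCPBucket prefers the bucket from the job's target options,
    // then the worker configuration, and fails if neither is set.
    func pickGCPBucket(jobBucket, configBucket string) (string, error) {
        if jobBucket != "" {
            return jobBucket, nil
        }
        if configBucket != "" {
            return configBucket, nil
        }
        return "", errors.New("no GCP bucket set in the job options or the worker configuration")
    }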
Previously, the internal `OSBuildJobImpl` structure defined only a
`GCPCreds` member. This is not practical once there is more than one
GCP-related variable.
Define a new `GCPConfiguration` structure, move the credentials variable
into it and use it in `OSBuildJobImpl` instead.
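Roughly (field names shown in reduced, illustrative form):

    package worker

    // GCPConfiguration groups all GCP-related worker settings so that
    // new ones (such as a bucket) can be added next to the credentials.
    type GCPConfiguration struct {
        Creds  string
        Bucket string
    }

    type OSBuildJobImpl struct {
        GCPConfig GCPConfiguration
        // ... other fields unchanged ...
    }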
Add a new cloud API test that builds an edge-container, uploads it to
the GitLab CI registry, fetches it from there, runs it, and verifies
that the OSTree commit contained in it is indeed the one we expect.
Co-Developed-By: Christian Kellner <christian@kellner.me>
If an `AuthFilePath` was configured (it should contain secrets for
accessing container registries), we set it on the `Client` so that the
secrets can be used during registry access.
Worker
------
Add configuration for the default container registry.
Use the default container registry if not provided as part
of the image name.
When using the default registry, use the configured values.
Return the image url as part of the result.
Composer Worker API
-------------------
Add `ContainerTargetResultOptions` to return the image url
Composer API
------------
Add UploadOptions to allow setting of the image name and tag
Add UploadStatus to return the url of the uploaded image
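A reduced sketch of the shapes involved (only `ContainerTargetResultOptions`
is the real name from above; the other names are illustrative, not the
exact generated API types):

    package worker

    // ContainerTargetResultOptions lets the worker report the image url
    // back through the composer worker API.
    type ContainerTargetResultOptions struct {
        URL string `json:"url"`
    }

    // ContainerUploadOptions and ContainerUploadStatus sketch the Composer
    // API additions: the caller picks the name/tag, the status reports the url.
    type ContainerUploadOptions struct {
        Name string `json:"name"`
        Tag  string `json:"tag"`
    }

    type ContainerUploadStatus struct {
        URL string `json:"url"`
    }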
Co-Developed-By: Christian Kellner <christian@kellner.me>
The Koji API removed by the previous commit was the last user of the
osbuild-koji job. Let's remove the job since nothing uses it anymore.
This also removes all of the compatibility code in the Cloud API; see
the concerns below:
Compatibility concerns:
- the internal deployment was moved to a completely different composer
instance, thus there are no old jobs
- the Fedora deployment is still unused in prod, thus we don't care about
  keeping backward compatibility for the old jobs
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The struct is factored out 1:1. The only functional change in this
commit is that the worker now logs a message when the config is missing
(in which case it just loads the defaults).
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This commit moves the field to the koji struct where it actually belongs.
Also, it renames it to relax_timeout_factor for the sake of consistency.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
The TOML library translates the field names 1:1, so currently you have to use:
[Composer]
proxy = "abcd"
This is not idiomatic TOML, though, so let's add the toml tag to make the section [composer].
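A sketch of the fix (struct shown in reduced, illustrative form):

    package config

    // With explicit toml tags the section can be written idiomatically
    // as [composer] with proxy = "..." instead of the 1:1 field names.
    type ComposerConfig struct {
        Proxy string `toml:"proxy"`
    }

    type Config struct {
        Composer ComposerConfig `toml:"composer"`
    }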
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Define supported job type names as constants and use them in all places,
instead of string literals.
There are multiple benefits to this approach. Using constants removes
the room for typos in the string literals, one can use IDE
autocompletion for job types, and constants make it easier to find all
references and thus all places that handle a specific job type.
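For illustration (the real constants are defined in the codebase; the
names and the subset here are illustrative):

    package worker

    // Job type names defined once and referenced everywhere instead of
    // repeating string literals.
    const (
        JobTypeOSBuild  string = "osbuild"
        JobTypeDepsolve string = "depsolve"
        JobTypeKojiInit string = "koji-init"
    )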
Use case
--------
If the Endpoint is not set and the Region is - upload to AWS S3
If both the Endpoint and the Region are set - upload to Generic S3 via the Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via the Composer API (using the configuration); see the sketch below
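A sketch of that decision (field names are illustrative):

    package worker

    // s3TargetKind shows how the endpoint/region combination selects the
    // upload path described above.
    func s3TargetKind(endpoint, region string) string {
        switch {
        case endpoint == "" && region != "":
            return "aws-s3" // plain AWS S3 upload
        case endpoint != "" && region != "":
            return "generic-s3-weldr" // parameters passed in the Weldr API call
        default:
            return "generic-s3-config" // parameters taken from the worker configuration
        }
    }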
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Before, the autoscaling group discovery was failing with this error:
Error getting the Autoscaling instances: MissingRegion MissingRegion:
could not find region configuration
Before, the instance was vulnerable to an OTA update while processing a
request. Because there is no way of retriggering a job in Composer, it
is better to avoid this situation.
We do this by setting the `protected` flag on the instance while a job
is being processed. This way the AWS scheduler hopefully does not shut
down the machine at the wrong time.
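A rough sketch of toggling that flag with the AWS SDK (session setup and
ASG lookup omitted; not necessarily the exact code in the worker):

    package worker

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/autoscaling"
    )

    // setScaleInProtection protects the instance while a job is being
    // processed and unprotects it again once the job finishes.
    func setScaleInProtection(asg *autoscaling.AutoScaling, group, instanceID string, protected bool) error {
        _, err := asg.SetInstanceProtection(&autoscaling.SetInstanceProtectionInput{
            AutoScalingGroupName: aws.String(group),
            InstanceIds:          []*string{aws.String(instanceID)},
            ProtectedFromScaleIn: aws.Bool(protected),
        })
        return err
    }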
Main caveats of this solution:
* Starvation: If a worker keeps accepting new jobs, then it might not be
updated.
* Inconsistency: There exists a window between job acceptance and
  protection during which the worker can be shut down without having
  time to protect itself.