This adds a new field `Warnings` to the `ComposeReply`
struct, allowing the server to send back any warnings (e.g. deprecation
notices) generated during the `checkOptions` step of the
manifest initialization.
See also https://github.com/osbuild/weldr-client/pull/99 which
handles the weldr-client side of things.
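A minimal sketch of the change (the surrounding fields are illustrative,
not necessarily the exact shape of the existing struct):

    type ComposeReply struct {
        BuildID  uuid.UUID `json:"build_id"`
        Status   bool      `json:"status"`
        Warnings []string  `json:"warnings,omitempty"` // new field
    }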
Signed-off-by: Irene Diez <idiez@redhat.com>
This causes dnf-json to use separate caches, allowing depsolves for
different distributions to run in parallel, with one lock per
distribution. Multiple depsolves with the same distribution in the
blueprint will continue to be serial.
It is difficult to tell whether these are really running in parallel,
even with logging, but it helps. They all start at the same time
(because they are run concurrently in goroutines) and, if things work
correctly, they should finish at about the same time.
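Conceptually, the per-distribution serialization looks like this (a
sketch, not the actual implementation; the lock map and helper are
illustrative):

    var cacheLocks sync.Map // distro name -> *sync.Mutex

    func withDistroCache(distro string, depsolve func()) {
        lock, _ := cacheLocks.LoadOrStore(distro, &sync.Mutex{})
        mu := lock.(*sync.Mutex)
        mu.Lock() // depsolves for the same distro stay serial
        defer mu.Unlock()
        // runs against a per-distro dnf cache, so different
        // distros can proceed in parallel
        depsolve()
    }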
ioutil has been deprecated since Go 1.16; this fixes all of the
deprecated functions we are using:
ioutil.ReadFile -> os.ReadFile
ioutil.ReadAll -> io.ReadAll
ioutil.WriteFile -> os.WriteFile
ioutil.TempFile -> os.CreateTemp
ioutil.TempDir -> os.MkdirTemp
All of the above are a simple name change, the function arguments and
results are exactly the same as before.
ioutil.ReadDir -> os.ReadDir
now returns []os.DirEntry instead of []os.FileInfo. The IsDir and Name
methods work the same. The difference is that the FileInfo must be
retrieved with the Info() method, which can also return an error.
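For example, code that previously ranged over FileInfo values now needs
the extra call (illustrative):

    func listDir(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, entry := range entries {
            info, err := entry.Info() // FileInfo now needs an extra call that can fail
            if err != nil {
                return err
            }
            fmt.Println(entry.Name(), entry.IsDir(), info.Size())
        }
        return nil
    }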
These were identified by running:
golangci-lint run --build-tags=integration ./...
DNF supports more than one GPG key. It is possible that one key may be
used for signing packages and another for signing the repository
metadata. This renames GPGKey to GPGKeys internally. It does not change
the on-disk repository JSON format.
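Sketch of the rename (the field shape is assumed from the description
above):

    type repository struct {
        // before: GPGKey string
        GPGKeys []string // e.g. one key for packages, another for repo metadata
    }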
After introducing Go 1.18 to a project, it's required by law to convert at
least one method to a generic one.
Everyone hates IntToPtr, StringToPtr, BoolToPtr and Uint64ToPtr, so let's
convert them to the ultimate generic ToPtr one.
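The generic replacement is essentially:

    // ToPtr returns a pointer to a copy of its argument, replacing
    // IntToPtr, StringToPtr, BoolToPtr and Uint64ToPtr.
    func ToPtr[T any](v T) *T {
        return &v
    }

Usage is the same for every type: ToPtr(42), ToPtr("x"), ToPtr(true).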
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
When the store is written to disk it simplifies the ImageBuild details
into a simple image type string. This works fine for composes that match
the host's distro, but it does not preserve enough detail to load
composes made for other distros, especially if the image type name is
not supported on the host. This results in cross-distro compose results
being lost after a reboot.
This fix uses the distro information from the compose's blueprint to
determine which distro the image type should be loaded from. It assumes
that the architecture matches the host's arch -- this is currently
always true, but if cross-arch builds are added in the future it will
need to be addressed in a different way.
newComposeFromV0, newComposesFromV0, and newStoreFromV0 now take a
pointer to the full distro registry instead of an Arch. This allows them
to access the correct image types for the distro selected by the
blueprint. When loading the composes from disk the blueprint distro is
loaded from the registry before checking the image type string.
This means that we do not have to change the store version or on disk
format, the only thing changing is how it decides to populate the
ImageBuild when reloading the store.
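In rough terms, the lookup inside the store-loading code now goes
through the registry (a sketch; the helper names are assumptions, not
the actual code):

    // resolve the image type via the blueprint's distro, not the host's
    d := registry.GetDistro(compose.Blueprint.Distro)
    if d == nil {
        return fmt.Errorf("unknown distro %q", compose.Blueprint.Distro)
    }
    arch, err := d.GetArch(hostArch) // assumes compose arch == host arch
    if err != nil {
        return err
    }
    imageType, err := arch.GetImageType(v0ImageTypeString)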
A number of tests use a fake test distro with fake architecture names.
These tests have been adjusted to use a fake distro registry whose
overridden host architecture matches the fake one.
If params.Ref is an empty string, it's set to the distro's default
ref. The only difference here is that the default ref also gets
verified.
This makes it easier to split resolving ostree refs out into a new job.
In the weldr and cloud APIs, ostree.ResolveParams was always executed,
even for non-ostree image types. Make it more explicit by only resolving
if the image type is actually an ostree image.
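The explicit form is roughly (a sketch; using OSTreeRef() as the marker
for ostree-ness and the options field names are assumptions):

    // only ostree image types carry a default ref
    if defaultRef := imageType.OSTreeRef(); defaultRef != "" {
        ref, checksum, err := ostree.ResolveParams(params, defaultRef)
        if err != nil {
            return err
        }
        options.OSTree.Ref = ref
        options.OSTree.Parent = checksum
    }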
Previously, the user was expected to provide the Object name
when uploading an image to GCP. The object name does not matter much,
because the object is deleted once the image import finishes. Make
the specification of the Object name optional and generate it if not
provided.
Adjust the GCP Weldr test case to not provide the Object name when
uploading the image.
The user can still provide the Object name if needed.
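The generated name only has to be unique for the lifetime of the import,
so a fallback like this suffices (a sketch; field name and prefix are
assumptions):

    objectName := uploadSettings.Object
    if objectName == "" {
        // the object is deleted once the import finishes, so any unique name works
        objectName = fmt.Sprintf("composer-api-%s.tar.gz", uuid.New().String())
    }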
There is a desire to make the worker as "dumb" as possible. Therefore
the worker should not generate AWS object key names if they were not
provided in the job.
Modify the worker code to never generate the AWS object key and
instead set an error in case the object key was not provided.
Modify Weldr API implementation to generate the object key, if it was
not provided by the user. This is consistent with Cloud API
implementation.
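On the worker side this reduces to a simple check (a sketch; the field
name is an assumption):

    // worker: fail instead of inventing a key
    if targetOptions.Key == "" {
        return fmt.Errorf("AWS S3 object key is not set in the job")
    }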
Instead of using the ostree.RequestParams in the OSTreeImageOptions,
define a new struct specific to ImageOptions for the ostree parameters.
This is almost identical to the new ostree.CommitSpec but the meaning of
the parameters changes based on image type and it would not be clear if
the CommitSpec was used in all cases. For example, the parameters of
the new OSTreeImageOptions do not always refer to the same commit. The
URL and Checksum may point to a parent commit to be pulled in to base
the new commit on, while the Ref refers to the new commit that will be
built (which may have a different ref from the parent).
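A sketch of the new struct with the per-field meaning spelled out (field
names assumed from the description):

    type OSTreeImageOptions struct {
        // ImageRef is the ref of the new commit that will be built.
        ImageRef string

        // URL and FetchChecksum identify a parent commit to pull and
        // base the new commit on; its ref may differ from ImageRef.
        URL           string
        FetchChecksum string
    }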
The ostree.ResolveParams() function now returns two strings, the
resolved ref, which is replaced by the defaultRef if it's not specified
in the request, and the resolved parent checksum if a URL is specified.
The URL does not need to be returned since it's always the same as the
one specified in the request.
The function has been rewritten to make the logic clearer.
The docstring for the function has been rewritten to cover all use cases
and error conditions.
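The resulting shape of the function (a sketch consistent with the
description; the helpers VerifyRef and ResolveRef are assumptions):

    func ResolveParams(params RequestParams, defaultRef string) (ref, checksum string, err error) {
        ref = params.Ref
        if ref == "" {
            ref = defaultRef // the default ref is now verified too
        }
        if !VerifyRef(ref) {
            return "", "", fmt.Errorf("invalid ostree ref %q", ref)
        }
        if params.URL != "" {
            // resolve the parent commit checksum from the repo at URL;
            // the URL itself is not returned since it is unchanged
            checksum, err = ResolveRef(params.URL, ref)
        }
        return ref, checksum, err
    }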
With an empty or missing version number the commit message would not
include the version (which is set to 0.0.0 by calling Initialize). This
adds a call to Initialize() in the API code before constructing the
commit message. It also moves the check for non-empty blueprint name
into the Initialize call where it belongs.
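A sketch of what Initialize now takes care of (assumed shape):

    func (b *Blueprint) Initialize() error {
        if b.Name == "" {
            return fmt.Errorf("empty blueprint name not allowed")
        }
        if b.Version == "" {
            b.Version = "0.0.0" // what the commit message ends up showing
        }
        return nil
    }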
This satisfies the linter complaint about a potential Slowloris attack,
where headers are read slowly in an attempt to DoS the server.
The uses of ListenAndServe are only for testing purposes and are not run
in the production server so ignore the lint errors in
osbuild-mock-openid-provider.
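The fix is to construct an http.Server with an explicit header read
timeout instead of calling http.ListenAndServe directly (the timeout
value is illustrative):

    func listenAndServe(addr string, handler http.Handler) error {
        server := &http.Server{
            Addr:              addr,
            Handler:           handler,
            ReadHeaderTimeout: 5 * time.Second, // bounds how slowly a client may send headers
        }
        return server.ListenAndServe()
    }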
For each of the supported distros, start a goroutine to depsolve
'filesystem', which will preload the metadata, making subsequent
responses faster.
This is safe to do without limits because we only support a limited
number of distros, and without additional locking because this is
the same as hitting the API with multiple depsolve requests at the
same time.
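The warm-up is one goroutine per supported distro (a sketch; the
depsolve helper is an assumption):

    func preloadMetadata(supportedDistros []string) {
        for _, distroName := range supportedDistros {
            go func(name string) {
                // depsolving a tiny package set preloads the dnf metadata cache
                if _, err := depsolve(name, []string{"filesystem"}); err != nil {
                    log.Printf("metadata preload for %s failed: %v", name, err)
                }
            }(distroName)
        }
    }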
Instead of fetching all available packages from dnf-json and then
searching the results this uses SearchMetadata when a package name or
glob is passed to the API. It only uses FetchMetadata when fetching
the full list of packages.
This also fixes a bug where the error response to a projects/info
request used the id of 'ModulesError'. It now uses 'ProjectsError'.
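The dispatch is roughly (a sketch; the actual solver signatures differ
in detail):

    var packages rpmmd.PackageList
    var err error
    if len(names) > 0 {
        // a package name or glob was given: load only matching metadata
        packages, err = solver.SearchMetadata(repos, names)
    } else {
        // the full package list was requested
        packages, err = solver.FetchMetadata(repos)
    }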
The ellipsis operator was used as a hack to avoid needing to pass any
details as an argument, but it makes it less obvious what the final
object will actually look like. It also makes it impossible to pass an
array as details without getting a nested array.
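The nesting problem is inherent to Go variadics (a self-contained
illustration, unrelated to the actual function names):

    func details(vals ...interface{}) []interface{} { return vals }

    func main() {
        fmt.Println(details("a", "b"))           // [a b]
        fmt.Println(details([]string{"a", "b"})) // [[a b]] -- always nested
    }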
Fixes #2874
Since the `jobStatus` functions return a `JobInfo`
struct that contains the `JobStatus`, it makes sense
to rename the functions for the sake of consistency.
The number of return values from the `jobStatus`
function was growing and getting out of hand. Not
all return values were used in all cases,
so returning a single struct with the information
and status of a job makes more sense. Each caller
can then use the resulting fields as needed.
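Consolidated roughly like this (field set assumed from the description):

    // one struct instead of an ever-growing return list
    type JobInfo struct {
        JobType   string
        Channel   string
        Deps      []uuid.UUID
        JobStatus *JobStatus
    }

    // before: status, queued, started, deps, ... := s.jobStatus(id)
    // after:  info, err := s.jobInfo(id)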
The main reason is that there should be only one place where the
container resolution is happening, which is the worker, so that
we only have one central place to configure aspects of it, like
container credentials.
This is the first step to support embedding container images. Here
we add the `containers []container.Spec` argument to supply images
with resolved container specifications. For now all distros will
return an error in case a container is actually supplied since none
of them currently support embedding containers. NB: no APIs or tools
will actually resolve containers yet.
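Sketch of the signature change on the distro image types (the parameter
list is abbreviated and the helper is hypothetical):

    func (t *imageType) Manifest(
        customizations *blueprint.Customizations,
        options distro.ImageOptions,
        repos []rpmmd.RepoConfig,
        packageSets map[string][]rpmmd.PackageSpec,
        containers []container.Spec, // new: resolved container images to embed
        seed int64,
    ) (distro.Manifest, error) {
        if len(containers) > 0 {
            // none of the distros support embedding containers yet
            return nil, fmt.Errorf("embedding containers is not supported for %q", t.name)
        }
        // unchanged generation path (hypothetical helper)
        return t.manifest(customizations, options, repos, packageSets, seed)
    }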
The test_distro Manifest, which is used in tests across multiple
packages, was using the old structure. Updated to the v2 structure and
adapted all tests.
Weldr makes assumptions about the names of the package sets. This
does not work in all cases, so should be reworked, but for now just
do enough that we don't regress.
The package sets for an image can depend on the blueprint, and
by the same logic there is no reason it should not be able to
depend on the image options.
This is so far a non-functional change, but makes a follow-up
commit simpler (though still without actually depending on
the image options to compute the package sets).
The osbuild export is specific to the upload target and different
targets may require using a different export. While osbuild-composer
still does not support multiple exports for osbuild jobs, this prepares
the ground for such support in the future.
The backward compatibility with older implementations of the composer
and workers is kept at the JSON (un)marshaling level, where the JSON
message is always a super-set of the old and new way of providing the
exports to osbuild job.
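The trick is to (un)marshal through an auxiliary struct that carries
both the old single-export field and the new list, so either side can
read the message (a sketch; field and key names are illustrative):

    type osbuildJobJSON struct {
        Export  string   `json:"export,omitempty"`  // old single export
        Exports []string `json:"exports,omitempty"` // new multi-export form
        // ... remaining job fields
    }

    func (j OSBuildJob) MarshalJSON() ([]byte, error) {
        aux := osbuildJobJSON{Exports: j.Exports}
        if len(j.Exports) > 0 {
            aux.Export = j.Exports[0] // keep old workers working
        }
        return json.Marshal(aux)
    }

    func (j *OSBuildJob) UnmarshalJSON(data []byte) error {
        var aux osbuildJobJSON
        if err := json.Unmarshal(data, &aux); err != nil {
            return err
        }
        j.Exports = aux.Exports
        if len(j.Exports) == 0 && aux.Export != "" {
            j.Exports = []string{aux.Export} // message from an old composer
        }
        return nil
    }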
Stop relying on the server interpreting a set `ImageName` in the
`OSBuildJob` as a signal to upload the image back to the worker server
and add an explicit "Worker Server" upload target to the job.
The filename of the image as produced by osbuild for a given export is
currently set in each target options type in the `Filename` struct
member. However, the value is not really specific to any target type,
but to the specific export used for the target. For this reason, move
the value from the target type options to the `Target` struct, inside a
new struct `OsbuildArtifact`, under the name `ExportFilename`.
The backward compatibility with older implementations of the composer
and workers is kept at the JSON (un)marshaling level, where the JSON
object is always a super-set of the old and new way of providing the
export filename in the Target.
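Sketch of the new placement (names from the description; the rest of
Target is elided):

    type OsbuildArtifact struct {
        // Filename of the image as produced by osbuild for a given
        // export; previously a Filename field in each target's options.
        ExportFilename string `json:"export_filename"`
    }

    type Target struct {
        Name            string          `json:"name"`
        OsbuildArtifact OsbuildArtifact `json:"osbuild_artifact"`
        // ... remaining fields unchanged
    }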
The `Filename` previously set in the `gcpUploadSettings` does not
provide any value. It is the filename of the image as produced by
osbuild for a given export. It may not correspond with the object name
when the image is uploaded to GCP storage and may not even correspond
with the image name after it is imported to GCE. Stop setting the value
and remove the variable from data structures.
This change should not have any impact on backward compatibility,
because the field will be ignored when (un)marshaling.
The `TargetErrors` field has not been used since PR#2192 [1] and there
is no need to keep backward compatibility, because all composer / worker
instances in production are already running the modified code.
In addition, delete unit tests covering this legacy error handling.
[1] https://github.com/osbuild/osbuild-composer/pull/2192
Add a new generic container registry client via a new `container`
package. Use this to create a command line utility as well as a
new upload target for container registries.
The code uses the github.com/containers/* project and packages to
interact with container registries, which are also used by skopeo,
podman et al. One of the dependencies is `proglottis/gpgme`, which
uses cgo to bind libgpgme, so we have to add the corresponding
devel package to the BuildRequires as well as install it on CI.
Checks will follow later via an integration test.
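The heart of such a client with the containers/image packages is parsing
two references and copying between them; a trimmed sketch of the general
library usage (not the actual utility):

    import (
        "context"

        "github.com/containers/image/v5/copy"
        "github.com/containers/image/v5/signature"
        "github.com/containers/image/v5/transports/alltransports"
    )

    func pushImage(ctx context.Context, from, to string) error {
        srcRef, err := alltransports.ParseImageName(from) // e.g. "oci-archive:/tmp/image.tar"
        if err != nil {
            return err
        }
        destRef, err := alltransports.ParseImageName(to) // e.g. "docker://registry.example.com/repo:tag"
        if err != nil {
            return err
        }
        policyCtx, err := signature.NewPolicyContext(&signature.Policy{
            Default: signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()},
        })
        if err != nil {
            return err
        }
        defer policyCtx.Destroy()
        _, err = copy.Image(ctx, policyCtx, destRef, srcRef, &copy.Options{})
        return err
    }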
Define supported job type names as constants and use them in all places,
instead of string literals.
There are multiple benefits to this approach. Using constants removes
the room for typos in the string literals. One can use autocompletion in
an IDE for job types. Using constants makes it easier to find all
references to a job type and thus all places that handle a specific job
type.
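For example (a representative subset; the exact names may differ):

    const (
        JobTypeOSBuild      = "osbuild"
        JobTypeDepsolve     = "depsolve"
        JobTypeKojiInit     = "koji-init"
        JobTypeKojiFinalize = "koji-finalize"
    )

    // before: if jobType == "osbuild" { ... }
    // after:  if jobType == JobTypeOSBuild { ... }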
The `distro` package is now used for distro definitions supported by
osbuild-composer, not for introspecting the host system. Move
`GetRedHatRelease()` and `GetHostDistroName()` functions to the `common`
package.
Use case
--------
If Endpoint is not set and Region is - upload to AWS S3
If both the Endpoint and Region are set - upload to Generic S3 via Weldr API
If neither the Endpoint nor the Region is set - upload to Generic S3 via Composer API (use configuration)
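A self-contained sketch of that decision (types and names are
illustrative):

    type s3Options struct{ Endpoint, Region string }

    func s3TargetKind(o s3Options) string {
        switch {
        case o.Endpoint == "" && o.Region != "":
            return "aws" // plain AWS S3
        case o.Endpoint != "" && o.Region != "":
            return "generic-weldr" // Generic S3, parameters from the Weldr API call
        default:
            return "generic-config" // Generic S3, parameters from the worker configuration
        }
    }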
jobimpl-osbuild
---------------
Add configuration fields for Generic S3 upload
Support S3 upload requests coming from Weldr or Composer API to either AWS or Generic S3
Weldr API for Generic S3 requires that all connection parameters but the credentials be passed in the API call
Composer API for Generic S3 requires that all connection parameters are taken from the configuration
Adjust to the consolidation in Target and UploadOptions
Target and UploadOptions
------------------------
Add the fields that were specific to the Generic S3 structures to the AWS S3 one
Remove the structures for Generic S3 and always use the AWS S3 ones
Worker Main
-----------
Add Endpoint, Region, Bucket, CABundle and SkipSSLVerification to the configuration structure
Pass the values to the Server
Weldr API
---------
Keep the generic.s3 provider name to maintain the API, but unmarshal into awsS3UploadSettings
tests - api.sh
--------------
Allow the caller to specify either AWS or Generic S3 upload targets for specific image types
Implement the pieces required for testing upload to a Generic S3 service
In some cases generalize the AWS S3 functions for reuse
GitLab CI
---------
Add test case for api.sh tests with edge-commit and generic S3
Added a CleanCache() method to the solver that deletes all the caches if
the total size grows above a certain (configurable) limit
(default: 500 MiB).
The function is called externally so callers can handle errors (usually
by logging or ignoring them completely) and to avoid calling it multiple
times for multiple depsolves of a single request.
The cleanup is extremely simple and is meant as a placeholder for more
sophisticated cache management. The goal is to simply avoid ballooning
cache sizes that might cause issues for users or our own services.
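A sketch of the blunt policy (the real method lives on the solver; names
are illustrative):

    // cleanCache deletes the whole cache tree when its combined size
    // exceeds maxSize bytes (default 500 MiB).
    func cleanCache(root string, maxSize int64) error {
        var total int64
        err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.IsDir() {
                total += info.Size()
            }
            return nil
        })
        if err != nil {
            return err
        }
        if total > maxSize {
            return os.RemoveAll(root) // recreated on the next depsolve
        }
        return nil
    }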
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.