Update osbuild/images to the version which introduces "dot notation"
for distro release versions.
- Replace all uses of distroregistry by distrofactory.
- Delete local version of reporegistry and use the one from the
osbuild/images.
- Weldr: unify `createWeldrAPI()` and `createWeldrAPI2()` into a single
`createTestWeldrAPI()` function.
- store/fixture: rework fixtures to allow overriding the host distro
name and host architecture name. A cleanup function that restores the
host distro and arch names is always part of the fixture struct (see
the sketch after this list).
- Delete `distro_mock` package, since it is no longer used.
- Bump the required version of osbuild to 98, because the OSCAP
customization is using the 'compress_results' stage option, which is
not available in older versions of osbuild.
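A minimal sketch of the override-with-cleanup pattern used by the
reworked fixtures; all names here (hostDistroName, hostArchName,
NewFixture) are illustrative, not the actual store/fixture API:

    // illustrative package-level state that the fixtures override
    var hostDistroName = "fedora-39"
    var hostArchName = "x86_64"

    type Fixture struct {
        // Cleanup restores the original host distro and arch names and
        // must always be called when the test is done.
        Cleanup func()
    }

    func NewFixture(distroName, archName string) *Fixture {
        origDistro, origArch := hostDistroName, hostArchName
        hostDistroName, hostArchName = distroName, archName
        return &Fixture{
            Cleanup: func() {
                hostDistroName, hostArchName = origDistro, origArch
            },
        }
    }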
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
This adds a 'blueprint' section to the compose request. It also
restricts the request so that either 'blueprint' or 'customizations'
can be included, but not both. The goal is to move to using 'blueprint' for all
customizations so that there is a single consistent interface for the
clients.
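A minimal sketch of the mutual-exclusion check; the names below are
illustrative, the real handler works on the generated openapi request
types and returns a proper API error:

    import "errors"

    var errBlueprintAndCustomizations = errors.New(
        "only one of 'blueprint' or 'customizations' may be set, not both")

    func validateCustomizationSource(blueprint, customizations any) error {
        if blueprint != nil && customizations != nil {
            return errBlueprintAndCustomizations
        }
        return nil
    }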
Where the openapi schemas are the same between the two they have been
shared, but a few are different. Those are created with 'Blueprint*' as
their names.
This also re-adds the SSHKey schema removed by commit
bfad6d50e1; it is used by the Blueprint
Customization.
Allow passing the module_hotfixes flag through the cloudapi.
This will enable depsolving on repositories that might be affected by modularity filtering.
Refs HMS-3202
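A rough sketch of the pass-through with purely illustrative types; the
real code maps the openapi Repository onto the depsolver's repository
config:

    type apiRepository struct {
        BaseURL        string
        ModuleHotfixes *bool
    }

    type depsolveRepo struct {
        BaseURL        string
        ModuleHotfixes *bool
    }

    func toDepsolveRepo(r apiRepository) depsolveRepo {
        repo := depsolveRepo{BaseURL: r.BaseURL}
        // only forward the flag when the caller set it explicitly; dnf then
        // keeps packages from this repository despite modularity filtering
        if r.ModuleHotfixes != nil {
            repo.ModuleHotfixes = r.ModuleHotfixes
        }
        return repo
    }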
Adds implementation of the 'fdo.di_mfg_string_type_mac_iface' dracut
variable to allow simplified installer images to pass this value to the
manufacturing-client.service.
Fix the verbiage of the groups customization: fields which accept an
array should be plural.
Remove the sshkey customization; sshkeys are merged into user
customizations anyway, so users should use the "users" customization
instead.
Since these customizations aren't in use yet, this edit should be fine.
See #3716
Unresponsive workers (>=1 hour with no status update) are cleaned up.
Keeping track of workers enables several things; in the future the
worker server could:
- keep track of how many workers are active
- see if a worker for a specific architecture is available
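A simplified sketch of the cleanup, using an illustrative in-memory map
of workers keyed by ID (the real server keeps this state in its
database):

    import "time"

    type worker struct {
        ID            string
        Arch          string
        LastHeartbeat time.Time
    }

    // removeUnresponsiveWorkers drops workers that have not updated their
    // status for an hour or more.
    func removeUnresponsiveWorkers(workers map[string]worker, now time.Time) {
        for id, w := range workers {
            if now.Sub(w.LastHeartbeat) >= time.Hour {
                delete(workers, id)
            }
        }
    }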
Add the new upload_statuses field under image_status in the result of
the ComposeStatus object. The first status is also included in the old
top-level 'upload_status' property for backwards compatibility.
Tests are updated to match the new results.
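An approximate shape of the response, with the old single-status field
kept next to the new array (field and type names are simplified, not
the exact generated types):

    type UploadStatus struct {
        Type    string      `json:"type"`
        Status  string      `json:"status"`
        Options interface{} `json:"options"`
    }

    type ImageStatus struct {
        Status string `json:"status"`
        // UploadStatus mirrors the first entry of UploadStatuses and is kept
        // for backwards compatibility with existing clients.
        UploadStatus   *UploadStatus   `json:"upload_status,omitempty"`
        UploadStatuses *[]UploadStatus `json:"upload_statuses,omitempty"`
    }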
Test some valid and invalid combinations for the GetTargets() upload
target selection.
Includes tests with and without the upload options for the default
target.
Read the upload target types and options in the UploadTargets array of
the ImageRequest and initialise the Target array. If the top-level
(old) UploadOptions are also specified, prepend them to the array using
the image type's default target type.
Each upload target type is checked against a support map for
compatibility.
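A condensed sketch of the selection logic; the types, helper names and
the contents of the support map are illustrative:

    import "fmt"

    // supportedUploadTypes maps an image type name to the upload target
    // types it can be uploaded to.
    var supportedUploadTypes = map[string]map[string]bool{
        "guest-image": {"aws.s3": true},
        "edge-commit": {"aws.s3": true, "pulp.ostree": true},
    }

    type uploadTarget struct {
        Type    string
        Options interface{}
    }

    func getTargets(imageType, defaultType string, oldOptions interface{},
        requested []uploadTarget) ([]uploadTarget, error) {
        var targets []uploadTarget
        // the old top-level upload_options always go to the image type's
        // default upload target and are prepended to the list
        if oldOptions != nil {
            targets = append(targets, uploadTarget{Type: defaultType, Options: oldOptions})
        }
        for _, ut := range requested {
            if !supportedUploadTypes[imageType][ut.Type] {
                return nil, fmt.Errorf("upload target %q is not supported for image type %q", ut.Type, imageType)
            }
            targets = append(targets, ut)
        }
        return targets, nil
    }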
Add an array of targets in the imageRequest and return an array from
ImageRequest.GetTargets() (renamed from GetTarget()). Currently, the
function still only returns one target, the default for the image type
with the top level upload options.
Separate the target selection in GetTarget() into two steps. First
determine the default target name for the image type and then use the
name to initialise the target object. This is a bit more work (and
double switching) but will be needed to support selecting targets
externally.
Separate the handling of each individual target type into its own
function called by GetTarget()'s case switch. This makes the function
more readable and the target object creation reusable.
Added an empty line after each creation of irTarget to make it easier to
visually distinguish the cases that fall through.
Add an upload_targets field to the image request. This lets the API
caller specify multiple upload targets and upload options to be used.
If the upload target type does not match the upload options, the request
is invalid.
For backwards compatibility, the upload targets field is optional. If
it is not specified, the default upload target and upload options for
the image type are assumed, which is the same as the old behaviour.
Adding an explicit selection to the request makes it possible to support
multiple upload targets for the same image type. We plan to support
ostree commits being uploaded to both aws.s3 and pulp.
To report on the multiple upload requests, we add an upload_statuses
field to the ImageStatus response.
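Approximate request-side shape of the new field (simplified; the real
definitions live in the openapi spec and the generated code):

    type UploadTarget struct {
        // Type must match the upload options, e.g. "aws.s3".
        Type          string      `json:"type"`
        UploadOptions interface{} `json:"upload_options"`
    }

    type ImageRequest struct {
        Architecture string `json:"architecture"`
        ImageType    string `json:"image_type"`
        // UploadOptions is the old single-target field; when set, it is used
        // with the image type's default upload target.
        UploadOptions interface{}     `json:"upload_options,omitempty"`
        UploadTargets *[]UploadTarget `json:"upload_targets,omitempty"`
    }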
There are a lot of repeated checks for bp.Customizations != nil; this
simplifies that by creating an empty blueprint.Customizations at the
top, checking whether it is still empty at the bottom, and setting it
back to nil if so.
Includes a new test for calling with an empty (not nil)
v2.Customizations set on the request.
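A sketch of the pattern with a single illustrative field; the real
function copies many more customizations:

    type requestCustomizations struct {
        Hostname *string
    }

    type bpCustomizations struct {
        Hostname *string
    }

    func convertCustomizations(req requestCustomizations) *bpCustomizations {
        c := &bpCustomizations{}
        if req.Hostname != nil {
            c.Hostname = req.Hostname
        }
        // ... more fields are copied here ...

        // if nothing was set, drop the empty struct so that callers can keep
        // checking for nil
        if *c == (bpCustomizations{}) {
            return nil
        }
        return c
    }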
Include the osbuild/images module version in the Manifest job result.
The module has a direct impact on image definitions and the content of
the produced manifest, therefore including this information in the
Manifest job result is very helpful for various purposes (debugging,
traceability).
This will make it possible to embed this information in the Koji build
metadata.
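One way to obtain the module version at runtime, sketched with the
standard library's build info (how the worker actually records it may
differ):

    import "runtime/debug"

    // imagesModuleVersion returns the osbuild/images version recorded in the
    // binary's build info, or "" if it cannot be determined.
    func imagesModuleVersion() string {
        info, ok := debug.ReadBuildInfo()
        if !ok {
            return ""
        }
        for _, dep := range info.Deps {
            if dep.Path == "github.com/osbuild/images" {
                return dep.Version
            }
        }
        return ""
    }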
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Extend the Manifest job result structure to hold information about the
osbuild-composer version which produced the manifest. This will be
useful for other job types which depend on it and can then push this
information further as needed.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Add information about the osbuild artifact to the target result.
Specifically, the name of the osbuild pipeline which was exported for the
specific target, and the filename of the exported file.
This will later enable embedding this information in Koji build metadata
to make it easy to reproduce the image build using the attached
manifest.
The `KojiTargetResultOptions` previously contained information only
about the uploaded image file. And even then, some information, such as
the filename, was scattered across other structures, such as the
`KojiFinalizeJob` struct.
Since the plan is to also start uploading the osbuild manifest and the
osbuild build log to Koji, we need to extend the result options
structure to hold more information and to make it clear which file each
piece of information relates to.
Rework the `KojiTargetResultOptions` to contain information about:
- the built image
- build log
- osbuild manifest
Information about each file contains:
- filename
- checksum type
- file checksum
- file size
For now, only the built image information is set and consumed by the
worker.
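An approximate shape of the reworked options; the field and type names
are illustrative, not the exact worker API:

    type KojiFileInfo struct {
        FileName     string `json:"filename"`
        ChecksumType string `json:"checksum_type"` // e.g. "md5"
        Checksum     string `json:"checksum"`
        Size         uint64 `json:"size"`
    }

    type KojiTargetResultOptions struct {
        Image    *KojiFileInfo `json:"image,omitempty"`
        Log      *KojiFileInfo `json:"log,omitempty"`
        Manifest *KojiFileInfo `json:"osbuild_manifest,omitempty"`
    }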
Add a custom JSON (un)marshaler for `KojiTargetResultOptions` to handle
backward compatibility when an old version of the worker or the
composer server interacts with the other. Cover them with unit tests.
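A condensed sketch of the compatibility unmarshaler, built on the
illustrative types above; the old field names are assumptions, and the
real implementation also handles marshaling:

    import "encoding/json"

    // oldKojiTargetResultOptions mirrors the assumed pre-rework wire format.
    type oldKojiTargetResultOptions struct {
        ImageMD5  string `json:"image_md5"`
        ImageSize uint64 `json:"image_size"`
    }

    func (o *KojiTargetResultOptions) UnmarshalJSON(data []byte) error {
        // try the new format first; the type alias avoids infinite recursion
        type newFormat KojiTargetResultOptions
        var n newFormat
        if err := json.Unmarshal(data, &n); err != nil {
            return err
        }
        if n.Image != nil {
            *o = KojiTargetResultOptions(n)
            return nil
        }
        // fall back to the old format sent by an older worker or server
        var old oldKojiTargetResultOptions
        if err := json.Unmarshal(data, &old); err != nil {
            return err
        }
        o.Image = &KojiFileInfo{
            ChecksumType: "md5",
            Checksum:     old.ImageMD5,
            Size:         old.ImageSize,
        }
        return nil
    }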
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
These use 'pkg1' when depsolving, so they need an entry in the manifest
with the mocked checksum:
sha256:e50ddb78a37f5851d1a5c37a4c77d59123153c156e628e064b9daa378f45a2fe
Add support to the cloudapi for generating the tailoring file used
to customize the OpenSCAP remediation. This allows users to select and
unselect rules for the remediation; the `autotailor` stage generates
the tailoring file.
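An approximate shape of the tailoring customization; the field names
follow the commit's description, not necessarily the exact openapi
schema:

    type OpenSCAPTailoring struct {
        // rules to additionally select or unselect on top of the profile
        Selected   *[]string `json:"selected,omitempty"`
        Unselected *[]string `json:"unselected,omitempty"`
    }

    type OpenSCAPCustomization struct {
        ProfileID string             `json:"profile_id"`
        Tailoring *OpenSCAPTailoring `json:"tailoring,omitempty"`
    }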
During development it can be very useful to store the results locally
instead of uploading them to a remote system. This implements a
development-only option to help with that.
To use it you need to add OSBUILD_LOCALSAVE to the server's environment.
This can be done by editing /usr/lib/systemd/system/osbuild-composer.service
and adding:
Environment="OSBUILD_LOCALSAVE=1"
You can then use an 'upload_options' object to skip trying to upload to
the default service for the type of image, e.g.:
"image_requests": [
{
"architecture": "x86_64",
"image_type": "guest-image",
"upload_options": {
"local_save": true
},
...
}]
The results will be saved to /var/lib/osbuild-composer/artifacts/UUID/
using the default filename for the image type.
If local_save is used without OSBUILD_LOCALSAVE being set it will return
an error with id=36 saying 'local_save is not enabled'.
This moves some of the code from the PostCompose function in handler.go
into methods on the OpenAPI ComposeRequest and ImageRequest structs.
In compose.go I have added several methods.
GetBlueprintWithCustomizations takes the ComposeRequest customizations
and builds a Blueprint struct.
GetPayloadRepositories returns the custom payload repos.
GetSubscription returns the ImageOptions set up with optional
subscription information from the request.
In imagerequest.go I have added GetTarget which takes the upload
options and returns a Target. This moves the giant switch statement,
which may also benefit from further simplification at some point.
GetOSTreeOptions returns the OSTree ImageOptions if there are ostree
settings in the ImageRequest.
GetImageOptions returns the distro.ImageOptions with the size set.
This commit only moves the code, making PostCompose easier to read. All
tests still pass.
cloudapi: Move the size handling to a method on ImageRequest
This adds a 'size' parameter to the image_request object. It can be used
to specify the minimum image size in bytes.
This behaves in the same way as the size parameter of the weldr API.
For raw images the root partition is grown to fill the available space.
For LVM images the PV uses the available space, but the LV does not,
leaving space available for other LVs to be created after boot.
See COMPOSER-1883
Remove all the internal packages that are now in the
github.com/osbuild/images package and vendor it.
A new function in internal/blueprint/ converts from an osbuild-composer
blueprint to an images blueprint. This is necessary because the
blueprint implementation is kept in both packages. In the future, the images
package will change the blueprint (and most likely rename it) and it
will only be part of the osbuild-composer internals and interface. The
Convert() function will be responsible for converting the blueprint into
the new configuration object.
The duration middleware should come after the tenant channel middleware;
otherwise the tenant in the context will be empty. The status middleware
can come beforehand because it queries the request context right before
sending a response.
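A minimal echo-style sketch of why the order matters; the middleware
names and the "tenant" context key are illustrative:

    import (
        "time"

        "github.com/labstack/echo/v4"
    )

    // tenantMiddleware stores the tenant in the request context.
    func tenantMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            c.Set("tenant", "example-tenant")
            return next(c)
        }
    }

    // durationMiddleware records the per-tenant request duration; it has to
    // run after tenantMiddleware, otherwise c.Get("tenant") is still nil.
    func durationMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            start := time.Now()
            err := next(c)
            _ = c.Get("tenant")   // used as a metrics label in the real code
            _ = time.Since(start) // recorded in a histogram in the real code
            return err
        }
    }

    func setup(e *echo.Echo) {
        e.Use(tenantMiddleware)   // registered first, so it wraps the rest
        e.Use(durationMiddleware) // sees the tenant set above
    }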