Error handling is structured such that, typically, a ServiceErrorCode referring to an
internally defined error is passed through the echo HTTP error. We want to be able
to capture and return specific external errors as well, for example validation errors from openapi3.
Add a 'details' field to the serviceError struct to hold extra, externally defined
information. Modify HTTPErrorHandler to anticipate either a string or a ServiceErrorCode
from echo, and respond accordingly. Edit the affected tests to expect the appropriate responses.
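A minimal sketch of the extended struct and handler, assuming an echo-based API; the field names, the response shape, and the exact place the external error arrives (here `he.Message`) are illustrative assumptions, not the real implementation:

```go
package v2

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

// ServiceErrorCode identifies an internally defined error.
type ServiceErrorCode int

// serviceError gains a details field for extra, externally defined
// information, e.g. validation output from openapi3.
type serviceError struct {
	code       ServiceErrorCode
	httpStatus int
	reason     string
	details    interface{}
}

// HTTPErrorHandler anticipates either a ServiceErrorCode (internally
// defined errors) or a plain string (external errors, such as
// validation messages) from echo and responds accordingly.
func HTTPErrorHandler(echoError error, c echo.Context) {
	he, ok := echoError.(*echo.HTTPError)
	if !ok {
		_ = c.JSON(http.StatusInternalServerError, "unexpected error type")
		return
	}

	serr := serviceError{httpStatus: he.Code, reason: "request failed"}
	switch msg := he.Message.(type) {
	case ServiceErrorCode:
		serr.code = msg // internally defined error
	case string:
		serr.details = msg // externally defined error, kept in details
	}
	_ = c.JSON(serr.httpStatus, map[string]interface{}{
		"code":    serr.code,
		"reason":  serr.reason,
		"details": serr.details,
	})
}
```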
When a DepsolveError occurs, the error message would print the
packages in the request. When the request arguments changed, the
error-message handling wasn't updated and failed to produce the
correct error message.
Compile a list of packages from all transactions and print them in the
error message as a comma-separated list.
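A minimal sketch of the fix; the `transaction` type and the message wording are illustrative stand-ins for the real depsolve structures:

```go
package main

import (
	"fmt"
	"strings"
)

// transaction stands in for a depsolve transaction from the request;
// the real structures carry more than just package names.
type transaction struct {
	PackageSpecs []string
}

// depsolvePackageList compiles the packages from all transactions
// into one comma-separated list for the error message.
func depsolvePackageList(transactions []transaction) string {
	var pkgs []string
	for _, t := range transactions {
		pkgs = append(pkgs, t.PackageSpecs...)
	}
	return strings.Join(pkgs, ", ")
}

func main() {
	ts := []transaction{
		{PackageSpecs: []string{"openssh-server", "vim-enhanced"}},
		{PackageSpecs: []string{"chrony"}},
	}
	fmt.Printf("DepsolveError: failed to depsolve: %s\n", depsolvePackageList(ts))
}
```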
Stages are procedural and named after the tool they wrap, but pipelines are declarative and should
be named after the kind of artefact they produce.
This splits the qemu (the tool) pipeline into qcow2, vmdk, and vpc (the formats) pipelines. In theory
we may have wanted to implement this through some shared helpers, but for now it seems trivial
enough that it is not worth it.
The ideal is that the constructor takes mandatory properties as arguments, and fields in the struct
are all optional.
This clarifies that convention across the pipelines (or leaves TODOs where work remains) and,
where possible, makes fields optional by providing a valid default value.
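A minimal sketch of the convention, with illustrative names; the real pipelines carry many more fields:

```go
package pipeline

// BuildPipeline is a stand-in for the real build pipeline type.
type BuildPipeline struct{}

// OSPipeline follows the convention: mandatory properties are
// constructor arguments, everything else is an exported field with a
// valid default (zero) value.
type OSPipeline struct {
	build *BuildPipeline // mandatory, set by the constructor

	Kernel   string // optional: no kernel is installed when empty
	Hostname string // optional: left unset when empty
}

// NewOSPipeline takes only the mandatory properties; callers assign
// the optional fields on the returned struct afterwards.
func NewOSPipeline(build *BuildPipeline) *OSPipeline {
	return &OSPipeline{build: build}
}
```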
This adds more documentation and makes more properties implicitly inherited rather than
repeated, which makes for less boilerplate and gives us fewer things to keep in sync.
The OSPipeline may need to know what disk layout it will be put onto. Enforce this by making
the PartitionTable a property of the OSPipeline and requiring child pipelines to query it when needed.
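A hedged sketch of the arrangement, with simplified stand-in types:

```go
package pipeline

// PartitionTable is a simplified stand-in for disk.PartitionTable.
type PartitionTable struct{}

// OSPipeline owns the disk layout it will be installed onto.
type OSPipeline struct {
	partitionTable *PartitionTable
}

// PartitionTable lets dependent pipelines query the layout instead
// of carrying their own copy of it.
func (p *OSPipeline) PartitionTable() *PartitionTable {
	return p.partitionTable
}

// A child pipeline, e.g. one assembling the raw image, asks the OS
// pipeline for the layout when it needs it.
type RawImagePipeline struct {
	os *OSPipeline
}

func (p *RawImagePipeline) layout() *PartitionTable {
	return p.os.PartitionTable()
}
```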
The pipeline package exists conceptually between the distro and osbuild packages, so move
it to the top level rather than keeping it as a child of distro.
No functional change.
This is a collection of minor cleanups:
- Start documenting the API
- Enforce that dependent pipelines have the correct type where necessary
- Use data from dependent pipelines where possible
- Start enforcing required fields
- Move logic into the pipeline implementation where we can
This pulls out the OS pipeline without changing the parameters. The dependency
between the OS pipeline and the build pipeline is now explicit rather than by name.
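Roughly, the change looks like this (types and names are illustrative): the build pipeline used to be referenced by a manifest name that had to match; now it is held as a value and the name is derived from it:

```go
package pipeline

// BuildPipeline is a stand-in for the real build pipeline type.
type BuildPipeline struct {
	Name string
}

// OSPipeline holds the build pipeline it depends on, instead of a
// name that has to match an entry elsewhere in the manifest.
type OSPipeline struct {
	build *BuildPipeline
}

func NewOSPipeline(build *BuildPipeline) *OSPipeline {
	return &OSPipeline{build: build}
}

// When serializing, the manifest reference (e.g. "name:build") is
// derived from the dependency itself rather than hard-coded.
func (p *OSPipeline) buildRef() string {
	return "name:" + p.build.Name
}
```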
RPM Spec
--------
Remove all Go dependencies
Add Start and End marker comments for bundling information
Add '-k' to goprep to preserve the vendor directory
tools
-----
Add a script to update the RPM spec file, generating the bundling
indication lines based on vendor/modules.txt
Packit
------
Run the new script as a post-upstream-clone hook
Makefile
--------
Run the new script on the generated spec file before generating the RPM
mockbuild.sh
------------
Run the new script before creating the RPM
We only need one runner and it should use the internal network for
access to all repositories.
Set a rule so it doesn't run on 'main' (where it makes no sense).
Set git depth to 500:
We need a long history in order to find the merge-base between the PR
and 'main'. It's unclear whether there's a straightforward way to find
the depth of the PR to limit the clone depth accurately. 500 should be
enough for any PR (I'd hate to see a PR that makes this statement
false).
The script runs the gen-manifests command first on the PR head and then
on the merge-base with the PR's base branch (typically 'main') and
checks for any differences. It creates a review comment on the PR on
GitHub if any changes are detected.
The message is posted as a simple COMMENT type review to inform the
author and reviewers that changes exist.
The script doesn't fail if there's a diff. CI shouldn't fail if changes
are detected since they can be intentional. The job fails if something
goes wrong with the script execution (manifest generation, comment
posting, etc).
The script exits immediately if not run from a PR.
The gen-manifests run is silenced with `> /dev/null`. In the future,
this should be handled by flags on the command itself to control the
output verbosity.
The gen-manifests command is run with 50 workers. Testing with 100
seemed to make the execution stall, likely because of the resources on
the worker. We can experiment with this value more in the future.
Add a new option to `GenImagePrepareStages`, which is used by all
modern pipelines for partitioning, to optionally use the `sgdisk`
partitioning tool via `org.osbuild.sgdisk`.
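A minimal sketch of how the option might select the stage; the `PartTool` type and constants are assumptions, while `org.osbuild.truncate`, `org.osbuild.sfdisk`, and `org.osbuild.sgdisk` are real stage names:

```go
package main

import "fmt"

// PartTool selects the partitioning tool for the image preparation
// stages; the type and constant names are illustrative.
type PartTool string

const (
	PTSfdisk PartTool = "sfdisk"
	PTSgdisk PartTool = "sgdisk"
)

// genImagePrepareStages sketches how the option could pick the stage:
// the default remains sfdisk, and sgdisk is used only when requested.
func genImagePrepareStages(partTool PartTool) []string {
	stages := []string{"org.osbuild.truncate"}
	if partTool == PTSgdisk {
		stages = append(stages, "org.osbuild.sgdisk")
	} else {
		stages = append(stages, "org.osbuild.sfdisk")
	}
	return stages
}

func main() {
	fmt.Println(genImagePrepareStages(PTSgdisk))
}
```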
The `rpmmd.RepoConfig` configuration supports setting "package sets"
for each repository, which allows associating the individual repos
with specific package sets. Add a new `package_set` option to the
repo configuration of the compose request so that this feature can
be used.
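A sketch of what the repo configuration might look like with the new option; whether `package_set` takes one name or a list, and the other field names, are assumptions here:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Repository sketches the repo configuration in a compose request;
// PackageSet names the package sets this repository applies to.
type Repository struct {
	BaseURL    string   `json:"baseurl,omitempty"`
	PackageSet []string `json:"package_set,omitempty"`
}

func main() {
	repo := Repository{
		BaseURL:    "https://example.com/custom-repo",
		PackageSet: []string{"build"},
	}
	b, _ := json.Marshal(repo)
	fmt.Println(string(b))
}
```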
The worker server API handler `UploadJobArtifact()` was previously
silently discarding artifacts uploaded by the worker, if the server was
configured to not accept artifacts.
Change the behavior to return HTTP error "Bad Request" (`400`) to the
worker in case it tries to upload an artifact to a server that is
configured to not accept any artifacts.
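A minimal sketch of the changed handler, assuming an echo-based API and treating an empty artifacts directory as "artifacts not accepted"; the names and details are illustrative:

```go
package worker

import (
	"io"
	"net/http"
	"os"
	"path"

	"github.com/labstack/echo/v4"
)

// apiHandlers is a simplified stand-in for the server's handler type;
// an empty artifactsDir means the server does not accept artifacts.
type apiHandlers struct {
	artifactsDir string
}

func (h *apiHandlers) UploadJobArtifact(ctx echo.Context) error {
	if h.artifactsDir == "" {
		// Previously the request body was silently discarded here;
		// now the worker gets an explicit 400 back.
		return echo.NewHTTPError(http.StatusBadRequest,
			"this server does not accept artifacts")
	}

	f, err := os.Create(path.Join(h.artifactsDir, ctx.Param("name")))
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err := io.Copy(f, ctx.Request().Body); err != nil {
		return err
	}
	return ctx.NoContent(http.StatusOK)
}
```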
Add a new unit test covering the new behavior and adjust existing unit
tests that relied on the artifact previously being silently discarded.
When the Koji target support was added to the osbuild job, based on the
osbuild-koji job, the meaning of target option values got messed up.
The side effect of the issue is that when Koji composes are
submitted via the Cloud API, the resulting image is currently always
uploaded back to the worker server.
`OsBuildKoji` job
-----------------
- `OSBuildKojiJob.ImageName` is set to the filename of the image as
exported by osbuild.
- `OSBuildKojiJob.KojiFilename` is set to the desired filename which
should be used when uploading the image to Koji.
`OsBuild` job + `KojiTargetOptions` before
------------------------------------------
- `OSBuildJob.ImageName` is set to the filename of the image as exported
  by osbuild. This is done only by the Cloud API code for Koji composes.
  The Cloud API does not set this for regular composes with any other
  target. In the common case, the variable is set only by the Weldr API
  code with the same meaning, and it is used by the `OsBuild` job
  implementation as an indication that the image should be uploaded back
  to the worker server.
- `Target.ImageName` is not set at all. Other targets use it for the
desired filename which should be used when uploading the image to the
target environment.
- `KojiTargetOptions.Filename` is set to the desired filename which
should be used when uploading the image to Koji. All other target
types use `Filename` variable in their options for the filename of the
image as exported by osbuild.
`OsBuild` job + `KojiTargetOptions` after
-----------------------------------------
- `OSBuildJob.ImageName` is still set to the filename of the image as
  exported by osbuild. This is kept for backward compatibility of a new
  composer with older workers.
- `Target.ImageName` is set to the desired filename which should be used
when uploading the image to Koji.
- `KojiTargetOptions.Filename` is set to the filename of the image as
  exported by osbuild. (The sketch below condenses these assignments.)
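The corrected assignments, condensed into a hedged sketch with simplified struct shapes and invented filenames:

```go
package main

import "fmt"

// Simplified stand-ins for the job and target structures.
type KojiTargetOptions struct {
	Filename string // filename of the image as exported by osbuild
}

type Target struct {
	ImageName string // desired filename when uploading to Koji
	Options   KojiTargetOptions
}

type OSBuildJob struct {
	ImageName string // exported filename, kept for backward compatibility
	Targets   []Target
}

func main() {
	exported := "image.qcow2"      // filename as exported by osbuild
	kojiName := "compose-42.qcow2" // desired filename in Koji

	job := OSBuildJob{
		ImageName: exported,
		Targets: []Target{{
			ImageName: kojiName,
			Options:   KojiTargetOptions{Filename: exported},
		}},
	}
	fmt.Printf("%+v\n", job)
}
```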
This change is backward incompatible, meaning that an old worker won't
be able to handle Koji compose requests submitted via the Cloud API
using a new composer, and a new worker won't be able to handle Koji
compose requests submitted by an old composer. This is intentional:
after discussion with Ondrej Budai, the Cloud API Koji integration is
currently not used anywhere in production.