We only need one runner and it should use the internal network for
access to all repositories.
Set a rule so it doesn't run on 'main' (running the check there makes no
sense).
Set git depth to 500:
We need a long history in order to find the merge-base between the PR
and 'main'. It's unclear whether there's a straightforward way to find
the depth of the PR to limit the clone depth accurately. 500 should be
enough for any PR (I'd hate to see a PR that makes this statement
false).
The script runs the gen-manifests command first on the PR head and then
on the merge-base with the PR's base branch (typically 'main') and
checks for any differences. It creates a review comment on the PR on
GitHub if any changes are detected.
The message is posted as a simple COMMENT type review to inform the
author and reviewers that changes exist.
The script doesn't fail if there's a diff. CI shouldn't fail if changes
are detected since they can be intentional. The job fails if something
goes wrong with the script execution (manifest generation, comment
posting, etc).
The script exits immediately if not run from a PR.
The gen-manifests run is silenced with `> /dev/null`. In the future,
this should be handled by flags to the command itself to control the
verbosity of its output.
The gen-manifests command is run with 50 workers. Testing with 100
seemed to make the execution stall, likely because of the resources
available on the worker.
We can experiment with this value more in the future.
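For illustration, here is a minimal Go sketch of the check's core logic.
The actual CI job is a shell script, and the `gen-manifests` flags and
output paths below are assumptions:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run executes a command and returns its trimmed stdout, aborting on error.
func run(name string, args ...string) string {
	var out bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &out
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
		os.Exit(1)
	}
	return strings.TrimSpace(out.String())
}

func main() {
	// Requires enough clone depth for the merge-base to be present.
	base := run("git", "merge-base", "HEAD", "origin/main")

	// Generate manifests for the PR head, then for the merge-base.
	// The -workers and -output flags are hypothetical here.
	run("gen-manifests", "-workers", "50", "-output", "/tmp/head")
	run("git", "checkout", base)
	run("gen-manifests", "-workers", "50", "-output", "/tmp/base")

	// diff exits non-zero when the trees differ; that is the signal to
	// post a review comment, not a reason to fail the job.
	if err := exec.Command("diff", "-r", "/tmp/base", "/tmp/head").Run(); err != nil {
		fmt.Println("manifests changed; posting a PR comment")
	}
}
```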
Add a new option to `GenImagePrepareStages`, which is used by all
modern pipelines for partitioning, to optionally use the `sgdisk`
partitioning tool via `org.osbuild.sgdisk`.
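A hedged sketch of the idea with simplified names; the real
`GenImagePrepareStages` signature takes more parameters and returns
stage objects rather than plain strings:

```go
package main

import "fmt"

// PartTool selects which osbuild stage partitions the disk image.
type PartTool string

const (
	PartToolSfdisk PartTool = "org.osbuild.sfdisk"
	PartToolSgdisk PartTool = "org.osbuild.sgdisk"
)

// GenImagePrepareStages returns the names of the stages that prepare
// the raw image: truncate the file, then partition it with the chosen
// tool, defaulting to sfdisk.
func GenImagePrepareStages(filename string, tool PartTool) []string {
	stages := []string{"org.osbuild.truncate"}
	if tool == PartToolSgdisk {
		return append(stages, "org.osbuild.sgdisk")
	}
	return append(stages, "org.osbuild.sfdisk")
}

func main() {
	fmt.Println(GenImagePrepareStages("disk.img", PartToolSgdisk))
}
```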
The `rpmmd.RepoConfig` configuration supports setting "package sets"
for each repository, which allows associating individual repos with
specific package sets. Add a new `package_set` option to the repo
configuration of the compose request so that this feature can be used.
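A minimal sketch of how such a repo configuration could look on the
wire; the field and JSON key names (`PackageSets`, `package_sets`) are
illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Repository is a simplified stand-in for the compose request's repo
// configuration; the real type has many more fields.
type Repository struct {
	BaseURL     string   `json:"baseurl,omitempty"`
	PackageSets []string `json:"package_sets,omitempty"` // e.g. ["build"]
}

func main() {
	raw := `{"baseurl": "https://example.com/repo", "package_sets": ["build"]}`
	var repo Repository
	if err := json.Unmarshal([]byte(raw), &repo); err != nil {
		panic(err)
	}
	// This repo is now only used when depsolving the "build" set.
	fmt.Printf("%+v\n", repo)
}
```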
The worker server API handler `UploadJobArtifact()` was previously
silently discarding artifacts uploaded by the worker, if the server was
configured to not accept artifacts.
Change the behavior to return HTTP error "Bad Request" (`400`) to the
worker if it tries to upload an artifact while the server is configured
to not accept any artifacts.
Add a new unit test covering the new behavior and adjust existing unit
tests that relied on the artifact being silently discarded.
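A minimal sketch of the changed behavior, assuming an echo-based
handler; the `server` type and `artifactsDir` field are illustrative:

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

type server struct {
	artifactsDir string // empty means the server does not accept artifacts
}

// UploadJobArtifact previously read and silently discarded the body
// when artifactsDir was unset; now it rejects the upload outright.
func (s *server) UploadJobArtifact(ctx echo.Context) error {
	if s.artifactsDir == "" {
		return echo.NewHTTPError(http.StatusBadRequest,
			"this server does not accept job artifacts")
	}
	// ...store the artifact under s.artifactsDir...
	return ctx.NoContent(http.StatusOK)
}

func main() {
	e := echo.New()
	s := &server{}
	e.PUT("/jobs/:job_id/artifacts/:name", s.UploadJobArtifact)
	e.Logger.Fatal(e.Start(":8087"))
}
```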
When the Koji target support was added to the osbuild job, based on the
osbuild-koji job, the meaning of target option values got messed up.
The side effect of the issue is that when Koji composes are submitted
via Cloud API, the resulting image is currently always uploaded back to
the worker server.
`OsBuildKoji` job
-----------------
- `OSBuildKojiJob.ImageName` is set to the filename of the image as
exported by osbuild.
- `OSBuildKojiJob.KojiFilename` is set to the desired filename which
should be used when uploading the image to Koji.
`OsBuild` job + `KojiTargetOptions` before
------------------------------------------
- `OSBuildJob.ImageName` is set to the filename of the image as exported
by osbuild. This is done only by the Cloud API code for Koji composes;
Cloud API does not set it for regular composes or any other target. In
the common case, the variable is set only by the Weldr API code, with
the same meaning, and the `OsBuild` job implementation uses it as an
indication that the image should be uploaded back to the worker server.
- `Target.ImageName` is not set at all. Other targets use it for the
desired filename which should be used when uploading the image to the
target environment.
- `KojiTargetOptions.Filename` is set to the desired filename which
should be used when uploading the image to Koji. All other target types
use the `Filename` variable in their options for the filename of the
image as exported by osbuild.
`OsBuild` job + `KojiTargetOptions` after
-----------------------------------------
- `OSBuildJob.ImageName` is still set to the filename of the image as
exported by osbuild. This is kept for backward compatibility of a new
composer with older workers.
- `Target.ImageName` is set to the desired filename which should be used
when uploading the image to Koji.
- `KojiTargetOptions.Filename` is set to the filename of the image as
exported by osbuild.
This change is backward incompatible, meaning that an old worker won't
be able to handle Koji compose requests submitted via Cloud API by a
new composer, and a new worker won't be able to handle Koji compose
requests submitted by an old composer. This is intentional, because
after discussion with Ondrej Budai, the Cloud API Koji integration is
currently not used anywhere in production.
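To make the after-state concrete, a sketch with simplified types
showing where each filename now lives; the real structs are in the
worker and target packages:

```go
package main

import "fmt"

type KojiTargetOptions struct {
	Filename string // filename of the image as exported by osbuild
}

type Target struct {
	ImageName string // desired filename when uploading to Koji
	Options   KojiTargetOptions
}

type OSBuildJob struct {
	ImageName string // kept only for compatibility with older workers
	Targets   []Target
}

func main() {
	job := OSBuildJob{
		ImageName: "image.qcow2", // osbuild export filename
		Targets: []Target{{
			ImageName: "compose-42.qcow2", // name used in Koji
			Options:   KojiTargetOptions{Filename: "image.qcow2"},
		}},
	}
	fmt.Printf("%+v\n", job)
}
```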
After a depsolve, each package inherits the `IgnoreSSL` value from its
repository configuration.
This information is not yet used. It will be used to expose this
information to osbuild's org.osbuild.curl stage.
The test data is updated to match the new behaviour:
The test repository config specifies `IgnoreSSL=true` and the packages
in the response inherit the value.
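A sketch of the inheritance step with simplified types; the field and
function names are illustrative:

```go
package main

import "fmt"

type RepoConfig struct {
	ID        string
	IgnoreSSL bool
}

type PackageSpec struct {
	Name      string
	RepoID    string
	IgnoreSSL bool
}

// inheritIgnoreSSL copies IgnoreSSL onto each resolved package from the
// repository it was depsolved from.
func inheritIgnoreSSL(pkgs []PackageSpec, repos map[string]RepoConfig) {
	for i := range pkgs {
		if repo, ok := repos[pkgs[i].RepoID]; ok {
			pkgs[i].IgnoreSSL = repo.IgnoreSSL
		}
	}
}

func main() {
	repos := map[string]RepoConfig{"baseos": {ID: "baseos", IgnoreSSL: true}}
	pkgs := []PackageSpec{{Name: "bash", RepoID: "baseos"}}
	inheritIgnoreSSL(pkgs, repos)
	fmt.Printf("%+v\n", pkgs[0]) // IgnoreSSL:true, inherited from the repo
}
```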
The internal repository configuration (RepoConfig) supports IgnoreSSL
which, when set to `true`, makes a depsolve job run with the dnf repo
parameter `sslverify` set to `false`.
The serialisable repo object (repository) did not support reading this,
so it was impossible to set in global repo configs (from
/usr/share/osbuild-composer/repositories and
/etc/osbuild-composer/repositories).
It was, however, possible to set it through the weldr API when adding a
new source.
Makes curl skip the verification step for secure connections and proceed
without checking.
The default (empty) value is 'false'.
osbuild counterpart: c8073b5836
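A sketch of the serialisable object gaining the field; the JSON key and
surrounding fields are illustrative, and the real struct has more of
them:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// repository is a simplified stand-in for the serialisable repo object
// read from the global repo config directories.
type repository struct {
	Name      string `json:"name"`
	BaseURL   string `json:"baseurl,omitempty"`
	IgnoreSSL bool   `json:"ignore_ssl,omitempty"` // default false: verify TLS
}

func main() {
	raw := `{"name": "internal", "baseurl": "https://repo.example.com", "ignore_ssl": true}`
	var repo repository
	if err := json.Unmarshal([]byte(raw), &repo); err != nil {
		panic(err)
	}
	// IgnoreSSL can now be set from global repo configs, not only weldr.
	fmt.Printf("%+v\n", repo)
}
```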
The BaseSolver is an object which gets constructed when the worker
starts, and the subscriptions attached to it expire after about 3
days. By refreshing the subscriptions each time a new Solver is created,
valid subscriptions are used.
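A hypothetical sketch of the fix; the names below are illustrative, and
`loadSubscriptions` stands in for however the host's entitlements are
actually read:

```go
package main

import "fmt"

type Subscriptions struct{ loaded bool }

// loadSubscriptions stands in for reading the host's current entitlements.
func loadSubscriptions() *Subscriptions { return &Subscriptions{loaded: true} }

type BaseSolver struct{ cacheDir string }

type Solver struct {
	base *BaseSolver
	subs *Subscriptions
}

// NewSolver refreshes subscriptions on every call, so a worker running
// longer than the ~3 day entitlement lifetime keeps using valid ones,
// instead of reusing the set loaded once at worker start.
func (b *BaseSolver) NewSolver() *Solver {
	return &Solver{base: b, subs: loadSubscriptions()}
}

func main() {
	base := &BaseSolver{cacheDir: "/var/cache/dnf-json"}
	fmt.Printf("%+v\n", base.NewSolver().subs)
}
```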
The return statement was forgotten when the Koji target support was
added. As a result, a Job with a failed Koji upload would be reported
as successful, while at the same time having a `JobError` set.
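A minimal sketch of the bug and its fix, with illustrative names:

```go
package main

import "fmt"

type JobError struct{ Reason string }

type result struct{ JobError *JobError }

func uploadToKoji() error { return fmt.Errorf("upload failed") }

func runJob() result {
	var res result
	if err := uploadToKoji(); err != nil {
		res.JobError = &JobError{Reason: err.Error()}
		return res // this return was missing
	}
	// ...further steps that previously ran even after a failed upload,
	// letting the job be reported as successful with JobError set...
	return res
}

func main() {
	fmt.Printf("%+v\n", runJob().JobError)
}
```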
If dnf-json returns an error that is related to a repository, it uses
the ID to identify the repository that caused the error. Since IDs
can't easily be mapped back to a configuration, appending the URL and
name (if any) to the error message makes it easier to identify which
repository failed.
Keeping the ID in the message is also useful for finding the cache
directory of the repository if needed.
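A sketch of the message enrichment with simplified types; the exact
formatting is illustrative:

```go
package main

import "fmt"

type RepoConfig struct {
	ID      string
	Name    string
	BaseURL string
}

// enrichRepoError keeps the repo ID from dnf-json in the message (it
// also names the repo's cache directory) and appends the URL and, if
// set, the human-readable name.
func enrichRepoError(dnfMsg string, repo RepoConfig) error {
	msg := fmt.Sprintf("%s: %s", dnfMsg, repo.BaseURL)
	if repo.Name != "" {
		msg += fmt.Sprintf(" (%s)", repo.Name)
	}
	return fmt.Errorf("%s", msg)
}

func main() {
	repo := RepoConfig{ID: "4f2dd5f0", Name: "baseos", BaseURL: "https://repo.example.com/baseos"}
	fmt.Println(enrichRepoError("cannot read repo 4f2dd5f0", repo))
}
```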
Adjust the timer for our automated releases to trigger the workflow at
08:00 UTC. This corresponds to 10am in the timezone of most of our team
and to the reminder event in our team calendar.
OSBuild used to return the stage options as part of the result object
for v1 manifests. We didn't use this information anywhere. Currently
we convert v1 results to the v2 format while parsing the results of jobs
from old manifests (old distro definitions), but the StageOptions are
ignored and we only care about the StageMetadata.
Initially added as a copy of the osbuild v1 parser.
OSBuild used to return the stage options as part of the result object,
but this is no longer the case in v2.
More importantly, it doesn't seem like we used this information
anywhere, so it's useless.
We want to start tagging page blobs, so this commit adds a small tagging
method to our azure library and exposes it in the osbuild-upload-azure
helper.
Example:
go run ./cmd/osbuild-upload-azure/ \
  -container azure-container \
  -image ./sample.vhd \
  -storage-access-key KEY \
  -storage-account account \
  -tag key:value \
  -tag hello:world \
  -tag bird:toucan
This commit also has to downgrade the azblob library version to 0.13 so the
API for blob tags is the same as the one currently shipped to Fedora.
This is suboptimal but it should unblock us for now.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
It always felt wrong that the method uploaded the blob under a different name
than the one specified in the blob metadata.
This commit moves the responsibility of specifying the right extension
to the callers. The azure.EnsureVHDExtension helper was added to
simplify this.
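A plausible sketch of what such a helper could look like, assuming it
merely normalizes the blob name; the actual implementation may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// EnsureVHDExtension appends ".vhd" unless the name already ends with it.
func EnsureVHDExtension(name string) string {
	if strings.HasSuffix(strings.ToLower(name), ".vhd") {
		return name
	}
	return name + ".vhd"
}

func main() {
	fmt.Println(EnsureVHDExtension("image"))     // image.vhd
	fmt.Println(EnsureVHDExtension("image.vhd")) // image.vhd
}
```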
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
`osbuild-mock-openid-provider`'s `/token` endpoint expects URL-encoded
values in the POST request body. Use the same values as those that would
be used by the worker when refreshing a token.
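A sketch of the kind of request involved; the endpoint URL and form
fields below are illustrative, not the worker's exact values:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	form := url.Values{}
	form.Set("grant_type", "refresh_token")
	form.Set("client_id", "client")
	form.Set("refresh_token", "offline-token")

	// PostForm sends the body URL-encoded with the matching
	// Content-Type, which is what the mock provider's /token
	// endpoint expects.
	resp, err := http.PostForm("http://localhost:8080/token", form)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```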
The internal composer instance still uses kojiapi for Brew builds,
instead of the cloudapi. Keep testing Koji builds via both APIs for now
to ensure that everything works.
Use the `EnsureJobChannel()` middleware in all `compose/<id>` endpoints.
Specifically in the:
- status
- metadata
- manifests
- logs
As a result, these endpoints now return `404` in case the server has JWT
enabled and the channel associated with the request does not match the
channel associated with the requested compose (job).
Extend the multi-tenancy unit test to ensure that these endpoints behave
as expected in case of match and mismatch between the request and
compose channels.
Add `EnsureJobChannel()` middleware method, intended for `compose/<id>`
endpoints. Its purpose is to ensure that the tenant channel set in
the request `echo.Context` matches the tenant channel associated with
the compose. In case of mismatch, `404` is returned.
Add `JobChannel()` method to the worker server implementation for
requesting channel associated with the job.
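A hedged sketch of how the middleware could fit together, assuming echo
and illustrative helper names; the real methods live on the cloudapi
and worker server types:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/google/uuid"
	"github.com/labstack/echo/v4"
)

type workerServer struct{ channels map[uuid.UUID]string }

// JobChannel returns the channel recorded for the job at submission time.
func (s *workerServer) JobChannel(jobID uuid.UUID) (string, error) {
	channel, ok := s.channels[jobID]
	if !ok {
		return "", fmt.Errorf("job %s does not exist", jobID)
	}
	return channel, nil
}

// channelFromContext stands in for the extracted tenant-channel helper.
func channelFromContext(ctx echo.Context) string {
	channel, _ := ctx.Get("channel").(string)
	return channel
}

// EnsureJobChannel rejects requests whose tenant channel does not match
// the channel associated with the requested compose (job).
func (s *workerServer) EnsureJobChannel(next echo.HandlerFunc) echo.HandlerFunc {
	return func(ctx echo.Context) error {
		jobID, err := uuid.Parse(ctx.Param("id"))
		if err != nil {
			return echo.NewHTTPError(http.StatusBadRequest, "invalid compose id")
		}
		jobChannel, err := s.JobChannel(jobID)
		if err != nil || channelFromContext(ctx) != jobChannel {
			// A mismatch is reported as 404 rather than 403 so the
			// existence of other tenants' composes is not leaked.
			return echo.NewHTTPError(http.StatusNotFound, "compose not found")
		}
		return next(ctx)
	}
}

func main() {
	e := echo.New()
	s := &workerServer{channels: map[uuid.UUID]string{}}
	status := func(ctx echo.Context) error { return ctx.NoContent(http.StatusOK) }
	e.GET("/compose/:id", status, s.EnsureJobChannel)
	e.Logger.Fatal(e.Start(":8080"))
}
```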
Extract the determination of tenant channel into a helper function.
This will simplify handler and middleware methods, which won't have
to implement the same logic by themselves.
Fix the multi-tenancy unit test to pass the appropriate context when
querying compose statuses, because the server being used has JWT
enabled and expects the tenant to be set in it.
Switch to using `osbuild` job type with `koji` upload target for Koji
build jobs, instead of using `osbuild-koji` job type.
Modify unit tests accordingly.
Previously, only a subset of all Koji compose unit test cases was run.
Remove this limitation and run all defined unit tests, which were
copied from `kojiapi`.
In addition, fix unit tests and relevant cloudapi methods to make unit
tests pass.
Add `TestRouteWithReply()` to `test/helpers.go` to allow getting the
compose ID when submitting a new compose. This is needed to make some
unit tests deterministic.
Do not delete values from the `fields` slice in `dropFields()` in
`test/helpers.go`. The behavior was previously not consistent.
If the top-level map contained the value, it was deleted from it, but
if nested maps also contained the value, it was not deleted from them.
On the other hand, if the top-level map didn't contain the value but
nested maps did, the value was deleted from all nested maps.
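A sketch of the consistent behavior: walk the document recursively,
delete the field everywhere, and never mutate the caller's `fields`
slice:

```go
package main

import "fmt"

// dropFields removes every named field from all maps in the document,
// at any nesting level, without modifying the fields slice itself.
func dropFields(body interface{}, fields ...string) {
	switch v := body.(type) {
	case map[string]interface{}:
		for _, field := range fields {
			delete(v, field)
		}
		for _, value := range v {
			dropFields(value, fields...)
		}
	case []interface{}:
		for _, item := range v {
			dropFields(item, fields...)
		}
	}
}

func main() {
	doc := map[string]interface{}{
		"id":     "top",
		"nested": map[string]interface{}{"id": "inner", "keep": true},
	}
	dropFields(doc, "id")
	fmt.Println(doc) // map[nested:map[keep:true]]
}
```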