These fields are not set by the depsolve job; they are only set and used
in tests, so remove them. Errors are reported in result.JobError.
Related: RHEL-60125
This refactors the server setup, splitting the depsolve and ostree
resolve goroutine creation into helper functions. It also removes the
use of channels, which were always set to "" (and, in the case of the
multi-tenancy test, an empty list, which acts the same).
Related: RHEL-60136
This connects all the pieces needed to implement the search.
If you POST a request to /search/packages like this:
    {
        "packages": [
            "tmux"
        ],
        "distribution": "fedora-41",
        "architecture": "x86_64"
    }
It will return details about the tmux package that look like this:
    {
        "packages": [
            {
                "arch": "x86_64",
                "buildtime": "2024-10-10T00:19:06Z",
                "description": "tmux is ...",
                "license": "ISC AND BSD-2-Clause AND BSD-3-Clause AND SSH-short AND LicenseRef-Fedora-Public-Domain",
                "name": "tmux",
                "release": "2.fc41",
                "summary": "A terminal multiplexer",
                "url": "https://tmux.github.io/",
                "version": "3.5a"
            }
        ]
    }
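As a rough illustration, a client could issue the request like this (the
base URL is a placeholder, and the helper below is a sketch, not part of
the commit):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// SearchRequest mirrors the request body shown above.
type SearchRequest struct {
	Packages     []string `json:"packages"`
	Distribution string   `json:"distribution"`
	Architecture string   `json:"architecture"`
}

func main() {
	body, err := json.Marshal(SearchRequest{
		Packages:     []string{"tmux"},
		Distribution: "fedora-41",
		Architecture: "x86_64",
	})
	if err != nil {
		panic(err)
	}
	// http://localhost:8080 is a placeholder; use the real service address.
	resp, err := http.Post("http://localhost:8080/search/packages",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println(result)
}
```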
Resolves: RHEL-60136
The request is similar to a depsolve request: it must include the
distribution and architecture. It can optionally include a list of
repositories to search; if none are included, it searches the
default repos for the distro:arch.
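For example, a request that supplies its own repositories might look like
this (the repository object with a bare baseurl is an assumed shape,
patterned on depsolve requests, not a confirmed schema):
```
{
    "packages": [
        "tmux"
    ],
    "distribution": "fedora-41",
    "architecture": "x86_64",
    "repositories": [
        {
            "baseurl": "https://example.com/custom/repo/"
        }
    ]
}
```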
Related: RHEL-60136
This adds support for sending a search job to the worker client,
gathering results, and handling errors.
The errors returned are the same as for the Depsolve job, since they
both use the osbuild-depsolve-dnf script via images/pkg/dnfjson.
Related: RHEL-60136
This is similar to the depsolve job, and it shares the solver (which
supports locking, as does DNF itself). This allows searching for
specific package names, names with globs, or names as substrings of
other names, using * as the wildcard.
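For illustration, the packages field could then carry any of these forms:
```
["tmux"]      exact package name
["tmux*"]     glob: names starting with tmux
["*mux*"]     substring: mux anywhere in the name
```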
Related: RHEL-60136
Update the weldr API to work with the new depsolve API.
Update tests to match (adding repo_id).
Co-authored-by: Achilleas Koutsou <achilleas@koutsou.net>
This also adds an actual repository JSON file for the test-distro.
Without this, the repo.ListDistros() function doesn't return any actual
distros.
Related: RHEL-60125
and return the response to the client. This uses the worker to depsolve
the requested packages. The result is returned to the client as a list
of packages using the same PackageMetadata schema as the ComposeStatus
response. It will also time out after 5 minutes and return an error,
using the same timeout constant as depsolving during manifest
generation.
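A minimal sketch of that timeout pattern, assuming a hypothetical
waitForDepsolve helper and a result channel from the worker (the names
are illustrative, not composer's actual identifiers):
```go
package depsolve

import (
	"context"
	"errors"
	"time"
)

// depsolveTimeout mirrors the shared 5 minute timeout described above
// (the constant name here is an assumption for this sketch).
const depsolveTimeout = 5 * time.Minute

// waitForDepsolve waits for the worker's raw result on resultCh, or
// returns an error once the timeout elapses.
func waitForDepsolve(resultCh <-chan []byte) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), depsolveTimeout)
	defer cancel()

	select {
	case result := <-resultCh:
		return result, nil
	case <-ctx.Done():
		return nil, errors.New("depsolve request timed out")
	}
}
```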
Related: RHEL-60125
In order to reuse PackageMetadata with DepsolveResponse and not include
unused fields, this changes the sigmd5 entry to an optional field. This
doesn't affect the use of PackageMetadata in the Compose response, since
it is always set there, and it allows it to be omitted in the response
for depsolving.
Also adds a basic test for stagesToPackageMetadata.
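A sketch of the resulting shape, assuming the usual pointer-plus-omitempty
pattern and with most fields elided:
```go
// PackageMetadata sketch: sigmd5 becomes a pointer with omitempty, so
// depsolve responses can omit it, while compose status responses
// (which always set it) are unaffected.
type PackageMetadata struct {
	Name    string  `json:"name"`
	Version string  `json:"version"`
	Sigmd5  *string `json:"sigmd5,omitempty"`
}
```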
Related: RHEL-60125
This function only depends on the Blueprint (the cloudapi request type,
not internal/blueprint), so move it to a method on that type so it can
be reused by other users of the cloudapi Blueprint.
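Schematically, the move looks like this (the types and body are minimal
stand-ins, not the actual code):
```go
package cloudapi

// Minimal stand-ins for this sketch; the real cloudapi types are richer.
type Package struct{ Name string }
type Blueprint struct{ Packages []Package }

// PackageSpecs illustrates the move: what used to be a free function in
// the server code becomes a method on the request-type Blueprint, so any
// consumer of the type can reuse it.
func (bp *Blueprint) PackageSpecs() []string {
	specs := make([]string, 0, len(bp.Packages))
	for _, pkg := range bp.Packages {
		specs = append(specs, pkg.Name)
	}
	return specs
}
```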
Related: RHEL-60125
Newer versions of the go compiler (1.24 in this case) fail when running
go test during a mock rebuild of the srpm created by 'make srpm' on
Fedora 42.
Even though we currently don't support go1.24, fix these so they don't
become an issue when we do.
This updates composer to use the updated API in images around the
seed handling for manifests, see images PR#1107 for details.
Note that this has no semantic changes yet. We could now simplify
some things because images will auto-seed, but that is for a followup.
Add a safeguard to ensure secure instances without valid
parent instances are terminated, as they are unnecessary to retain.
Typically, the parent does not exist if the secure instance is
older than 2 hours, but this check provides additional validation.
HMS-3632
Previously, the `OSTree` property in the Weldr API `ComposeRequest`
struct was not a pointer to the `ostree.ImageOptions` type. As a result,
it was initialized to an empty struct, even if not set in the client API
call.
Consequently, the `OSTree` property in the `distro.ImageOptions` was
never `nil` when initializing the osbuild manifest. However, after
a change in `osbuild/images` [0], providing OSTree options for
non-OSTree image types is no longer considered valid. This caused a
failure to submit a new compose for any non-OSTree image type.
Change the `OSTree` property in Weldr `ComposeRequest` to be a pointer
and mark it as optional.
[0] https://github.com/osbuild/images/pull/1071
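The change amounts to the following shape (a sketch with the other
request fields elided):
```go
package weldr

import "github.com/osbuild/images/pkg/ostree"

// ComposeRequest sketch: with a pointer plus omitempty, an unset OSTree
// stays nil instead of decoding into an empty struct that then gets
// passed to non-OSTree image types.
type ComposeRequest struct {
	OSTree *ostree.ImageOptions `json:"ostree,omitempty"`
}
```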
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Even in the case of errors, as long as CreateFleet returns an instance,
attempt to use it.
In some cases AWS returns `InsufficientInstanceCapacity` but still
creates an instance:
```
msg="Won't retry CreateFleet with OnDemand instance, retry: false, errors: InsufficientInstanceCapacity: There is no Spot capacity available that matches your request.; Already launched instance ([i-...]), aborting create fleet"
msg="doCreateFleetRetry: returning retry: false, msg: [InsufficientInstanceCapacity: There is no Spot capacity available that matches your request. Already launched instance ([i-...]), aborting create fleet]"
msg="doCreateFleetRetry: cancelling retry, instance already exists: [i-...]"
msg="doCreateFleetRetry: setting retry to true"
msg="Checking to retry fleet create on error InsufficientInstanceCapacity (msg: There is no Spot capacity available that matches your request.)"
```
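A sketch of that handling with the AWS SDK for Go v2 (the wrapper name
and the abbreviated error handling are illustrative):
```go
package awscloud

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

// createFleetUseAnyInstance is a hypothetical wrapper: CreateFleet can
// fail and still launch an instance, so inspect the output before
// treating the call as a failure.
func createFleetUseAnyInstance(ctx context.Context, client *ec2.Client, input *ec2.CreateFleetInput) (*ec2.CreateFleetOutput, error) {
	output, err := client.CreateFleet(ctx, input)
	if output != nil && len(output.Instances) > 0 {
		// An instance exists despite the error; return it so the caller
		// can use it, or at least terminate it during cleanup.
		return output, nil
	}
	return output, err
}
```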
RHEL 10 (nightly) builds fail on stage with "Fatal glibc error: CPU does
not support x86-64-v3". This is most likely due to very old instance
types not supporting the required instruction set.
We still see this error sometimes:
```
Unable to start secure instance: Unable to create fleet: InsufficientInstanceCapacity: There is no Spot capacity available that matches your request
```
This is awkward because the message mentions that there is no spot
capacity, even though the current code should retry on
InsufficientInstanceCapacity. I also confirmed this by searching for
the retry log messages: there are none in the logs.
We need a bigger hammer. Let's log everything that happens in the
createFleet method to get a better understanding of why the
retry logic isn't triggered. We should probably move most of the newly
added logs to the debug level, but let's delay that until we have
more insight into what's happening.
By surfacing the output even in case of an error, the fleet ID and
instance ID can be extracted if present. Thus the instance can be
terminated before its dependencies are deleted.
We're seeing some behaviour where create fleet is not retried and
subsequently the SI cleanup fails due to the security group already
being tied to an existing instance. There is no error indicating that an
instance was launched anyway.
I got confused because the jobqueue interface is asymmetric: it
expects an object but returns a json.RawMessage, and when handing
over to postgres this asymmetry is abstracted away by the postgres
implementation.
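Schematically, the asymmetry looks like this (a trimmed sketch, not the
full composer interface):
```go
package jobqueue

import (
	"encoding/json"

	"github.com/google/uuid"
)

// JobQueue sketch: Enqueue accepts any args object and marshals it
// internally, while Dequeue returns the raw JSON and leaves
// unmarshalling to the caller. The postgres implementation hides
// this round trip.
type JobQueue interface {
	Enqueue(jobType string, args interface{}, channel string) (uuid.UUID, error)
	Dequeue(jobTypes []string, channels []string) (uuid.UUID, json.RawMessage, error)
}
```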
Fixes the special case where no worker is available, we generate an
internal timeout, and the depsolve (including all followup jobs) is
cancelled: previously no error was propagated.
When requeuing a job, the next worker requesting the job would decrement
the pending counter, but the pending counter only ever got incremented
once, when the job was first enqueued. Thus, make sure to increment the
pending counter when a job is requeued.
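A sketch of the fix, assuming the pending counter is a Prometheus-style
gauge (the identifiers here are illustrative, not composer's actual
names):
```go
package jobqueue

import "github.com/prometheus/client_golang/prometheus"

// pendingJobs is a hypothetical gauge standing in for the real pending
// counter: dequeueing decrements it, so every path that makes a job
// pending again must increment it.
var pendingJobs = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "pending_jobs",
	Help: "Number of jobs waiting for a worker.",
})

func requeueJob( /* job id, queue state, ... */ ) {
	// ... clear the job's started timestamp and worker token ...
	pendingJobs.Inc() // previously missing: re-balance the counter
}
```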