Fixes the special case where no error was propagated if no worker is
available and we generate an internal timeout and cancel the depsolve,
including all follow-up jobs.
When requeuing a job, the next worker requesting the job would decrement
the pending counter, but the counter only ever got incremented once, when
the job was first enqueued. Thus, make sure to increment the pending
counter when a job is requeued.
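As a minimal sketch of the intended bookkeeping (a toy queue, not the
real jobqueue implementation):

```go
package sketch

import "sync"

// Toy queue, illustrative only: every dequeue decrements the pending
// counter, so a requeue must increment it again or the counter drifts
// negative over time.
type toyQueue struct {
	mu      sync.Mutex
	pending int
}

func (q *toyQueue) Enqueue(id string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.pending++ // incremented once, when the job is first enqueued
}

func (q *toyQueue) Dequeue(id string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.pending-- // every worker picking up the job decrements the counter
}

func (q *toyQueue) Requeue(id string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.pending++ // the fix: the job will be dequeued (and decremented) again
}
```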
The usual convention for constructors is to use a `New*` prefix, so this
commit renames `WorkerClientError`. Initially I thought it would become
`NewWorkerClientError()`, but given the package prefix that seems
unneeded: `clienterrors.New()` already provides enough context, and it's
the only error we construct.
We could also consider renaming the package to `clienterror` (singular),
but that could be a follow-up.
I would also like to make `clienterrors.Error` implement the `error`
interface, but that should be a follow-up as well, to keep this
(mechanical) rename trivial to review.
Unresponsive workers (no status update for >= 1 hour) are cleaned up.
Keeping track of workers enables several things; in the future the
worker server could:
- keep track of how many workers are active
- see if a worker for a specific architecture is available
The duration middleware should come after the tenant channel middleware;
otherwise the tenant in the context will be empty. The status middleware
can come beforehand because it queries the request context right before
sending a response.
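A sketch of the intended ordering with echo/v4; the middleware bodies
below are illustrative stand-ins, not composer's real implementations:

```go
package sketch

import (
	"time"

	"github.com/labstack/echo/v4"
)

// Illustrative stand-in for the tenant channel middleware.
func tenantChannelMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		c.Set("channel", "org-123") // hypothetical: derived from the request in reality
		return next(c)
	}
}

// Illustrative stand-in for the duration middleware.
func durationMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		channel, _ := c.Get("channel").(string) // empty unless the tenant middleware ran first
		start := time.Now()
		err := next(c)
		recordDuration(channel, time.Since(start))
		return err
	}
}

func recordDuration(channel string, d time.Duration) { /* e.g. observe a histogram */ }

func setup() *echo.Echo {
	e := echo.New()
	// Order matters: tenant channel first, then duration.
	e.Use(tenantChannelMiddleware, durationMiddleware)
	return e
}
```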
Only add the arch label for the osbuild job type, matching how the
finish metrics behave. Having arch labels on dequeue metrics for any
other job type (but not on the corresponding finish metrics) would
produce inconsistent results.
Register the custom middleware function with the worker server. This
function is responsible for recording the status codes of all the
server's endpoints.
Due to a bug in echo/v4, a request to an endpoint with the wrong method,
which should return a `405` error, returns a `404` error instead when a
middleware function is registered. The worker `server_test` has been
updated to reflect this.
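Roughly what such a status-recording middleware can look like
(illustrative; the actual metrics call differs):

```go
package sketch

import "github.com/labstack/echo/v4"

// statusMiddleware is an illustrative status-recording middleware. Note
// that for handlers returning an error, the final status is only set by
// echo's error handler, so the real implementation may need to inspect
// the returned error as well.
func statusMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		err := next(c)
		recordStatus(c.Path(), c.Response().Status)
		return err
	}
}

func recordStatus(path string, status int) { /* e.g. increment a Prometheus counter */ }
```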
If a job is unresponsive, the worker has most likely crashed or been
shut down and the in-progress job has been lost.
Instead of failing these jobs, requeue them up to two times. Once a job
is lost a third time it fails; this avoids infinite loops.
This is implemented by extending FinishJob into RequeueOrFinishJob. It
takes the maximum number of requeues as an argument; if that is 0, it
behaves the same way FinishJob used to.
If the maximum number of requeues has not yet been reached, the running
job is returned to the pending state to be picked up again.
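A simplified sketch of the decision; the real RequeueOrFinishJob lives on
the worker server and also records the job result and metrics:

```go
package sketch

type toyServer struct {
	retries map[string]int
	pending []string
}

func newToyServer() *toyServer {
	return &toyServer{retries: make(map[string]int)}
}

// RequeueOrFinishJob puts a lost job back into the pending state until it
// has been requeued maxRetries times; after that the job is finished
// (i.e. failed). With maxRetries == 0 it behaves like the old FinishJob.
func (s *toyServer) RequeueOrFinishJob(id string, maxRetries int) (requeued bool) {
	if s.retries[id] >= maxRetries {
		s.finishJob(id)
		return false
	}
	s.retries[id]++
	s.pending = append(s.pending, id) // picked up again by the next available worker
	return true
}

func (s *toyServer) finishJob(id string) {
	// mark the job as done/failed; omitted in this sketch
}
```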
Add support for embedding container images via the cloud API. For this,
the container resolve job was plumbed into the cloud API's handler and
the API specification was updated with a new `containers` section that
mimics the blueprint section of the same name.
The ellipsis operator was used as a hack to avoid having to pass any
details as an argument, but it makes it less obvious what the end object
will actually look like. It also makes it impossible to pass an array to
details without getting a nested array.
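A self-contained illustration of the nesting problem, using toy types
rather than the real error constructor:

```go
package main

import "fmt"

type toyError struct {
	Details interface{}
}

func newVariadic(details ...interface{}) *toyError { return &toyError{Details: details} }
func newExplicit(details interface{}) *toyError    { return &toyError{Details: details} }

func main() {
	lines := []interface{}{"target failed", "upload failed"}

	// The slice becomes the single variadic element, i.e. it ends up nested.
	fmt.Println(newVariadic(lines).Details) // [[target failed upload failed]]

	// With an explicit parameter, the end object is exactly what was passed.
	fmt.Println(newExplicit(lines).Details) // [target failed upload failed]
}
```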
Fixes #2874
Since the `jobStatus` functions return a `JobInfo` struct that contains
the `JobStatus`, it makes sense to rename the functions for the sake of
consistency.
The osbuild job type currently contains the architecture as a suffix.
Since the arch is now supplied as a label, the `arch` suffix can be
removed.
The number of return values from the `jobStatus` function was growing
out of hand. Not all return values were used in all cases, so returning
a single struct with a job's information and status makes more sense;
each caller can then use only the fields it needs.
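An illustrative shape of such a struct; the field names here are
assumptions, not necessarily the real `JobInfo` fields:

```go
package sketch

import "time"

// JobInfo bundles what the jobStatus helpers used to return as separate
// values; callers pick only the fields they need.
type JobInfo struct {
	JobType   string
	Channel   string
	JobStatus *JobStatus
	Deps      []string
}

// JobStatus is a simplified stand-in for the status part.
type JobStatus struct {
	Queued   time.Time
	Started  time.Time
	Finished time.Time
	Canceled bool
}
```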
Add the architecture label to build jobs, which will enable filtering
and monitoring build jobs by architecture. Build job results contain an
`arch` field in the results struct; when it has a value it is passed to
the metrics, otherwise an empty string is used.
Remove a duplicate call to the `DequeueJobMetrics`
function in the worker server. This duplicate call
resulted in negative numbers for pending jobs in
the Prometheus metrics.
The Koji API removed by the previous commit was the last user of the
osbuild-koji job. Let's remove the job since nothing uses it. This also
removes all of the compatibility code in the Cloud API; see the concerns
below:
Compatibility concerns:
- the internal deployment was moved to a completely different composer
  instance, thus there are no old jobs
- the Fedora deployment is still unused in prod, thus we don't care about
  keeping backward compatibility for the old jobs
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Add a new worker client error type `ErrorTargetError`, representing that
at least one of the job's targets failed. The actual target errors are
added to the job error details.
Add a new `OSBuildJobResult.TargetErrors()` method for gathering a slice
of the target errors contained within an `OSBuildJobResult` instance.
Cover the method with a unit test.
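A simplified sketch of the idea; the real worker result types carry more
fields:

```go
package sketch

// Simplified stand-ins for the worker job result types.
type TargetError struct {
	ID     int
	Reason string
}

type TargetResult struct {
	Name        string
	TargetError *TargetError
}

type OSBuildJobResult struct {
	TargetResults []*TargetResult
}

// TargetErrors gathers the errors of all targets that failed.
func (r *OSBuildJobResult) TargetErrors() []*TargetError {
	var errs []*TargetError
	for _, tr := range r.TargetResults {
		if tr.TargetError != nil {
			errs = append(errs, tr.TargetError)
		}
	}
	return errs
}
```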
`TargetErrors` has not been used since PR#2192 [1] and there is no need
to keep backward compatibility any more, because there are no composer /
worker instances in production that are not running the modified code.
In addition, delete the unit tests covering this legacy error handling.
[1] https://github.com/osbuild/osbuild-composer/pull/2192
Previously, we just used an empty struct when the heartbeat failed. This
is fine for the osbuild job because it's treated as failed when
result.OSBuildResult == false, which is the default value.
koji-finalize works differently though: it's in a failed state if there's
a job error, i.e. kojiError != "". So when a failed heartbeat set the
struct to be empty, it was treated as a success because there was no
error.
Let's fix this by introducing a new error for the situation where we
don't get a heartbeat in time for a specific job.
The worker server API handler `UploadJobArtifact()` previously silently
discarded artifacts uploaded by the worker if the server was configured
to not accept artifacts.
Change the behavior to return the HTTP error "Bad Request" (`400`) to
the worker in case it tries to upload an artifact while the server is
configured to not accept any artifacts.
Add a new unit test covering the new behavior and adjust the existing
unit tests that relied on the artifact previously being silently
discarded.
Add an `EnsureJobChannel()` middleware method, intended for the
`compose/<id>` endpoints. Its purpose is to ensure that the tenant
channel set in the request `echo.Context` matches the tenant channel
associated with the compose. In case of a mismatch, `404` is returned.
Add a `JobChannel()` method to the worker server implementation for
requesting the channel associated with a job.
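A hypothetical sketch of the middleware, with a channel-lookup callback
standing in for the real `JobChannel()` wiring:

```go
package sketch

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

// ensureJobChannel is illustrative: jobChannel stands in for the worker
// server's JobChannel() lookup.
func ensureJobChannel(jobChannel func(composeID string) (string, error)) echo.MiddlewareFunc {
	return func(next echo.HandlerFunc) echo.HandlerFunc {
		return func(c echo.Context) error {
			jobCh, err := jobChannel(c.Param("id"))
			tenantCh, _ := c.Get("channel").(string)
			if err != nil || jobCh != tenantCh {
				// Per the commit, a mismatch is reported as 404.
				return echo.NewHTTPError(http.StatusNotFound, "compose not found")
			}
			return next(c)
		}
	}
}
```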
Add a new `JobDependencyChainErrors()` method for gathering a trace of
the job errors from the job's dependencies which caused it to fail.
The `JobDependencyChainErrors()` implementation intentionally uses the
job-type specific `...Status()` methods, because each of them checks the
job's result in a slightly different way and sets result.JobError to a
specific value. For this reason, it would not be practical to introduce
a generic `JobStatus()` method and get rid of the `switch` block,
because the new method would have to implement an equivalent `switch`
block as well.
Add a unit test covering the method's functionality.
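A conceptual sketch of the traversal; the real method dispatches via the
`switch` on job types to the job-type specific status helpers:

```go
package sketch

// Simplified stand-ins: in composer the nodes are jobs in the queue and
// the error comes from the job-type specific ...Status() helper.
type jobError struct {
	Reason string
}

type jobNode struct {
	err  *jobError  // non-nil if this job failed
	deps []*jobNode // the job's dependencies
}

// dependencyChainErrors walks the dependencies depth-first and collects
// the errors that led to the failure of the given job.
func dependencyChainErrors(j *jobNode) []*jobError {
	var errs []*jobError
	if j.err != nil {
		errs = append(errs, j.err)
	}
	for _, dep := range j.deps {
		errs = append(errs, dependencyChainErrors(dep)...)
	}
	return errs
}
```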
Define the supported job type names as constants and use them
everywhere, instead of string literals.
This approach has multiple benefits. Using constants removes the room
for typos in the string literals, enables IDE autocompletion for job
types, and makes it easier to find all references to a job type and thus
all places that handle it.
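For illustration, the constants could look roughly like this (names and
values are assumptions; the full list lives in the worker package):

```go
package sketch

// Job type names as constants instead of scattered string literals
// (illustrative subset; values assumed to match the existing literals).
const (
	JobTypeOSBuild          = "osbuild"
	JobTypeDepsolve         = "depsolve"
	JobTypeManifestIDOnly   = "manifest-id-only"
	JobTypeContainerResolve = "container-resolve"
)
```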
Change the `EnqueueOSBuildAsDependency()` function to accept a slice of
job IDs on which the OSBuild job depends. Previously, the manifest job
ID was accepted as the only possible dependency.
This change will be needed in order to enqueue OSBuild jobs for Koji,
which depend on two jobs.
Include a tenant label in all Prometheus metrics. Modify the jobStatus
function in the worker accordingly to return the channel, so it can be
passed to Prometheus.
We are interested in the time from when a job could be dequeued until it
actually is, but a job whose dependencies are not yet finished cannot be
dequeued.
Change the logic to measure the time since the last dependency was
dequeued rather than since the job was queued.
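A minimal sketch of the changed measurement, with hypothetical names:

```go
package sketch

import "time"

// waitDuration measures how long the job waited while it could actually
// have been dequeued: the wait starts at the latest point at which a
// dependency stopped blocking it, not at the job's own enqueue time.
func waitDuration(queuedAt time.Time, depTimes []time.Time, dequeuedAt time.Time) time.Duration {
	start := queuedAt
	for _, t := range depTimes {
		if t.After(start) {
			start = t
		}
	}
	return dequeuedAt.Sub(start)
}
```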
The purpose of this metric is to have an alert fire in case we have too
few workers processing jobs.