Drop the `common.CurrentArch()` implementation and use
`arch.Current().String()` from osbuild/images instead.
Signed-off-by: Tomáš Hozza <thozza@redhat.com>
Sends a status update to the worker server every 5 minutes.
Also fixes a bug where the body sent by the worker client would be empty
if it had to refresh the JWT token. Instead of io.Reader, use
io.ReadSeeker so the body can be reread to create the second request
(after the token refresh).
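A minimal sketch of the pattern, assuming hypothetical names for the
client's OAuth plumbing; the key point is rewinding the body before the
retry:

```go
import (
	"io"
	"net/http"
)

// doWithRefresh sends body to url and retries once after refreshing the
// token. The refresh callback is a stand-in for the worker client's
// OAuth logic. Taking io.ReadSeeker instead of io.Reader is what makes
// the second attempt possible: the first attempt drains the body, so it
// must be rewound before the request is rebuilt.
func doWithRefresh(client *http.Client, url, token string, refresh func() (string, error), body io.ReadSeeker) (*http.Response, error) {
	send := func(tok string) (*http.Response, error) {
		req, err := http.NewRequest("POST", url, body)
		if err != nil {
			return nil, err
		}
		req.Header.Set("Authorization", "Bearer "+tok)
		return client.Do(req)
	}
	resp, err := send(token)
	if err != nil || resp.StatusCode != http.StatusUnauthorized {
		return resp, err
	}
	resp.Body.Close()
	if token, err = refresh(); err != nil {
		return nil, err
	}
	// With a plain io.Reader the retried request would have an empty
	// body, which was exactly the bug.
	if _, err := body.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	return send(token)
}
```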
In the internal deployment, we want to talk to composer over an
HTTP/HTTPS proxy. This adds a new composer.proxy field to the worker
config that causes the worker to connect to composer and the OAuth
server through the specified proxy.
NB: The proxy is not supported when connecting to composer via unix sockets.
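A rough sketch of how such a setting maps onto Go's standard library
(the composer.proxy field name is from this change; the wiring below is
illustrative):

```go
import (
	"net/http"
	"net/url"
)

// newProxiedClient builds an HTTP client that sends all requests,
// including the OAuth token requests, through a fixed proxy. proxy
// would come from the composer.proxy value in the worker config.
func newProxiedClient(proxy string) (*http.Client, error) {
	proxyURL, err := url.Parse(proxy)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{
			// http.ProxyURL ignores the proxy environment variables
			// and always uses the configured URL.
			Proxy: http.ProxyURL(proxyURL),
		},
	}, nil
}
```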
For testing this, I added a small HTTP proxy implementation. Please
don't use it in production; it's just good enough for tests.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This will allow us to use the service accounts which work against
identity.api.openshift.com. These are much easier to manage, especially
with the new multi-tenancy, as there's a single page to create/expire
them across an account.
They also have the added benefit of not expiring automatically when
unused (unlike offline tokens), and of allowing immediate expiration
when desired.
This is backwards compatible, as long as the timeout is 0 (never
timeout), which is the default.
In case of the dbjobqueue the underlying timeout is due to
context.Canceled, context.DeadlineExceeded, or net.Error with Timeout()
true. For the fsjobqueue only the first two are considered.
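A sketch of the error check described above, assuming err comes out of
a dequeue call:

```go
import (
	"context"
	"errors"
	"net"
)

// isTimeout reports whether err is one of the error kinds the
// dbjobqueue treats as a dequeue timeout. The fsjobqueue would only
// check the two context errors.
func isTimeout(err error) bool {
	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return true
	}
	var netErr net.Error
	return errors.As(err, &netErr) && netErr.Timeout()
}
```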
The main changes are:
- Kind, Href, Id fields for every object returned
- Attach operationIds to each request, return it for errors
- Errors are predefined and queryable
Two configurations for the listeners are now possible:
- enableJWT=false with client SSL auth
- enableJWT=true with HTTPS
Actual verification of the tokens is handled by
https://github.com/openshift-online/ocm-sdk-go.
An authentication handler is run as the top level handler, before any
routing is done. Routes which do not require authentication should be
listed as exceptions.
Authentication can be restricted using an ACL file which allows
filtering based on JWT claims. For more information see the inline
comments in ocm-sdk/authentication.
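The shape of such a top-level handler, sketched with a hypothetical
exception list and verify callback (the real token verification is
delegated to ocm-sdk-go):

```go
import "net/http"

// authHandler wraps the router and rejects unauthenticated requests
// before any routing happens. verify stands in for the ocm-sdk token
// check; exceptions lists paths that skip authentication.
func authHandler(next http.Handler, verify func(*http.Request) error, exceptions map[string]bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !exceptions[r.URL.Path] {
			if err := verify(r); err != nil {
				http.Error(w, "unauthorized", http.StatusUnauthorized)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}
```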
As an added quirk the `-v` flag for the osbuild-composer executable was
changed to `-verbose` to avoid flag collision with glog which declares
the `-v` flag in the package `init()` function. The ocm-sdk depends on
glog and pulls it in.
The three new job types osbuild-koji, koji-init, and koji-finalize
allow the different tasks to be split apart and, in particular, for
there to be several builds on different architectures as part of a
given compose.
In addition to the arguments passed when scheduling a job, a job now
also takes the results of its dependencies as additional arguments. We
call these dynamic arguments for lack of a better term.
The immediate use-case for this is to allow koji jobs to be split up
as follows:
- koji-init: Creates a koji build, and returns us a token.
- osbuild-koji: one job per architecture, depending on koji-init
having succeeded. Builds the image, and uploads it to koji,
returning metadata about the image produced.
- koji-finalize: uses the token from koji-init and the metadata
from osbuild-koji to import the build into koji if it succeeded
or mark it as failed if it failed.
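In jobqueue terms, the chain might be wired up as below; the
Enqueue(jobType, args, deps) signature returning the new job's id is an
assumption about the queue interface:

```go
import "github.com/google/uuid"

// scheduleKojiCompose sketches wiring the three job types together.
// The results of each dependency are what later surface as the
// dependent job's dynamic arguments.
func scheduleKojiCompose(q interface {
	Enqueue(jobType string, args interface{}, deps []uuid.UUID) (uuid.UUID, error)
}, architectures []string, argsFor func(arch string) interface{}) error {
	// koji-init has no dependencies; its result carries the koji token.
	initID, err := q.Enqueue("koji-init", nil, nil)
	if err != nil {
		return err
	}
	deps := []uuid.UUID{initID}
	for _, arch := range architectures {
		// One build job per architecture, gated on koji-init.
		buildID, err := q.Enqueue("osbuild-koji", argsFor(arch), []uuid.UUID{initID})
		if err != nil {
			return err
		}
		deps = append(deps, buildID)
	}
	// koji-finalize sees the koji-init token and each build's image
	// metadata as its dynamic arguments.
	_, err = q.Enqueue("koji-finalize", nil, deps)
	return err
}
```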
Move the fact that the worker is requesting jobs of type "osbuild" out
of the client library.
For one, require consumers to pass accepted job types to RequestJobs()
and allow querying for the job type with the new Type() function.
Also, make OSBuildArgs() and Update() generic, requiring callers to
pass an argument that matches the job type.
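A hedged sketch of the consumer side after this change; the function
names come from this commit, but their exact signatures and the
surrounding details are assumptions:

```go
job, err := client.RequestJobs([]string{"osbuild"})
if err != nil {
	return err
}
switch job.Type() {
case "osbuild":
	var args worker.OSBuildJob
	// The accessor is now generic: the passed-in argument must match
	// the job's actual type, otherwise it errors.
	if err := job.OSBuildArgs(&args); err != nil {
		return err
	}
	// ... run osbuild with args ...
}
```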
Workers reported status via an `osbuild.Result`, which only includes
osbuild output. Make it report OSBuildJobResult instead, which was meant
to be used for this purpose and is already used as the result type in
the jobqueue.
While at it, add any errors produced by targets into this struct, as
well as an overall success flag.
Note that this breaks older workers returning the result of an osbuild
job to a new composer. I think this is fine in this case, for two
reasons:
1. We don't support running different versions of the worker and
composer in the weldr API, and remote workers aren't widely used yet.
2. Both osbuild.Result and worker.OSBuildJobResult have a top-level
`Success` boolean. Thus, logs are lost in such cases, but the overall
status of the compose is not.
Add "image_name" and "stream_optimized" fields to the osbuild job as
replacement for the local target options. The former signifies the name
of the uploaded artifact and whether an artifact should be uploaded at
all (only weldr API). The latter will be deprecated at some point, when
osbuild itself can make streamoptimized vmdk images.
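As a sketch, the job arguments would carry the new fields roughly like
this (the two field names and JSON keys are from this change; the rest
of the struct is omitted or assumed):

```go
// OSBuildJob sketch: the two new fields replacing the local target
// options. Other fields of the real job type are left out.
type OSBuildJob struct {
	// ImageName is the name under which the artifact is uploaded; an
	// empty name means no artifact is kept (only the weldr API sets it).
	ImageName string `json:"image_name,omitempty"`
	// StreamOptimized requests a stream-optimized VMDK and should go
	// away once osbuild can produce such images itself.
	StreamOptimized bool `json:"stream_optimized,omitempty"`
}
```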
This change separates what have always been two distinct concepts:
artifacts that are reported back to the composer node (in practice
always running on the same machine), and upload targets to clouds and
such. Separating them makes it easier to add job types that only allow
one upload target while keeping artifacts.
Keep the local target around, so that jobs that are scheduled can still
be run after an upgrade.
The server hasn't used common.ImageBuildState to mark a job as
successful or failed for a long time. Instead, it's using the job's
return argument for that. (Jobs don't have a high-level concept of
failing).
Drop the check in the server, and always send "FINISHED" from the client
for backwards compatibility.
Until now, all jobs were put as "osbuild" jobs into the job queue and
the worker API hard-coded sending an osbuild manifest and upload
targets.
Change the API to take a "type" and "args" keys, which are equivalent to
the job-queue's type and args. Workers continue to support only osbuild
jobs, but this makes other jobs possible in the future.
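A sketch of the new job payload; the "type" and "args" keys mirror the
jobqueue's own fields, while the Go names here are illustrative:

```go
import "encoding/json"

// jobResponse is the shape a worker now receives: a job type plus
// opaque, type-specific arguments.
type jobResponse struct {
	Type string          `json:"type"` // e.g. "osbuild"
	Args json.RawMessage `json:"args"` // decoded into a per-type args struct
}
```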
The worker API returns errors of the form:
{ "message": "..." }
Print the message of those errors when receiving an error on the client.
This adds an `Error` type to openapi.yml and marks all routes as
returning it on 4XX and 5XX.
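On the client, surfacing that message is then just a matter of decoding
the body, roughly like this sketch:

```go
import (
	"encoding/json"
	"fmt"
	"net/http"
)

// errorFromResponse decodes the worker API's { "message": "..." }
// error body and turns it into a Go error.
func errorFromResponse(resp *http.Response) error {
	var apiErr struct {
		Message string `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&apiErr); err != nil {
		return fmt.Errorf("request failed with status %d", resp.StatusCode)
	}
	return fmt.Errorf("request failed: %s", apiErr.Message)
}
```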
Instead of sending a `token` to workers, send back two URLs:
1. "location": URL at which the job can be inspected (GET) and updated
(PATCH).
2. "artifact_location": URL at which artifacts should be uploaded to.
The actual URLs remain the same, but a client does not need to stitch
them together manually (except appending the artifact's name).
Unfortunately, the client code generated by `deepmap` does not lend
itself to this style of APIs. Use standard http.Client again, which is a
partial revert of 0962fbd30.
Don't give out job ids to workers, but `tokens`, which serve as an
indirection. This way, restarting composer won't confuse it when a stray
worker returns a result for a job that was still running. Also,
artifacts are only moved to the final location once a job finishes.
This change breaks backwards compatibility, but we're not yet promising
a stable worker API to anyone.
This drops the transition tests in server_test.go. These don't make much
sense anymore, because there's only one allowed transition, from running
to finished. They heavily relied on job slot ids, which are not easily
accessible with the `TestRoute` API. Overall, adjusting this seemed like
too much work for their benefit.
The code generator uses the `operationID` field to generate server
handlers, client functions, and types. Use simpler names to make the
generated code easier to read.
In the same way `osbuild.Manifest` is the input to the osbuild API,
`osbuild.Result` is the output. Move it to the `osbuild` package where
it belongs.
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than Manifest() returning an osbuild.Manifest object, introduce a
new distro.Manifest object which represents it as an opaque, JSON
serializable object. This new type has the following properties:
1) its serialization is compatible with the input to osbuild,
2) any valid osbuild input can be deserialized into it, and
3) marshalling and unmarshaling to and from JSON is lossless.
This means that even as we change the subset of valid osbuild manifests
that we support, we can still load any previous state from disk, and it
will continue to work just as before, even though we can no longer
deserialize it into our internal notion of osbuild.Manifest.
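A minimal sketch of such a type, using json.RawMessage to get the
lossless round-trip; the real definition may differ:

```go
import "encoding/json"

// Manifest is an opaque, JSON-serializable osbuild manifest. Keeping
// the raw bytes means any valid osbuild input survives a load/store
// cycle unchanged, even if our typed model can no longer represent it.
type Manifest json.RawMessage

// MarshalJSON emits the stored bytes verbatim.
func (m Manifest) MarshalJSON() ([]byte, error) {
	return json.RawMessage(m).MarshalJSON()
}

// UnmarshalJSON keeps a copy of the raw input, accepting any valid JSON.
func (m *Manifest) UnmarshalJSON(data []byte) error {
	*m = Manifest(append([]byte(nil), data...))
	return nil
}
```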
This fixes the underlying problem of which #685 was a symptom.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The `jobs/:job_id/builds/:build_id/image` route was awkward: the
`:job_id` was actually weldr's compose id and `:build_id` was always `0`.
Change it to `jobs/:job_id/artifacts/:name`, where `:job_id` is now a
job id, and `:name` is the name of the artifact to upload. In the
future, it could support uploading more than one artifact.
This allows removing outputs from `store`, which is now back to being a
pure JSON-store. Take care that `weldr` returns (and deletes) images
from the new (or for backwards compatibility, the old) location.
The `org.osbuild.local` target continues to exist as a marker for the
worker to know whether it should upload artifacts.
The store is responsible for two things: user state and the compose queue. This
is problematic, because the rcm API has slightly different semantics from weldr
and only used the queue part of the store. Also, the store is simply too
complex.
This commit splits the queue part out, using the new jobqueue package in both
the weldr and the rcm package. The queue is saved to a new directory `queue/`.
The weldr package now also has access to a worker server to enqueue and list
jobs. Its store continues to track composes, but the `QueueStatus` for each
compose (and image build) is deprecated. The field in `ImageBuild` is kept for
backwards compatibility for composes which finished before this change, but a
lot of code dealing with it in package compose is dropped.
store.PushCompose() is degraded to storing a new compose. It should probably be
renamed in the future. store.PopJob() is removed.
Job ids are now independent of compose ids. Because of that, the local
target gains ComposeId and ImageBuildId fields, because a worker cannot
infer those from a job anymore. This also necessitates a change in the
worker API: the job routes are changed to expect that instead of a
(compose id, image build id) pair. The route that accepts built images
keeps that pair, because it reports the image back to weldr.
worker.Server() now interacts with a job queue instead of the store. It gains
public functions that allow enqueuing an osbuild job and getting its status,
because only it knows about the specific argument and result types in the job
queue (OSBuildJob and OSBuildJobResult). One oddity remains: it needs to report
an uploaded image to weldr. Do this with a function that's passed in for now,
so that the dependency to the store can be dropped completely.
The rcm API drops its dependencies to package blueprint and store, because it
too interacts only with the worker server now.
Fixes #342
This package does not contain an actual queue, but a server and client
implementation for osbuild's worker API. Name it accordingly.
The queue is in package `store` right now, but about to be split off.
This rename makes the `jobqueue` name free for that effort.