Deleting a system source was being passed through to the non-system
delete function without returning an error. The handler now checks for
system repos first and returns a 400 SystemSource error response if the
source is in the system list.
This changes store.DeleteSource to DeleteSourceByName for v0 use and
DeleteSourceByID for v1 use.
It includes a new client function DeleteSourceV1, adds a new test, and
converts the tests for the previous Source V1 API commits to use
DeleteSourceV1.
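A minimal sketch of how the two delete variants could differ, assuming a
simplified Store with a Sources map keyed by id and a trimmed-down
SourceConfig; these shapes are illustrative, not the real definitions:

```go
package store

import "sync"

// SourceConfig is a trimmed-down illustration; the real struct has more fields.
type SourceConfig struct {
	Name string `json:"name" toml:"name"`
	Type string `json:"type" toml:"type"`
	URL  string `json:"url" toml:"url"`
}

// Store keeps sources keyed by id; for v0 the key happens to equal the Name.
type Store struct {
	mu      sync.Mutex
	Sources map[string]SourceConfig
}

// DeleteSourceByName removes sources whose Name field matches (v0 behavior).
func (s *Store) DeleteSourceByName(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for id, source := range s.Sources {
		if source.Name == name {
			delete(s.Sources, id)
		}
	}
}

// DeleteSourceByID removes the source stored under the given map key (v1 behavior).
func (s *Store) DeleteSourceByID(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.Sources, id)
}
```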
This commit splits store.GetAllSources into GetAllSourcesByName and
GetAllSourcesByID to distinguish between getting the sources by the
Name field and by the ID (the key to the map).
SourceConfig.RepoConfig() now takes an id parameter because SourceConfig
only stores the Name, not the ID.
In weldr I split the sourceInfoHandler into 2 separate functions for v0
and v1 behavior, with the core of the old function refactored as
getSourceConfigs and used by both of them.
This also adds new structs for the SourceResponseV0 and SourceResponseV1,
as well as helper functions for converting to/from store.SourceConfig.
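A rough sketch of what the two response shapes and a conversion helper
might look like; the field sets and helper name are guesses based on the
description above, not the actual definitions:

```go
package weldr

// storeSourceConfig stands in for store.SourceConfig, trimmed to a few fields.
type storeSourceConfig struct {
	Name string
	Type string
	URL  string
}

// SourceResponseV0 identifies the source by its name only; the field set is
// a guess based on the commit description.
type SourceResponseV0 struct {
	Name string `json:"name"`
	Type string `json:"type"`
	URL  string `json:"url"`
}

// SourceResponseV1 carries an explicit id; the name becomes a descriptive field.
type SourceResponseV1 struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	Type string `json:"type"`
	URL  string `json:"url"`
}

// newSourceResponseV1 converts a store entry (map key + config) into the v1 shape.
func newSourceResponseV1(id string, cfg storeSourceConfig) SourceResponseV1 {
	return SourceResponseV1{ID: id, Name: cfg.Name, Type: cfg.Type, URL: cfg.URL}
}
```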
This commit changes the store.PushSource function to take the key as
well as the SourceConfig so that it can be used for v0 or v1.
It adds helper functions for decoding the TOML/JSON into a new
SourceConfig interface type, which lets the core source/new code be
shared between the versions.
It also adds tests for the new API behavior.
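A sketch of the idea, assuming a simplified SourceConfig interface and a
v0 request shape; the method set, content-type handling, and names are
illustrative:

```go
package weldr

import (
	"encoding/json"
	"io"

	"github.com/BurntSushi/toml"
)

// SourceConfig is an illustrative interface; it lets the shared source/new
// handler treat v0 and v1 request bodies the same way.
type SourceConfig interface {
	// Key returns the store key to push under: the Name for v0, the id for v1.
	Key() string
}

// SourceConfigV0 is a guess at the v0 request shape.
type SourceConfigV0 struct {
	Name string `json:"name" toml:"name"`
	Type string `json:"type" toml:"type"`
	URL  string `json:"url" toml:"url"`
}

func (s SourceConfigV0) Key() string { return s.Name }

// decodeSourceConfigV0 parses the request body as TOML or JSON based on the
// Content-Type header value passed in.
func decodeSourceConfigV0(body io.Reader, contentType string) (SourceConfig, error) {
	var cfg SourceConfigV0
	var err error
	if contentType == "text/x-toml" {
		_, err = toml.DecodeReader(body, &cfg)
	} else {
		err = json.NewDecoder(body).Decode(&cfg)
	}
	return cfg, err
}
```

With a shape like this, the shared handler can push the decoded config
under the key from either API version.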
This is the first patch in a series to add APIv1 support to the
/projects/source routes. The change involves using the store.Sources key
in a different way (as an id instead of as a duplicate of the struct's
Name field) but does not actually involve changing the Sources json in
the store.
In the V0 API the name of the source was used as the identifier, and
there was no short id. In V1 the source is identified by the API using
a short id, and the Name is just a field in the struct to describe the
source. This will become more obvious with the /projects/source/info
response.
This commit changes the following:
- Changes store.ListSources to ListSourcesByName and explicitly pulls
  the name from the source struct instead of the key. v0 will use this
  function call.
- Adds store.ListSourcesById, which returns the source key as the
  identifier. This is used by v1 (both list variants are sketched
  below).
- Adds a new weldr.SourcesListV1 response type, even though it is
  exactly the same as the V0 response in this specific case. I thought
  it would be better to have one called V1 than to reuse the V0 struct
  and possibly confuse people.
- The /projects/source/list API now lists the sources by name for v0
  and by id for v1.
A test has been added. You will notice it still uses v0 to push and
delete the sources. These will be updated when the new versions of the
functions are added in subsequent commits.
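A minimal sketch of the two list variants, building on the same kind of
simplified Store shape as the delete sketch above; the real functions in
the store package may differ in detail:

```go
package store

import "sort"

// ListSourcesByName returns the Name field of every source (used by v0).
func (s *Store) ListSourcesByName() []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	names := []string{}
	for _, source := range s.Sources {
		names = append(names, source.Name)
	}
	sort.Strings(names)
	return names
}

// ListSourcesById returns the map keys, which v1 treats as source ids.
func (s *Store) ListSourcesById() []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	ids := []string{}
	for id := range s.Sources {
		ids = append(ids, id)
	}
	sort.Strings(ids)
	return ids
}
```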
It is used in the rest of the code as a name to represent the
repository in the weldr API. Rename it to match that use, and avoid
confusion with the ID passed to dnf-json, which is not the same.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The `jobs/:job_id/builds/:build_id/image` route was awkward: the
`:job_id` was actually weldr's compose id and `:build_id` was always `0`.
Change it to `jobs/:job_id/artifacts/:name`, where `:job_id` is now a
job id, and `:name` is the name of the artifact to upload. In the
future, it could support uploading more than one artifact.
This allows removing outputs from `store`, which is now back to being a
pure JSON-store. Take care that `weldr` returns (and deletes) images
from the new (or for backwards compatibility, the old) location.
The `org.osbuild.local` target continues to exist as a marker for the
worker to know whether it should upload artifacts.
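A sketch of what handling the new route could look like, assuming an
httprouter-style mux; whether the real handler uses PUT or POST, and
where it writes artifacts, is not specified here, so treat the details
as illustrative:

```go
package worker

import (
	"io"
	"net/http"
	"os"
	"path/filepath"

	"github.com/julienschmidt/httprouter"
)

// registerArtifactRoute wires up jobs/:job_id/artifacts/:name; artifactsDir
// is a placeholder for wherever the server stores uploads.
func registerArtifactRoute(router *httprouter.Router, artifactsDir string) {
	router.PUT("/jobs/:job_id/artifacts/:name", func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		jobID := ps.ByName("job_id")
		name := filepath.Base(ps.ByName("name")) // avoid path traversal

		dir := filepath.Join(artifactsDir, jobID)
		if err := os.MkdirAll(dir, 0755); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		f, err := os.Create(filepath.Join(dir, name))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer f.Close()

		// Stream the uploaded artifact straight to disk.
		if _, err := io.Copy(f, r.Body); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	})
}
```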
Empty names are not allowed, and blueprint names should only contain
characters matching: ^[a-zA-Z0-9._-]+$
This also adds tests for the various places where the blueprint name
could potentially be wrong.
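A minimal sketch of the check, assuming a helper in the blueprint
package; the function name and error messages are illustrative:

```go
package blueprint

import (
	"fmt"
	"regexp"
)

// validName matches the characters allowed in a blueprint name.
var validName = regexp.MustCompile(`^[a-zA-Z0-9._-]+$`)

// checkName rejects empty names and names containing other characters. The
// explicit empty check is redundant with the regexp but gives a clearer error.
func checkName(name string) error {
	if len(name) == 0 {
		return fmt.Errorf("empty blueprint name not allowed")
	}
	if !validName.MatchString(name) {
		return fmt.Errorf("invalid characters in blueprint name %q", name)
	}
	return nil
}
```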
A timeout doesn't make sense on this level, because it is very difficult
to estimate how long downloading rpm metadata takes. Drop it completely
in favor of higher-level timeouts in the test runner.
Fixes #601
The store is responsible for two things: user state and the compose queue. This
is problematic, because the rcm API has slightly different semantics from weldr
and only used the queue part of the store. Also, the store is simply too
complex.
This commit splits the queue part out, using the new jobqueue package in both
the weldr and the rcm package. The queue is saved to a new directory `queue/`.
The weldr package now also has access to a worker server to enqueue and list
jobs. Its store continues to track composes, but the `QueueStatus` for each
compose (and image build) is deprecated. The field in `ImageBuild` is kept for
backwards compatibility for composes which finished before this change, but a
lot of code dealing with it in package compose is dropped.
store.PushCompose() is degraded to storing a new compose. It should probably be
renamed in the future. store.PopJob() is removed.
Job ids are now independent of compose ids. Because of that, the local
target gains ComposeId and ImageBuildId fields, since a worker cannot
infer those from a job anymore. This also necessitates a change in the
worker API: the job routes are changed to expect a job id instead of a
(compose id, image build id) pair. The route that accepts built images
keeps that pair, because it reports the image back to weldr.
worker.Server() now interacts with a job queue instead of the store. It gains
public functions that allow enqueuing an osbuild job and getting its status,
because only it knows about the specific argument and result types in the job
queue (OSBuildJob and OSBuildJobResult). One oddity remains: it needs to report
an uploaded image to weldr. Do this with a function that's passed in for now,
so that the dependency on the store can be dropped completely.
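A rough sketch of the shape described above; only OSBuildJob and
OSBuildJobResult are taken from the text, while the jobqueue interface,
field names, and method names are guesses:

```go
package worker

import (
	"io"

	"github.com/google/uuid"
)

// OSBuildJob and OSBuildJobResult are the argument and result types mentioned
// above; their fields are omitted here.
type OSBuildJob struct{}
type OSBuildJobResult struct{}

// jobQueue stands in for the new jobqueue package; this method set is a guess.
type jobQueue interface {
	Enqueue(jobType string, args interface{}) (uuid.UUID, error)
	JobResult(id uuid.UUID, result interface{}) error
}

// Server wraps the job queue. The image callback is passed in by the caller so
// the worker server no longer needs a dependency on the store.
type Server struct {
	jobs         jobQueue
	imageWritten func(composeID uuid.UUID, imageBuildID int, image io.Reader) error
}

// EnqueueOSBuild submits an osbuild job and returns its id, which is now
// independent of any compose id.
func (s *Server) EnqueueOSBuild(job OSBuildJob) (uuid.UUID, error) {
	return s.jobs.Enqueue("osbuild", job)
}

// OSBuildJobStatus fetches the result of a previously enqueued job.
func (s *Server) OSBuildJobStatus(id uuid.UUID) (*OSBuildJobResult, error) {
	var result OSBuildJobResult
	if err := s.jobs.JobResult(id, &result); err != nil {
		return nil, err
	}
	return &result, nil
}
```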
The rcm API drops its dependencies on package blueprint and store, because it
too interacts only with the worker server now.
Fixes #342
os.Exit() doesn't execute defer-ed functions; on the other hand, we
call panic() in case of setup errors, which does run defer-ed
functions. So move the actual test setup and execution into a separate
function which will execute all defer-ed functions and return the exit
code from the test suite to TestMain.
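A minimal sketch of the pattern; the package name and the setup inside
run() are only placeholders:

```go
package client

import (
	"io/ioutil"
	"os"
	"testing"
)

// run holds the real setup and teardown: a panic here still unwinds through
// the defers, and the return value carries the suite's exit code to TestMain.
func run(m *testing.M) int {
	tmpdir, err := ioutil.TempDir("", "test-")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(tmpdir)

	// ... set up the server under test, etc. ...

	return m.Run()
}

// TestMain only calls os.Exit, which would skip defer-ed functions if they
// lived in this function.
func TestMain(m *testing.M) {
	os.Exit(run(m))
}
```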
Also adds new types to weldr/json.go to support them.
ComposeEntry had to be duplicated instead of used as-is because it
enforces image type strings that do not match what the API uses (the API
types are all lower case, the internal names are capitalized).
Currently, if a TOML source is added with no name, or the source is
incorrectly placed inside a [section], it will add an empty source,
causing depsolving to crash.
This adds tests for 'name' and 'type' fields as a minimum requirement,
and returns an API error if they are empty or missing.
This also includes unit and integration tests.
Closes PR#462
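A sketch of the minimum-field check described above, using a trimmed-down
SourceConfig; the function name and error text are illustrative:

```go
package weldr

import "errors"

// SourceConfig is trimmed to the fields relevant to the check.
type SourceConfig struct {
	Name string `json:"name" toml:"name"`
	Type string `json:"type" toml:"type"`
	URL  string `json:"url" toml:"url"`
}

// checkSourceConfig rejects sources whose required fields are missing. A TOML
// body with no name, or with the fields nested inside a [section], decodes to
// empty strings and is rejected instead of being stored as an empty source.
func checkSourceConfig(cfg SourceConfig) error {
	if cfg.Name == "" {
		return errors.New("'name' field is missing or empty")
	}
	if cfg.Type == "" {
		return errors.New("'type' field is missing or empty")
	}
	return nil
}
```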
The mock server used by unit tests is slightly different from the
running server, mostly related to package names that are hard-coded.
This adds a bool to testState that can be used in the tests to alter the
expected behavior. It should be used as little as possible.
With this change the integration tests can now also be run as unit tests
against the mocked server. The way it works is this:
internal/client/unit_test.go sets up the mock server and is built
when the `integration` build tag is *not* included.
internal/client/integration_test.go sets up the connection to an
existing server and is built when the `integration` build tag *is*
included.
The test code is built and run for both cases.
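Roughly what the two file headers look like under this scheme; the
comments describing each file's role are paraphrased from above:

```go
// internal/client/unit_test.go: built for unit tests only.
// +build !integration

package client

// ...starts the mock API server that the shared tests talk to...
```

```go
// internal/client/integration_test.go: built only with `go test -tags integration`.
// +build integration

package client

// ...connects to an already running WELDR API server...
```

Running `go test` on the package exercises the mock-server path; adding
`-tags integration` switches the build to the file that talks to a
running server.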
Currently they all pass for the integration test run. The unit test
cases need some work because the mocked server isn't a real server with
real depsolving and package lists. A future commit will fix this.
Copy weldrcheck's utils.go into client, switch to using TestState struct
to hold global test data. Only build unit_test.go if integration has not
been selected.
This is in preparation for moving weldrcheck code into client *_test.go
files so that the test code can be shared and run against a mock server
during unit testing, or against a running WELDR API server during
integration testing.
weldr needs to know the host architecture. Rather than pinning
a string, pin a real Arch object, and query its name when we
need it.
This verifies the validity of the architecture for the given
distro before it is passed to weldr, rather than lazily on
demand.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Mixing the way to build a distribution with where to get the source
packages from is wrong: it breaks pre-release repos, local mirrors, and
other use cases. To accommodate those, we introduced
`/etc/osbuild-composer/repositories`.
However, that doesn't work for the RCM API, which receives repository
URLs to use from outside requests. This API has been wrongly using the
`additionalRepos` parameter to inject those repos. That's broken,
because the resulting manifests contained both the installed repos and
the repos from the request.
To fix this, stop exposing repositories from the distros, but require
passing them on every call to `Manifest()`. This makes `additionalRepos`
redundant.
Fixes #341
This converts the client and weldrcheck functions to use http.Client for
connections instead of passing around the socket path and opening it for
each test.
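A minimal sketch of building such a client over the API's unix socket;
the helper name and the example socket path are assumptions, not the
real code:

```go
package client

import (
	"context"
	"net"
	"net/http"
)

// NewClient returns an http.Client whose connections are dialed over the
// given unix socket instead of TCP.
func NewClient(socket string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				// Ignore the network/addr derived from the URL and dial the socket.
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}
}
```

With this, a test can do `c := NewClient("/run/weldr/api.socket")` (the
socket path here is just an example) and then call
`c.Get("http://localhost/api/status")`; the host part of the URL is
ignored, only the path matters.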
Currently the responses are all embedded in the weldr API functions.
They need to be usable by the client helper functions so I'm copying
them here and giving them names.
A later commit will go through the API and refactor it to use these
instead of the embedded ones.
This package will contain functions for communicating with the API
that can be used both in the integration tests and in a future cmdline
tool similar to composer-cli.
When possible the client functions will return the same structures
used by weldr/api.go, which have been exported in weldr/json.go.