Convert some of the fields in the `RepoConfig` struct
to pointers. Since `RepoConfig` will be used to convert
custom repositories to an array of `osbuild.YumRepository`,
we need to ensure that fields that are not set explicitly
are not saved to the `/etc/yum.repos.d` repository files.
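A minimal sketch of the idea (the field set is illustrative, not the
full struct): a nil pointer marks a field as unset, so the repo file
writer can skip the key entirely.

    type RepoConfig struct {
        Name     string  // always present
        BaseURL  *string // nil means "not set": skip baseurl=
        CheckGPG *bool   // nil means "not set": skip gpgcheck=
    }

    func writeRepoFile(w io.Writer, c RepoConfig) {
        fmt.Fprintf(w, "[%s]\n", c.Name)
        if c.BaseURL != nil {
            fmt.Fprintf(w, "baseurl=%s\n", *c.BaseURL)
        }
        if c.CheckGPG != nil {
            v := 0
            if *c.CheckGPG {
                v = 1
            }
            fmt.Fprintf(w, "gpgcheck=%d\n", v)
        }
    }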
Update the internal RepoConfig object to
accept a slice of baseurls rather than a
single field. This change was needed to
align RepoConfig with the dnf spec [1].
Additionally, this change adds custom json
marshal and unmarshal functions to ensure
backwards compatibility with older workers.
Add json tags to the internal rpmmd config
since this is serialized in dnfjson.
Add unit tests to verify that the serialization is correct.
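A hedged sketch of one possible compatibility scheme (field and type
names are assumptions, not the actual code):

    // Wire format understood by both old and new workers.
    type repoConfigJSON struct {
        BaseURL  string   `json:"baseurl,omitempty"`  // legacy single URL
        BaseURLs []string `json:"baseurls,omitempty"` // new dnf-style list
    }

    func (r RepoConfig) MarshalJSON() ([]byte, error) {
        v := repoConfigJSON{BaseURLs: r.BaseURLs}
        if len(r.BaseURLs) > 0 {
            v.BaseURL = r.BaseURLs[0] // keep older workers functional
        }
        return json.Marshal(v)
    }

    func (r *RepoConfig) UnmarshalJSON(data []byte) error {
        var v repoConfigJSON
        if err := json.Unmarshal(data, &v); err != nil {
            return err
        }
        r.BaseURLs = v.BaseURLs
        if len(r.BaseURLs) == 0 && v.BaseURL != "" {
            r.BaseURLs = []string{v.BaseURL} // accept the old field too
        }
        return nil
    }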
[1] See dnf.config
The search command is more complicated than depsolve and dump. It needs
to return results based on the requested package names and globs.
Add a number of mock responses for the new search command, including
search results, all packages, and error responses that are triggered by
using special package names: nonexistingpkg, badpackage1, baddepsolve.
The repository checksums in the response from dnf-json aren't used
anywhere. Since we're making changes to dnf-json and depsolving, now is
a good opportunity to drop them completely.
Define a Hash() method on rpmmd.RepoConfig that calculates a SHA-256 ID
for a repository based on its configuration. Identical configurations
produce the same ID. The Name and ImageTypeTags of a repository aren't
taken into account, since these attributes don't affect a repository's
functional configuration.
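A sketch of the shape of the method (the exact field list and
separator are assumptions):

    func (r RepoConfig) Hash() string {
        h := sha256.New()
        // Name and ImageTypeTags are deliberately left out: they don't
        // change what the repository actually serves.
        fmt.Fprintf(h, "%v|%v|%v", r.BaseURLs, r.Metalink, r.MirrorList)
        return fmt.Sprintf("%x", h.Sum(nil))
    }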
This ID lets us change the way we handle repository configurations in a
few places:
- Preparing the depsolve job arguments is simpler since we have
predictable IDs for the repository configurations. We don't need to
rely on the index of a RepoConfig in a list to identify or access it,
which prevented us from building a list of all repository
configurations, since we needed them to be placed in the list in a
certain order.
- Associating packages from the depsolve result with the repository
configuration (in depsToRPMMD) no longer relies on an ID string
converted from and back to an integer index. Repositories define
their own IDs.
- Tests are a bit messier now but the changes simplify the main code, so
it's an acceptable trade-off.
- Fixtures need to change based on the repository configuration for
the test.
- We need to calculate the ID for the repository configuration for
the temporary file server URL.
The rpmrepo mock contains code to be used for testing depsolving. It
creates a file server that serves the metadata in test/data/testrepo and
can be used as a repository for depsolve tests.
The dnfjson tests perform a single depsolve with an expected response.
The chain depsolve tests perform multiple depsolves that should produce
the same expected response:
- Single transaction using the ChainDepsolve() function
- Two transactions for the same packages split in two with no extra
repositories
- Two transactions for the same packages split in two with the main
repository redefined
All calls to rpmmd.Depsolve() are now replaced with the equivalent call
to solver.Depsolve() (or dnfjson.Depsolve() for one-off calls).
Attached an unconfigured dnfjson.BaseSolver to all APIs and server
configurations where rpmmd.RPMMD used to be. This BaseSolver instance
loads the repository credentials from the system and carries the cache
directory, much like the RPMMD field used to do. The BaseSolver is used
to create an initialised (configured) solver with the platform variables
(module platform ID, release ver, and arch) before running a Depsolve()
or FetchMetadata() using the NewWithConfig() method.
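The resulting call pattern looks roughly like this (argument order is
approximate):

    // One BaseSolver per server: it owns the cache directory and the
    // system repository credentials.
    base := dnfjson.NewBaseSolver(cacheDir)

    // Configure a solver with the platform variables before each
    // operation.
    solver := base.NewWithConfig(modulePlatformID, releaseVer, arch)
    pkgSpecs, err := solver.Depsolve(packageSets)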
The FillDependencies() call in the modulesInfoHandler() of the weldr API
has been replaced by a direct call to the Depsolve() function. This
rpmmd function was only used here. Replacing the rpmmd.Depsolve() call
in rpmmd.FillDependencies() with dnfjson.Depsolve() would have created
an import cycle. The FillDependencies() function could have been moved
to dnfjson, but since it's only used in one place, moving the one-line
function body into the caller is ok.
For testing:
The mock-dnf-json is compiled to a temporary directory during test
initialisation and used for each Depsolve() or FetchMetadata() call.
The weldr API tests now use the mock dnfjson. Each rpmmd_mock.Fixture
now also has a dnfjson_mock.ResponseGenerator.
All API calls in the tests use the proper functions from dnfjson and
only the dnf-json script is mocked. Because of this, some of the
expected results in responses_test had to be changed to match correct
behaviour:
- The "builds" array of each package in the result of a module or
project list is now sorted by version number (ascending) because we
sort the package list in the result of dnfjson by NVR.
- 'check_gpg: true' is added to the expected response of the depsolve
test. The repository configs in the test weldr API specify 'CheckGPG:
True', but the mock responses returned it as false, so the expected
result didn't need to include it. Since now we're using the actual
dnfjson code to convert the mock response to the internal structure,
the repository settings are correctly used to set the flag to true for
each package associated with that repository.
- The word "occurred" was mistyped as "occured" in rpmmd and is now
fixed in dnfjson.
The cases are directly copied (or lightly adapted) from
rpmmd_mock/fixtures.
The purpose of the mocks/dnfjson package is to create files with data
for testing the dnfjson package without the need to call the dnf-json
script. Each public function creates a file with test responses in the
same format as the dnf-json script's responses (either valid results or
errors). The dnfjson.Solver can be configured to call the new
./cmd/mock-dnf-json program with the test data file as an argument and a
valid dnf-json request for input. The mock-dnf-json checks the input
request for unknown fields before responding with the contents of the
file.
Each test case file contains two responses, one for each command
supported by dnf-json: "depsolve" and "dump". mock-dnf-json responds
with the appropriate data based on the command in the request. This is
necessary for tests that require both commands in the same call, e.g.,
tests of the weldr API's moduleInfoHandler() which fetches a package
list and then needs to depsolve a subset of those packages.
There are also cases when we want one of the two responses to be an
error. The mock-dnf-json program will return with an error code if it
can successfully unmarshal the intended response into the dnfjson.Error
type.
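In outline, the mock's dispatch works along these lines (struct shapes
trimmed; the real mock decodes the full request type):

    var req struct {
        Command string `json:"command"` // "depsolve" or "dump"
    }
    dec := json.NewDecoder(os.Stdin)
    dec.DisallowUnknownFields() // fail on requests with unknown fields
    if err := dec.Decode(&req); err != nil {
        os.Exit(1)
    }
    raw := testData[req.Command] // canned response loaded from the file
    os.Stdout.Write(raw)
    var dnfErr dnfjson.Error
    if json.Unmarshal(raw, &dnfErr) == nil {
        os.Exit(1) // the canned response is an error: fail like dnf-json
    }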
Add a convenience method `DepsolvePackageSets()` to the `RPMMD`
interface. The method is expected to depsolve all provided package sets
in a chain or separately, based on the provided arguments, and return
depsolved PackageSpecs sets.
The intention is to have a single implementation of how package sets are
depsolved and then use it from all places in composer (API and tools
implementations).
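An approximation of the added method (the real signature may carry more
arguments, e.g. per-set repository overrides):

    type RPMMD interface {
        // ...existing methods elided...

        // DepsolvePackageSets depsolves the given package sets. Sets
        // named in packageSetsChains are depsolved as a chain, each
        // transaction building on the previous one; the rest are
        // depsolved separately.
        DepsolvePackageSets(
            packageSetsChains map[string][]string,
            packageSets map[string]PackageSet,
            repos []RepoConfig,
        ) (map[string][]PackageSpec, error)
    }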
Adjust the necessary mock implementations and add a unit test for the
new interface method implementation.
There is a problem with blueprint changes: once the server is restarted,
the previous changes are all lost because they are not serialized to
disk.
This adds test fixture support so that new tests can be added before
fixing the problem. It adds store.FixtureOldChanges with blueprint
changes and empty blueprints.
Related: rhbz#1922845
This is backwards compatible, as long as the timeout is 0 (never
timeout), which is the default.
In the case of dbjobqueue, the underlying timeout is due to
context.Canceled, context.DeadlineExceeded, or net.Error with Timeout()
true. For the fsjobqueue only the first two are considered.
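A sketch of the classification (not the actual helper):

    import (
        "context"
        "errors"
        "net"
    )

    func isTimeout(err error) bool {
        if errors.Is(err, context.Canceled) ||
            errors.Is(err, context.DeadlineExceeded) {
            return true // considered by both queue implementations
        }
        // Only dbjobqueue, which talks to the database over the
        // network, additionally treats net.Error timeouts as timeouts.
        var nerr net.Error
        return errors.As(err, &nerr) && nerr.Timeout()
    }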
The problem: osbuild-composer used to have rather incomplete logic for
selecting client certificates and keys while fetching data from
repositories that use the "subscription model". In this scenario, every
repo requires the user to use a client-side TLS certificate. The problem
is that every repo can use its own CA and require a different
certificate and key pair. This case wasn't handled at all in composer.
Furthermore, osbuild-composer can use remote workers which complicates
things even more.
Assumptions: The problem outlined above is hard to solve in the general
case, but Red Hat Subscription Manager places certain limitations on how
subscriptions might be used. For example, a subscription must be tied to
a host system, so there is no way to use such a repository in osbuild-composer
without it being available on the host system as well.
Also, if a user wishes to use a certain repository in osbuild-composer it
must be available on both hosts: the composer and the worker. It will come
with a different client certificate and key pair, but otherwise its
configuration remains the same.
The solution: Expect all the subscriptions to be registered in the
/etc/yum.repos.d/redhat.repo file. Read the mapping of URLs to certificates
and keys from there and use it. Don't change the manifest format and let
osbuild guess the appropriate subscription to use.
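Conceptually (parseINI is a hypothetical helper; sslclientcert and
sslclientkey are the real dnf repo options):

    type subscription struct {
        Cert string // path to the client TLS certificate
        Key  string // path to the matching private key
    }

    // Map each repository URL to its entitlement cert/key pair.
    secrets := map[string]subscription{}
    for _, sec := range parseINI("/etc/yum.repos.d/redhat.repo") {
        secrets[sec["baseurl"]] = subscription{
            Cert: sec["sslclientcert"],
            Key:  sec["sslclientkey"],
        }
    }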
Composer does not have a 1:1 mapping between possible host distro names
and the names of the supported distributions held in the
distroregistry. The host distro's `Name()` method, as passed to the
Weldr API, does not return the same name as the one used as the distro
name for repository definitions. This makes it hard to use
`distro.Distro` and `distro.Arch` directly and rely on the names they
return.
Add `New*HostDistro()` to all distro definitions, accepting the name
that should be returned by the distro's `Name()` method. This is useful
mainly if the host distro is a Beta or Stream variant of the distro.
Change the distroregistry.Registry to contain the host distro as a
separate value, set when creating it using the `New()` function. This
value is returned by the `Registry.FromHost()` method. Determining the
host distro is
handled by the `NewDefault()` function. Move the distro name mangling to
distroregistry package. Add relevant unit tests.
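Usage ends up looking roughly like this (constructor names are
illustrative):

    // The host distro may be a renamed (e.g. Beta) variant of a
    // supported distro; all other distros keep their canonical names.
    hostDistro := fedora33.NewHostDistro("fedora-33-beta")
    registry, err := distroregistry.New(hostDistro, allDistros...)
    if err != nil {
        return err
    }
    d := registry.FromHost() // returns the host distro set above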
Signed-off-by: Tomas Hozza <thozza@redhat.com>
fedoratest was yet another dummy distribution used by unit tests. After
the rework of test_distro, there is no reason not to use it as the only
distro implementation for testing purposes.
Remove fedoratest distro and replace it with test_distro in all affected
tests.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
My goal is to add a method to distroregistry to return Registry with
all supported distributions. This way, all supported distributions
would be defined in only one place.
To achieve this, the Registry must live outside the distro package,
because the distro implementations depend on that package and this
would create a circular dependency, which Go does not allow.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
This replaces Packages() and BuildPackages() by returning a map of
package sets, the semantics of which are up to the distro to define.
They are meant to be depsolved and the result returned back as a
map to Manifest(), with the same keys.
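In sketch form (names other than Manifest() are placeholders):

    // The distro decides which package sets exist and what they mean.
    sets := imageType.PackageSets(bp) // map[string]rpmmd.PackageSet

    // The caller depsolves each set and hands the results back, keyed
    // the same way.
    solved := make(map[string][]rpmmd.PackageSpec, len(sets))
    for name, set := range sets {
        specs, err := depsolve(set) // placeholder for the depsolver
        if err != nil {
            return err
        }
        solved[name] = specs
    }
    manifest, err := imageType.Manifest(customizations, options, repos,
        solved, seed)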
No functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
testjobqueue did not implement the JobQueue interface correctly (noted
in its package comment), making it impossible to write tests for
JobQueue itself.
Replace its use everywhere with fsjobqueue operating on a temporary
directory.
The `jobs/:job_id/builds/:build_id/image` route was awkward: the
`:job_id` was actually weldr's compose id and `:build_id` was always `0`.
Change it to `jobs/:job_id/artifacts/:name`, where `:job_id` is now a
job id, and `:name` is the name of the artifact to upload. In the
future, it could support uploading more than one artifact.
This allows removing outputs from `store`, which is now back to being a
pure JSON-store. Take care that `weldr` returns (and deletes) images
from the new (or for backwards compatibility, the old) location.
The `org.osbuild.local` target continues to exist as a marker for the
worker to know whether it should upload artifacts.
This way we can make more of the store fields and types private in
follow-up commits.
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The store is responsible for two things: user state and the compose queue. This
is problematic, because the rcm API has slightly different semantics from weldr
and only used the queue part of the store. Also, the store is simply too
complex.
This commit splits the queue part out, using the new jobqueue package in both
the weldr and the rcm package. The queue is saved to a new directory `queue/`.
The weldr package now also has access to a worker server to enqueue and list
jobs. Its store continues to track composes, but the `QueueStatus` for each
compose (and image build) is deprecated. The field in `ImageBuild` is kept for
backwards compatibility for composes which finished before this change, but a
lot of code dealing with it in package compose is dropped.
store.PushCompose() is degraded to storing a new compose. It should probably be
renamed in the future. store.PopJob() is removed.
Job ids are now independent of compose ids. Because of that, the local
target gains ComposeId and ImageBuildId fields, because a worker cannot
infer those from a job anymore. This also necessitates a change in the
worker API: the job routes are changed to expect that instead of a
(compose id, image build id) pair. The route that accepts built images
keeps that pair, because it reports the image back to weldr.
worker.Server() now interacts with a job queue instead of the store. It gains
public functions that allow enqueuing an osbuild job and getting its status,
because only it knows about the specific argument and result types in the job
queue (OSBuildJob and OSBuildJobResult). One oddity remains: it needs to report
an uploaded image to weldr. Do this with a function that's passed in for now,
so that the dependency on the store can be dropped completely.
The rcm API drops its dependencies on package blueprint and store, because it
too interacts only with the worker server now.
Fixes #342
The following commit will introduce support for forced architecture in
dnf-json. The APIs already have this kind of information, so we can
simply pass it to the Depsolve and FetchMetadata functions.
The same types are used in the weldr API as internally. We want
to avoid sharing serialized types like this, as it easily leads
to layering violations.
For now, just make the translation explicit; in a follow-up
we will introduce types dedicated to serialization in the weldr
API.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed Store.Distro.
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between the time we
generated the pipeline and the time we called osbuild. We achieved this
by always updating to the most recent metadata on every call to
rpmmd.Depsolve that would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Return errors from all distro's New() functions instead of logging and
returning nil. Also, return errors instead of panicking from
NewRegistry() and NewDefaultRegistry().
WithSingleDistro() doesn't follow Go's naming convention for creating
objects (New*). Rename it to NewRegistry() and rename the old
NewRegistry() to NewDefaultRegistry().
The idea is that NewRegistry() can be used to create full Registry
objects from outside the package. NewDefaultRegistry() is a convenience
function that creates a Registry with all known distros.
The name was misleading because the function could do more than just
download a package list. In PushComposeRequest it is also used to fetch
checksums for the repositories, so I decided to rename it to reflect
this usage.
If the Epoch is > 0, then it should be added to the front of the version,
separated by a colon.
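For example, a minimal sketch of the formatting rule:

    // evr renders epoch:version-release, omitting a zero epoch.
    func evr(epoch uint, version, release string) string {
        if epoch > 0 {
            return fmt.Sprintf("%d:%s-%s", epoch, version, release)
        }
        return fmt.Sprintf("%s-%s", version, release)
    }

    // evr(1, "2.4.0", "5.fc35") == "1:2.4.0-5.fc35"
    // evr(0, "2.4.0", "5.fc35") == "2.4.0-5.fc35"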
Also include a depsolve package with a non-zero Epoch and adjust the
tests accordingly.
The dependencies are not sorted, so depending on the order they were
returned in, the freeze route would or would not return the correct
results (exhibited by the version being the original glob instead of the
EVRA).
This also fixes the tests so that the depsolve results are slightly
unsorted by adding a dep-package3 to the start of the list.
The current `NewRegistry` implementation allows for nil values in the
map, but this leads to subtle bugs when using the registry. This patch
enforces non-nil values by introducing additional checks before we
insert the value into the map.
The change unfortunately breaks a lot of tests, and therefore it is
necessary to create an additional mock distro.
The new mock is used instead of the previous "real" implementation,
which used to contain nil values.
We must avoid depending on the host's state in any way. This achieves
isolation in the following ways:
- rather than the default config file, /dev/null is used
- rather than sharing the host's persistent state dir, a temporary one
is used and thrown away for each call
- the module_platform_id is set explicitly per supported distro, rather
than taken from /etc/os-release.
Optionally, the cache directory can be configured, as we may want to keep
this separate from the host, if for no other reason than accounting.
However, the cache appears to be well-behaved, so we can keep sharing
it between calls (or even with the host). This speeds things up
considerably, so this is definitely what we want.
Signed-off-by: Tom Gundersen <teg@jklm.no>
In our base distro definitions we exclude packages in addition to
including them. Extend dnf-json to support this, so we can depsolve
the base package set as well as the packages added in blueprints.
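A base package set can then be expressed along these lines (the exact
JSON keys sent to dnf-json are assumptions):

    // What to install, and what to keep out of the transaction.
    set := rpmmd.PackageSet{
        Include: []string{"@core", "kernel"},
        Exclude: []string{"dracut-config-rescue"},
    }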
Signed-off-by: Tom Gundersen <teg@jklm.no>
When support for the osbuild result was added to osbuild-composer, it
was done in a somewhat hacky way: the local target's location was reused
as a path for the result. This didn't make much sense, because we want
to store the result even when an image build has no local target.
Several past commits made the store less dependent on the local target.
The
responsibility for "holding the paths" to build artifacts was gradually
switched from the localtarget to the store while still maintaining
backwards compatibility - localtarget.Location still pointed at the
correct location.
This commit finishes the switch: local target now has no Location field.
The store is now fully responsible for managing the artifacts and paths
to them. LocalTarget is now just a simple "switch": if an image build
has it, then the worker uploads the image into the store and it's then
available for
download using the weldr API.
The compose now contains multiple image builds, but the Weldr API does not
support this feature. Use the first image build every time.
Also start using the new types instead of plain strings.
We were using fedora-30 as a test-distro and tar as test-output, but
that causes lots of churn in the tests when we refactor things. Use
the test distro instead, when generic functionality is being tested
and restrict testing of the individual distros to the distro-specific
tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Introduce a DistroRegistry object. For now this does not introduce
any functional changes, as the object is always instantiated to be
the same. However, in follow-up patches it will get options.
Signed-off-by: Tom Gundersen <teg@jklm.no>
dnf-json relies on dnf's ability to cache repository metadata. This is
important, because the API calls it quite often to serve requests for
package lists and depsolves.
However, osbuild's dnf stage always fetches new metadata, because it
doesn't have access to the host's cache. Since metadata is valid for
some time, even after a repository changed, the checksum we put in
the pipeline might be old.
Force a new metadata download when producing the pipeline. This is still
not perfect, but greatly reduces the probability of putting stale
metadata into the pipeline.
Instead of having a static repository checksum, set it dynamically from
the metadata that osbuild-composer last saw. This is implemented in
dnf-json, which returns the checksums for each repository on every call.
This enables the use of repositories that change over time, such as
fedora-updates. Note that the osbuild pipeline will break when such a
repository changes. This is intentional: pipelines have to be
reproducible.
This commit introduces basic support for the upload API. Currently, all
the routes required by cockpit-composer are supported (except for
/compose/log).
Also, the ComposeEntry struct is moved outside of the store package. I
decided to do this because it isn't connected in any way to the store;
it's more connected to the API. Due to this move there's currently a
known bug: the image size is not returned. This should be solved by
moving the Image struct inside the Compose struct in a follow-up PR.
Make osbuild-composer use FromHost() directly. Everywhere else needs to
specify the distro explicitly.
Also don't panic when a distro doesn't exist. Instead, return nil. Make
sure all callers check for that.
Prior to this commit there wasn't an easy way to populate the store.
The only way was to call the weldr API or store methods. This design
made testing of various edge cases quite hard.
This commit adds store fixtures - an easy way to define store state
before each test case.
In addition, the fixtures were refactored so that new instances are
created prior to each test. Before this change the tests were in some
cases dependent on each other.
These endpoints are similar in many ways, therefore just one commit.
Their functionality is basically the same as in lorax, except for error
messages and weird edge cases when handling trailing slashes.
closes #64, closes #65
We want to test API methods which call dnf. Unfortunately, calling dnf
is an expensive operation - it requires network access and downloading
a lot of (meta)data. This commit changes the rpmmd implementation
so that it can be mocked.