Convert weldrcheck to use the standard go testing framework along with
the github.com/stretchr/testify/require assertion package.
This also removes the cmd/osbuild-weldr-tests and builds the test binary
directly from the weldrcheck package. This makes it easier to organize
the code instead of putting it all into a single main_test.go file.
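As a rough illustration of the new style (the response type and
values here are made up for the example, not actual weldrcheck code):

    package weldrcheck

    import (
        "testing"

        "github.com/stretchr/testify/require"
    )

    // statusV0 is a made-up stand-in for an API response type.
    type statusV0 struct {
        API string `json:"api"`
    }

    func TestStatusV0(t *testing.T) {
        s := statusV0{API: "1"}
        // require fails the test immediately on mismatch,
        // replacing the usual if/t.Fatal boilerplate.
        require.Equal(t, "1", s.API)
    }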
This makes two changes simultaneously, to avoid too much churn:
- move accessors from being on the blueprint struct to the
customizations struct, and
- pass the customizations struct rather than the whole blueprint
as an argument to distro.Manifest().
@larskarlitski pointed out in a previous review that it feels
redundant to pass the whole blueprint as well as the list of
packages to the Manifest function. Indeed it is, so this
simplifies things a bit.
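Roughly, the new shape looks like this (a simplified sketch, not the
exact types from the blueprint and distro packages):

    package blueprint

    type Customizations struct {
        Hostname *string `toml:"hostname,omitempty"`
    }

    // Accessors now live on Customizations and tolerate a nil
    // receiver, so callers don't have to check for one.
    func (c *Customizations) GetHostname() *string {
        if c == nil {
            return nil
        }
        return c.Hostname
    }

distro.Manifest() then receives the *Customizations directly, plus
the package list, instead of the whole blueprint.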
Signed-off-by: Tom Gundersen <teg@jklm.no>
This was never actually used anywhere, as passing it to dnf-json
was a no-op.
We may want to reconsider the concept of a source/repo name and
how it differs from an ID, but for now drop the name.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This will eventually replace the remote_location property. The latter
pins a specific location (a specific mirror), while the former two
can together be used to re-resolve to a more suitable mirror at the
time and place the package will actually be downloaded.
Rather than pinning mirrors in the osbuild manifests, we want to be
able to include the metalink and relative locations so each worker
can use mirrors closer to them.
This would be particularly important when pipelines are rebuilt in
the future, and the best mirrors may have changed.
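A sketch of the shape this is heading towards (field names are
illustrative, not the exact ones used):

    package rpmmd

    // Alongside the pinned remote_location, a package source can
    // carry the repository metalink and the package's path relative
    // to the repository root, letting each worker re-resolve a
    // mirror close to it at download time.
    type packageSource struct {
        RemoteLocation string // pinned URL on one specific mirror
        Metalink       string // e.g. Fedora's mirror metalink
        Path           string // location relative to the repo root
    }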
Signed-off-by: Tom Gundersen <teg@jklm.no>
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between when we
generated the pipeline and when we called osbuild. We achieved this
by always updating to the most recent metadata on every call to
rpmmd.Depsolve that would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.
osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.
Among the problems with this is that it made it impossible to cache
and share the resolved metadata as well as the rpms. Moreover,
as the checksum was at the repository level, rather than at the
package level, it meant that we would refuse to build a pipeline
as soon as there were any changes at all to the repository, as we
could no longer guarantee the installed packages would be the same.
As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved metadata
can be cached between runs, and it means that we can now pin
individual packages, rather than the entire repository. This means
that as long as the rpms are still available, we are able to build
a pipeline.
The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as it would pin the repository closest to
composer, rather than the runner. Instead, we now manually select
a baseurl for each repository, which should generally be the
most useful one. Fedora helpfully provides such baseurls, so
this should work ok.
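Very roughly, the resolution step looks like this (a sketch with
illustrative names, not the actual composer code):

    package depsolve

    type pkg struct {
        Checksum string // pins the exact rpm, e.g. "sha256:..."
        Path     string // location relative to the repo root
    }

    // pinPackages resolves a concrete URL for each depsolved
    // package from the manually selected baseurl. osbuild's source
    // helper downloads (and caches) the rpms from these URLs.
    func pinPackages(baseurl string, pkgs []pkg) map[string]string {
        urls := make(map[string]string, len(pkgs))
        for _, p := range pkgs {
            urls[p.Checksum] = baseurl + "/" + p.Path
        }
        return urls
    }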
The most important thing to verify when checking this commit is
that the image info in our test-cases remains unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives us more readable output, both because it gives just a
diff rather than the whole object as a string, and because it
captures differences between the objects that their string
representation does not.
In particular, if a field is an interface I, and T implements I,
then an object of type T and a pointer to the same object can both
be assigned to a variable of type I. Either way, the JSON
representation is the same, but the objects (correctly) do not
compare equal.
This is a pain to debug.
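A self-contained demonstration of the problem:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type I interface{ M() }

    type T struct{ X int }

    func (T) M() {}

    func main() {
        t := T{X: 1}
        var a I = t  // holds a T
        var b I = &t // holds a *T
        ja, _ := json.Marshal(a)
        jb, _ := json.Marshal(b)
        fmt.Println(string(ja) == string(jb)) // true: identical JSON
        fmt.Println(a == b)                   // false: T != *T
    }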
Signed-off-by: Tom Gundersen <teg@jklm.no>
A ComposeRequest is data used to submit a compose to the store, so it
should live in that package.
Remove the json marshalling test, because ComposeRequest is never
marshalled to JSON.
This will allow using types from `distro` in the ComposeRequest struct.
The response is different for JSON and TOML requests. If it is JSON it
will always return a 200, but any blueprints with errors will be in the
errors list.
If TOML has an error it will return an error 400 with the error in a
standard API error response with status set to false.
The JSON and TOML parsers differ in how they handle an empty body so
check for a ContentLength of zero first and return a "Missing
blueprint" error to the client.
Includes updated tests for the JSON path, and new tests for empty TOML
blueprints.
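A sketch of the dispatch (not the actual handler, which writes the
structured API error responses described above):

    package weldr

    import "net/http"

    func blueprintsNewHandler(w http.ResponseWriter, r *http.Request) {
        if r.ContentLength == 0 {
            // The JSON and TOML parsers disagree on empty bodies,
            // so catch this case before parsing.
            http.Error(w, "Missing blueprint", http.StatusBadRequest)
            return
        }
        switch r.Header.Get("Content-Type") {
        case "text/x-toml":
            // TOML: a parse error returns 400 with a standard API
            // error response, status set to false.
        default:
            // JSON: always 200; blueprints with errors are
            // collected into the errors list of the response.
        }
    }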
If the blueprint doesn't exist, or the commit for the selected blueprint
doesn't exist, it will return an error.
This also fixes the blueprints/undo/ route to return the correct error
to the caller.
If an unknown blueprint or workspace is deleted, it will now return an
error.
Also fixes the blueprints DELETE handlers to return the correct error to
the client. Includes a new test.
Previously, the order in which changes were made to blueprints was
not saved. I worked around this by sorting by timestamp, but it only has 1s
resolution so it is very likely to end up with changes having the same
timestamp, especially when running tests.
This adds a new variable to the Store: a list of the commit hashes
for each blueprint, in the order they were made.
Since this is a change to the Store schema the first time the new code
is run with the old store state it needs to populate the commit list, as
best it can, with the existing data. To do that it sorts the changes for
each blueprint by timestamp and version and saves this ordering into the
new BlueprintsCommits list.
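Schematically (a sketch; field and tag names may differ, and the
real Store has many more fields):

    package store

    type Store struct {
        // BlueprintsCommits maps a blueprint name to its commit
        // hashes, in the order the commits were made. When loading
        // an old store, it is reconstructed by sorting each
        // blueprint's changes by timestamp and version.
        BlueprintsCommits map[string][]string `json:"commits"`
    }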
A POST to this route will tag the latest commit of a blueprint as a new
revision. The revision numbers start at 1 and increment on each call.
If the latest commit has already been tagged it ignores the request.
This converts the client and weldrcheck functions to use http.Client for
connections instead of passing around the socket path and opening it for
each test.
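The trick is to point the client's transport at the unix socket, so
every request reuses one client; roughly:

    package client

    import (
        "context"
        "net"
        "net/http"
    )

    // New returns an http.Client that dials the weldr unix socket
    // regardless of the host in the request URL.
    func New(socketPath string) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", socketPath)
                },
            },
        }
    }

Tests can then call, for example,
New("/run/weldr/api.socket").Get("http://localhost/api/status").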
A manifest is a struct made up of a pipeline and a sources object. So
far all our sources objects are empty, but we have moved from
using pipelines to manifests everywhere, in preparation for
generating pipelines that require sources.
Make the same change in the test cases.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is to avoid any confusion with the Compose struct in the store,
which contains the pinned rpmmd data and the pipeline, among other
things.
The struct in the test cases represents the user input to the compose
route, so rename it 'compose-request', to make that clearer.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This is unused for now, but we support passing the actual package specs
returned from rpmmd.Depsolve() to distro.Pipeline(), so we should support
doing that also in the tests.
So far all our distros ignore the passed in packages, so no test-cases
make use of this yet.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The intention is that the compose struct fully specifies the test
case, and pipeline and image-info specify the expected outputs.
Lastly, the boot struct specifies how to boot-test the image.
The checksum does not fit into this scheme, as it is computed from
the compose by querying rpmmd, and it is then passed as an input to
distro.Pipeline in order to compute the pipeline.
Introduce a new struct, rpmmd, which will eventually contain all
the data returned from rpmmd.Depsolve and later passed to
distro.Pipeline. For now it only contains the checksum.
This is not a functional change.
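The test-case layout is then roughly (an illustrative sketch, using
raw JSON for the nested objects):

    package testcases

    import "encoding/json"

    type testCase struct {
        ComposeRequest json.RawMessage `json:"compose-request"`
        RpmMD          struct {
            Checksum string `json:"checksum"`
        } `json:"rpmmd"`
        Pipeline  json.RawMessage `json:"pipeline"`
        ImageInfo json.RawMessage `json:"image-info"`
        Boot      json.RawMessage `json:"boot"`
    }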
Signed-off-by: Tom Gundersen <teg@jklm.no>
If the user creates a new blueprint with no version specified, the
blueprint struct uses "0.0.0" as the default version. Blueprint tests
for a blueprint with an empty version now expect no error.
This is not a behavioral change, as all distros currently use
empty source objects. But when we move over to rpm-based pipelines,
this will change.
Make the same change to osbuild-pipeline, so these stay in sync.
Signed-off-by: Tom Gundersen <teg@jklm.no>
For now, this simply wraps Pipeline and Sources, and returns the
resulting manifest object. In the future, Pipeline and Sources
may be dropped from the interface.
Signed-off-by: Tom Gundersen <teg@jklm.no>
A manifest is simply a struct containing a sources and a pipeline
object. We always want to store and transfer pipelines with their
sources, and will use the manifest for this.
When serialized, a manifest can be the input to `osbuild`, just
like a bare pipeline can be. This means there will be no need
to pass in sources separately on the commandline.
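Schematically (a sketch; the actual types live in the osbuild
package):

    package osbuild

    type Sources map[string]interface{}

    type Pipeline struct {
        // stages, assembler, ...
    }

    // Manifest pairs a pipeline with its sources. Serialized, it is
    // valid input to `osbuild`, just like a bare pipeline is.
    type Manifest struct {
        Sources  Sources  `json:"sources"`
        Pipeline Pipeline `json:"pipeline"`
    }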
Signed-off-by: Tom Gundersen <teg@jklm.no>
We were verifying two things: whether the passed distroArg exists in
the distribution mapping in common/types.go and whether it is an
actually registered distro. Since you cannot have distros registered
that don't
correspond to a type, the first test is unnecessary.
Merge the two tests by moving the (much better) error message down into
the second test. This makes DistributionExists redundant, because
Registry.GetDistro() checks this implicitly.
Also, move ListDistributions() to the Registry object, because we want
to show distributions that are actually registered.
Add a test which checks that Registry.List() works and that all included
distributions register correctly.
This changes it to an int pointer so that the JSON output will be
null. This means it needs to be checked for nil or for 0 in Go.
0 is not a valid revision in the WELDR response; revisions always
start at 1 and increment for each new revision tag, so either check
is a valid way to indicate that it isn't set.
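A quick demonstration:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type change struct {
        Revision *int `json:"revision"`
    }

    func main() {
        b, _ := json.Marshal(change{}) // nil pointer
        fmt.Println(string(b))         // {"revision":null}

        rev := 1
        b, _ = json.Marshal(change{Revision: &rev})
        fmt.Println(string(b)) // {"revision":1}
    }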
These tests build on the client functions to run integration tests on a
running API server.
It uses the reflect package to examine the methods attached to the
checkBlueprintsV0 struct and run the ones with names that start with
'Check', also checking the type signature of the functions and failing
the test if any of them don't match.
This will make it easier to add more checks without needing to add
boilerplate call/registration of the functions in the top level runner.
Just add the new function with the right name and signature and it will
be run when checkBlueprintsV0.Run() is called.
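A stripped-down sketch of the runner (the real one also verifies
each method's signature and fails the test on a mismatch):

    package main

    import (
        "fmt"
        "reflect"
        "strings"
    )

    type checkBlueprintsV0 struct{}

    func (*checkBlueprintsV0) CheckList() { fmt.Println("list: ok") }

    func run(v interface{}) {
        t := reflect.TypeOf(v)
        for i := 0; i < t.NumMethod(); i++ {
            if strings.HasPrefix(t.Method(i).Name, "Check") {
                reflect.ValueOf(v).Method(i).Call(nil)
            }
        }
    }

    func main() { run(&checkBlueprintsV0{}) }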
Checks for other API routes should be added to their own modules. There
will be some duplication of the Run function in each, but I think that
it will help keep things more manageable by separating them instead of
putting them all into a single giant Run() call.
Currently the responses are all embedded in the weldr API functions.
They need to be usable by the client helper functions so I'm copying
them here and giving them names.
A later commit will go through the API and refactor it to use these
instead of the embedded ones.
This package will contain functions for communicating with the API that
can be used both in the integration tests and in a future cmdline
tool similar to composer-cli.
When possible, the client functions will return the same structures
used by weldr/api.go, which have been exported in weldr/json.go.
Return errors from all distro's New() functions instead of logging and
returning nil. Also, return errors instead of panicking from
NewRegistry() and NewDefaultRegistry().
These packages (and their tests) shouldn't access the distro package,
because that's cyclic.
Also, these packages should only test the objects they expose.
WithSingleDistro() doesn't follow Go's naming convention for creating
objects (New*). Rename it to NewRegistry() and rename the old
NewRegistry() to NewDefaultRegistry().
The idea is that NewRegistry() can be used to create full Registry
objects from outside the package. NewDefaultRegistry() is a convenience
function that creates a Registry with all known distros.
rpmmd looked at the CACHE_DIRECTORY environment variable to set a path
for the dnf repository cache. Aside from being a smelly thing to do
from a library, this breaks osbuild-pipeline and osbuild-dnf-json-tests,
which don't run as systemd services and thus don't have CACHE_DIRECTORY
set.
Explicitly pass the cache directory to rpmmd. Keep using a path based on
CACHE_DIRECTORY for osbuild-composer. Use the user's `.cache` directory
for osbuild-pipeline and a temporary directory for the tests.
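A sketch of how each consumer now picks its cache directory and
passes it down explicitly (paths and names are illustrative):

    package rpmmdcache

    import (
        "os"
        "path/filepath"
    )

    // cacheDir picks a per-consumer location; the result is passed
    // explicitly to rpmmd instead of rpmmd reading the environment.
    func cacheDir() (string, error) {
        // osbuild-composer: systemd provides CACHE_DIRECTORY.
        if dir := os.Getenv("CACHE_DIRECTORY"); dir != "" {
            return filepath.Join(dir, "rpmmd"), nil
        }
        // osbuild-pipeline: the user's .cache directory. Tests use
        // a temporary directory instead.
        home, err := os.UserHomeDir()
        if err != nil {
            return "", err
        }
        return filepath.Join(home, ".cache", "osbuild-pipeline"), nil
    }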