This is how it is used in the rest of the code, as a name to represent
the repository in the weldr API. Rename to match its use, and avoid
confusion with the ID passed to dnf-json, which is not the same.
Signed-off-by: Tom Gundersen <teg@jklm.no>
t.Parallel() makes defer weird. This commit moves the temporary dir setup
and teardown into each parallel subtest, so the teardown gets called after
that subtest has finished.
More detailed explanation:
Consider this code:
```go
t.Run("group", func(t *testing.T) {
setUp()
t.Run("parallel test1", parallelTest1)
t.Run("parallel test2", parallelTest1)
t.Run("sequential test", sequentialTest)
cleanUp()
})
```
When the group subtest is started, parallelTest1 runs first. When
t.Parallel() inside parallelTest1 is hit, its execution is suspended and
parallelTest2 is started. Once again, t.Parallel() is hit and parallelTest2
is suspended. Now sequentialTest is started. It doesn't contain
t.Parallel(), so it runs without interruption. The group subtest then
continues and calls cleanUp. After the cleanup is done, the group subtest
returns, and now comes the important part: parallelTest1 and parallelTest2
are resumed and run in parallel. In other words, the code after
t.Parallel() only runs once the PARENT subtest returns.
The important (and weird) part is that the parent subtest of the parallel
subtests returns before its parallel children finish. To be precise, only
the inner anonymous function returns: the enclosing t.Run("group",
func(){...}) call does not return until all the parallel subtests have
finished.
How to solve this? One solution is to move the cleanup out of the group
subtest:
```go
t.Run("group", func(t *testing.T) {
setUp()
t.Run("parallel test1", parallelTest1)
t.Run("parallel test2", parallelTest1)
t.Run("sequential test", sequentialTest)
})
cleanUp()
```
How does this work? The group subtest is not marked as parallel, so the
t.Run("group", ...) call runs from its beginning to its end, including all
of its parallel subtests. Only after it has finished is cleanUp called.
In this commit I chose a different method: I just moved the cleanup into
each parallel subtest. I think this is clearer than all the parallel magic
I tried to explain in the previous paragraphs.
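A minimal sketch of what that looks like inside one of the parallel
subtests (setUp and cleanUp are the same placeholder helpers as above; the
real code creates and removes a temporary directory):
```go
func parallelTest1(t *testing.T) {
	t.Parallel()    // suspends here until the parent's function returns
	setUp()         // per-subtest setup...
	defer cleanUp() // ...so the deferred teardown runs when this subtest finishes
	// ... the actual test ...
}
```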
See the section "Run a group of tests in parallel":
https://blog.golang.org/subtests
and this paragraph:
https://golang.org/pkg/testing/#hdr-Subtests_and_Sub_benchmarks
Run multiple dnf-json tests in parallel to speed up execution. The
total number of parallel tests is limited by the number of CPUs in
the machine that is running the tests.
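The cap can be adjusted if needed: the compiled test binary accepts the
standard `testing` flags, and `-test.parallel` defaults to GOMAXPROCS,
i.e. the number of CPUs. For example (invocation shown only for
illustration):
osbuild-dnf-json-tests -test.run TestCrossArchDepsolve -test.parallel 2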
Signed-off-by: Major Hayden <major@redhat.com>
For one, this allows us to use `require` instead of `assert` and
`continue`, which was awkward to read. That works because it's ok to
fail a subtest: the remaining subtests are executed after it.
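A rough sketch of the resulting structure, with example values and a
hypothetical depsolve helper standing in for the real osbuild-composer
APIs:
```go
import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestCrossArchDepsolve(t *testing.T) {
	distros := []string{"fedora-30", "fedora-31"} // example values
	arches := []string{"x86_64", "aarch64"}       // example values
	imageTypes := []string{"ami", "qcow2"}        // example values

	for _, d := range distros {
		t.Run(d, func(t *testing.T) {
			for _, arch := range arches {
				t.Run(arch, func(t *testing.T) {
					for _, imgType := range imageTypes {
						t.Run(imgType, func(t *testing.T) {
							err := depsolve(d, arch, imgType) // hypothetical helper
							require.NoError(t, err)           // only fails this subtest
						})
					}
				})
			}
		})
	}
}
```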
Also, this shows which test was executed, making debugging easier:
=== RUN TestCrossArchDepsolve
=== RUN TestCrossArchDepsolve/fedora-30
=== RUN TestCrossArchDepsolve/fedora-30/x86_64
=== RUN TestCrossArchDepsolve/fedora-30/x86_64/ami
...
You can now also run those subtests in isolation:
osbuild-dnf-json-tests -test.run TestCrossArchDepsolve/fedora-30/x86_64/ami
Lastly, it enables running those tests in parallel (not part of this
patch) by calling `t.Parallel()`.
This function prints the formatted message of an unexpected error, instead
of the error struct in Go syntax. It also makes it possible to drop the
error from the assertion message.
In PR#395 we discussed the spelling of archs vs. arches and we agreed to
use arches. This patch only renames the public method `ListArchs` in the
`Distro` interface.
This test makes sure we can run depsolve for all build package sets and
base package sets for all image types for all architectures for all
defined Fedora versions. It could run the same for RHEL, but I haven't
implemented that yet, because such tests cannot run on the public Internet.
The following commit will introduce support for forced architecture in
dnf-json. The APIs already have this kind of information, so we can
simply pass it to the Depsolve and FetchMetadata functions.
This was never actually used anywhere, as passing it to dnf-json
was a noop.
We may want to reconsider the concept of a source/repo name and
how it differs from an ID, but for now drop the name.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This allows us to take advantage of the `testing` package. It also gives
the resulting test binary common command line arguments (same as `go
test`).
Tests need to be compiled with `go test -c`, which injects a `Main()`
that calls the Test* functions.
This is not supported by the golang rpm macros. Thus, build this binary
by calling `go test -c` directly, but taking care to pass the same
linker flags as the `%gobuild` macro.
Mark the test binary with the `integration` build constraint, so that
`go test ./...` doesn't pick it up; that invocation is only for unit tests.
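For illustration, the constraint is just a build tag at the top of the
test file; the binary is then produced with `go test -c -tags integration`
(plus the same `-ldflags` the `%gobuild` macro would pass). A minimal
sketch:
```go
// +build integration

// Only compiled when the "integration" tag is set, so a plain
// `go test ./...` run of the unit tests skips this file.
package main

import "testing"

func TestDepsolve(t *testing.T) {
	// ... integration test body ...
}
```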
The idea is to move all other test binaries to this scheme as well.
Spec file changes by Lars Karlitski <lars@karlitski.net>
rpmmd looked at the CACHE_DIRECTORY environment variable to set a path
for the dnf repository cache. Aside from being a smelly thing to do
from a library, this breaks osbuild-pipeline and osbuild-dnf-json-tests,
which don't run as systemd services and thus don't have CACHE_DIRECTORY
set.
Explicitly pass the cache directory to rpmmd. Keep using a path based on
CACHE_DIRECTORY for osbuild-composer. Use the user's `.cache` directory
for osbuild-pipeline and a temporary directory for the tests.
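A rough sketch of how each binary can pick its cache directory (the
subdirectory names here are assumptions, and how the path is handed to
rpmmd is omitted):
```go
import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// osbuild-composer runs as a systemd service, so keep deriving the path
// from CACHE_DIRECTORY (set via CacheDirectory= in the unit file).
func composerCacheDir() string {
	return filepath.Join(os.Getenv("CACHE_DIRECTORY"), "rpmmd")
}

// osbuild-pipeline runs as a regular user, so use their ~/.cache.
func pipelineCacheDir() (string, error) {
	base, err := os.UserCacheDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(base, "osbuild-pipeline", "rpmmd"), nil
}

// The tests get a throwaway temporary directory.
func testCacheDir() (string, error) {
	return ioutil.TempDir("", "rpmmd-cache-")
}
```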
The name was misleading because the function does more than just download
the package list. In PushComposeRequest it is also used to fetch checksums
for the repositories, so I renamed it to reflect this usage.