Mixing up how a distribution is built with where its source packages
come from is wrong: it breaks pre-release repos, local mirrors, and
other use cases. To accommodate those, we introduced
`/etc/osbuild-composer/repositories`.
However, that doesn't work for the RCM API, which receives the
repository URLs to use in incoming requests. This API has been wrongly
using the
`additionalRepos` parameter to inject those repos. That's broken,
because the resulting manifests contained both the installed repos and
the repos from the request.
To fix this, stop exposing repositories from the distros and instead
require passing them on every call to `Manifest()`. This makes
`additionalRepos` redundant.
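A sketch of the resulting interface shape (types and signature are
illustrative stand-ins, not the real code):

```
package distro

// Illustrative sketch only; the real types and signature differ. The
// point: repositories are an explicit parameter to Manifest(), so there
// is no additionalRepos side channel and nothing is read from
// /etc/osbuild-composer/repositories behind the caller's back.

// RepoConfig is a stand-in for composer's repository configuration.
type RepoConfig struct {
	ID      string
	BaseURL string
}

type Distro interface {
	// Manifest builds an osbuild manifest for the given blueprint,
	// using exactly the repositories passed in.
	Manifest(blueprint []byte, repos []RepoConfig) ([]byte, error)
}
```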
Fixes #341
A job's purpose is to build an osbuild manifest and upload the results
somewhere. It should not know about which distro was used to generate
the pipeline.
Workers depended on the distro package in two ways:
1. To set an osbuild `--build-env`. This is not necessary anymore in new
versions of osbuild. More importantly, it was wrong: it passed the
runner from the distro that is being built, instead of one that
matches the host.
This patch simply removes that logic.
2. To fetch the output filename with `Distro.FilenameFromType()`. While
that is useful, I don't think it warrants the dependency.
This patch uses the fact that all current pipelines output exactly
one file and uploads that (see the sketch below). This should probably
be extended in the future to upload all output files, or to name them
explicitly in the upload target.
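```
package worker

import (
	"fmt"
	"os"
	"path/filepath"
)

// findSingleOutput captures the assumption described above: every
// current pipeline produces exactly one output file, so the worker can
// find it without asking the distro for a filename. This is a sketch,
// not the actual worker code.
func findSingleOutput(outputDir string) (string, error) {
	entries, err := os.ReadDir(outputDir)
	if err != nil {
		return "", err
	}
	if len(entries) != 1 {
		return "", fmt.Errorf("expected exactly one output file, got %d", len(entries))
	}
	return filepath.Join(outputDir, entries[0].Name()), nil
}
```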
The worker should now compile to a smaller binary and do less
unnecessary work on startup (like reading repository files).
Now that `Store.PushCompose()` takes a `Distro` as argument, the rcm API
can use that function as well. This moves both APIs through the same
code path, reducing duplication.
Remove `PushComposeRequest()` and the corresponding struct. It was
supposed to allow composes with multiple output types and architectures,
but that was not yet implemented. Merging the two now simplifies moving
the compose queue out of the store in a future commit, which will then
tackle multi-image-type composes as well.
Only the weldr API has the concept of a default distro. Pass that distro
explicitly to `PushCompose()` and fetch the distro from the compose in
all other functions that accessed `Store.Distro`.
In the post-dnf-stage world, `Distro.Manifest` expects the full list of
depsolved packages. This is similar to what weldr does, but much
simpler, because the rcm API only cares about base packages.
`ComposeRequest` included a `common.Distribution`, which had to be
resolved in `PushComposeRequest()`. Use a real `distro.Distro` object
here, and push resolving it to the rcm package.
Change the `Distribution` on the (lower-case) `composeRequest` to a
string. This struct represents the incoming request. Since we're now
resolving the real distro object from the registry in the same function,
it seems redundant to validate the incoming distro twice.
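A sketch of the new flow (Registry and Distro here are illustrative
stand-ins for the real types): the incoming name is resolved exactly
once, where the request is handled.

```
package rcm

import "fmt"

type Distro interface{ Name() string }

type Registry map[string]Distro

// resolve looks the incoming distro name up in the registry, rejecting
// unknown names right at the API boundary.
func (r Registry) resolve(name string) (Distro, error) {
	d, ok := r[name]
	if !ok {
		return nil, fmt.Errorf("unknown distribution: %q", name)
	}
	return d, nil
}
```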
Convert weldrcheck to use the standard Go testing framework, along with
the github.com/stretchr/testify/require assertion package.
This also removes cmd/osbuild-weldr-tests and builds the test binary
directly from the weldrcheck package. This makes it easier to organize
the code, instead of putting it all into a single main_test.go file.
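A sketch of the converted style: a plain go test function with require
assertions, which abort the test on the first failure. The real tests
exercise the weldr API; this body is only illustrative.

```
//go:build integration

package weldrcheck

import (
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestExampleShape(t *testing.T) {
	reply := `{"status": "ok"}` // stand-in for an API response
	require.True(t, strings.Contains(reply, `"ok"`))
}
```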
In the current PRs, we have an issue with the linter failing on dead
code in integration tests. The linter isn't failing on the main_test.go
file, because it somehow ignores the integration tag. However, it fails
on other files which main_test.go depends on and which have the
integration tag.
This commit tells golangci-lint to always use the integration tag when
inspecting the code.
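One way to do that, assuming the config lives in .golangci.yml:

```
run:
  # Always lint with the integration build tag, so files guarded by it
  # are inspected as well.
  build-tags:
    - integration
```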
There's now nothing preventing us from using the newer and better Go
implementation of image tests.
Also, the Travis config was simplified to be more readable.
By default, the image test executable runs only the test cases for the
same distro as the host's. Travis runs Ubuntu, so we need to adjust
this behaviour and run the cases for a distro specified by command-line
arguments, as sketched below.
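A hypothetical invocation (the actual flag name may differ):

```
# run the cases for a given distro instead of the host's
./osbuild-image-tests -distro fedora-30
```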
We need to use different values for path constants when running the
tests on Travis CI. This is the first step towards achieving that.
Note that this commit may be reverted when Travis CI is dropped.
Use the $STATE_DIRECTORY environment variable, which systemd sets
because we use StateDirectory=osbuild-composer in the service unit.
Also change the systemd unit to set STATE_DIRECTORY explicitly, because
RHEL comes with an older systemd version that doesn't set this variable
on its own.
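The relevant unit snippet looks roughly like this (following the
StateDirectory= convention; the actual unit contains more):

```
[Service]
StateDirectory=osbuild-composer
# Newer systemd exports $STATE_DIRECTORY for StateDirectory= on its own;
# RHEL's older systemd does not, so set it explicitly.
Environment=STATE_DIRECTORY=/var/lib/osbuild-composer
```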
This makes two changes simultaneously, to avoid too much churn:
- move accessors from being on the blueprint struct to the
  customizations struct, and
- pass the customizations struct rather than the whole blueprint
  as argument to distro.Manifest().
@larskarlitski pointed out in a previous review that it feels
redundant to pass the whole blueprint as well as the list of
packages to the Manifest function. Indeed it is, so this
simplifies things a bit.
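A rough before/after of the call site (the parameter lists are
illustrative, not the exact signature):

```
// before: the whole blueprint travels into Manifest()
manifest, err := d.Manifest(bp, repos, packages)

// after: only the customizations, alongside the depsolved packages
manifest, err := d.Manifest(bp.Customizations, repos, packages)
```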
Signed-off-by: Tom Gundersen <teg@jklm.no>
This was never actually used anywhere, as passing it to dnf-json
was a noop.
We may want to reconsider the concept of a source/repo name and
how it differs from an ID, but for now drop the name.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This will eventually replace the remote_location property. The latter
pins a specific location (a specific mirror), whereas the metalink and
relative location can together be used to re-resolve to a more suitable
mirror at the time and place the package will actually be downloaded.
Rather than pinning mirrors in the osbuild manifests, we want to be
able to include the metalink and relative locations, so each worker
can use mirrors closer to it.
This would be particularly important when pipelines are rebuilt in
the future and the best mirrors may have changed.
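For illustration, the shape this points towards (the field names are
guesses, not the real struct):

```
package rpmmd

// Illustrative shape only. A metalink plus a repo-relative path lets
// each worker re-resolve a nearby mirror, whereas RemoteLocation pins
// one specific mirror URL.
type PackageSpec struct {
	Name           string
	Metalink       string // the repository's metalink URL
	Path           string // package location relative to the repo root
	RemoteLocation string // legacy: a single pinned URL
}
```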
Signed-off-by: Tom Gundersen <teg@jklm.no>
When we used the dnf-based pipelines, we were relying on the fact
that the metadata was unlikely to have changed between generating the
pipeline and calling osbuild. We achieved this by always updating
to the most recent metadata on every call to rpmmd.Depsolve that
would end up in a pipeline.
Refreshing the metadata is time-consuming, and something we want
to avoid if at all possible. Now that our pipelines no longer
rely on this property, we can drop the flushing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
s390x uses LXC, but we need to run in a real VM for the osbuild tests
to work.
We did not yet have s390x support anyway, so these were noops.
Signed-off-by: Tom Gundersen <teg@jklm.no>
It is no longer necessary to install yum, nor to use a build
environment, even though we are running this in Ubuntu VMs. The rpm
stage needed by the build pipeline works just fine on stock Ubuntu.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Conceptually, we used to insert the high-level packages and package
groups into the pipeline together with the expected repository
metadata checksum.
osbuild, using the dnf stage, would then fetch the metadata, verify
that its checksum is correct, compute the dependencies, and install
the packages.
Among the problems with this is that it made it impossible to cache
and share the resolved metadata as well as the rpms. Moreover,
as the checksum was at the repository level, rather than at the
package level, we would refuse to build a pipeline as soon as there
were any changes at all to the repository, as we could no longer
guarantee that the installed packages would be the same.
As of this patch, all repository and metadata handling is done by
composer, rather than osbuild. This means that the resolved metadata
can be cached between runs, and that we can now pin individual
packages, rather than the entire repository. As a result, we are able
to build a pipeline as long as the rpms are still available.
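To illustrate the difference in what gets pinned (a sketch with
made-up values):

```
package example

// Before: one checksum pinned the whole repository's metadata; any
// change to the repo invalidated the pipeline.
var repoChecksum = "sha256:0f2a..."

// After: each resolved rpm is pinned individually, so the pipeline
// stays buildable as long as these exact rpms remain available.
var packages = map[string]string{
	"bash-5.0.7-1.fc30.x86_64":  "sha256:ab12...",
	"glibc-2.29-22.fc30.x86_64": "sha256:cd34...",
}
```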
The downloading of rpms is now done by a source helper in osbuild,
which means that they can be cached and shared between runs too.
One consequence of this change is that we resolve the location of
each rpm in composer, and pass that to the worker. As the worker
may not be in the same location, we do not want to use metalinks
in composer for this, as that would pin the mirror closest to
composer, rather than to the worker. Instead, we now manually select
a baseurl for each repository, which should generally be the
most useful one. Fedora helpfully provides such baseurls, so
this should work OK.
The most important thing to verify when checking this commit is
that the image info in our test cases remains unchanged.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This includes the packages and build-packages used by each pipeline.
For now, this information is not used anywhere, but when we move
from dnf to rpm-based pipelines, this is what will be used instead
of the repo metadata checksum.
Signed-off-by: Tom Gundersen <teg@jklm.no>
These tests are generated by regenerating each of the fedora-30 tests
with only the distro field changed to fedora-31:
```
for case in f30-*.json; do
    cat "$case" | jq '.["compose-request"]' | jq '.distro = "fedora-31"' | sudo ./tools/generate-test-cases .osbuild | jq . | sponge "f31-${case#f30-}"
done
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
For tarballs, this is currently not supported, so there is no point in
generating the images in the first place. This will still be done
during testing, in order to boot-test them.
Signed-off-by: Tom Gundersen <teg@jklm.no>
So far we only have f30, x86_64 images to be boot tested. In follow-ups
we expect to test all distros, all architectures and all image types
as boot tests.
We also expect to do some sanity testing for all the blueprint
features we support, without booting.
The AMI images can boot with an empty blueprint; the other image types
need an ssh key embedded in order to be able to connect and verify
that they booted successfully.
Signed-off-by: Tom Gundersen <teg@jklm.no>
From Travis' point of view, the tests are no longer distinguished by
anything apart from their distro and architecture.
We must distinguish based on architecture, as we do not yet support
cross-architecture builds. We also split by distro, as there is no
benefit to running tests for different distros on the same VM: they
will not be able to share any caches, so we might as well
parallelize them.
Tests that apply to the same distro/architecture combo are now
always run on the same VM, so as to utilize any caching we are able
to do.
Now that local_boot and empty_blueprint tests have been merged,
this will not increase the number of tests currently run on a
given VM, so should not affect the total running time of the
tests.
Signed-off-by: Tom Gundersen <teg@jklm.no>
As in 616b6250c7, add the needed
ssh key to the formerly empty blueprint, and use this test-case
for booting as well as pipeline generation verification.
For the ext4-partition image type, we also needed to add the
openssh-server package, as @core is not included by default.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This gives us more readable output: both because it shows just a
diff, rather than the whole object as a string, and because it
captures differences between the objects that their string
representation does not.
In particular, if a field is an interface I, and T implements I,
then an object of type T and a pointer to the same object can both
be assigned to a variable of type I. Either way, the JSON
representation is the same, but the objects (correctly) do not
compare equal.
This is a pain to debug.
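A minimal, self-contained illustration of that pitfall:

```
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

type I interface{ Name() string }

type T struct{ N string }

func (t T) Name() string { return t.N }

func main() {
	var a I = T{N: "x"}  // value stored in the interface
	var b I = &T{N: "x"} // pointer stored in the interface

	ja, _ := json.Marshal(a)
	jb, _ := json.Marshal(b)
	fmt.Println(string(ja) == string(jb)) // true: identical JSON
	fmt.Println(reflect.DeepEqual(a, b))  // false: different dynamic types
}
```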
Signed-off-by: Tom Gundersen <teg@jklm.no>
This commit makes the osbuild-image-tests binary perform the same set
of tests as the old test/run script.
Changes from test/run:
- qemu/nspawn are now killed gracefully: firstly, SIGTERM is sent; if
  the process doesn't exit before the timeout, SIGKILL is sent (see
  the sketch below). I changed this because nspawn leaves some
  artifacts behind when killed by SIGKILL.
- the unsharing of the network namespace now works differently,
  because of systemd issue #15079
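A sketch of that termination logic, assuming the process was started
with os/exec (names are illustrative, not the actual helper):

```
package boot

import (
	"os/exec"
	"syscall"
	"time"
)

// terminateGracefully sends SIGTERM first and only falls back to
// SIGKILL if the process is still running after the timeout.
func terminateGracefully(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // SIGKILL; may leave nspawn artifacts behind
		return <-done
	}
}
```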