Commit graph

31 commits

Author SHA1 Message Date
Tom Gundersen
10a9f16852 test: move all test manifests to get fedora packages from kernel mirrors
This replaces the round-robin mirror at fedoraproject.org, as that was
proving to be quite unreliable.

This is a short-term fix until we add metalink support.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-06-07 22:08:34 +02:00
Christian Kellner
e036f86119 test: fix freq, passno types everywhere
Both the 'freq' and 'passno' options of the org.osbuild.fstab2 stage need
to be of type int, not string.
2020-05-06 15:42:23 +02:00
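
For illustration (not part of the commit message above), a minimal sketch of an org.osbuild.fstab2 stage entry with the corrected types; the field names other than 'freq' and 'passno' are assumptions:

```
# Hypothetical org.osbuild.fstab2 stage entry; only the 'freq' and
# 'passno' typing is taken from the commit above, the other field
# names and values are guesses.
stage = {
    "name": "org.osbuild.fstab2",
    "options": {
        "filesystems": [{
            "uuid": "76a22bf4-f153-4541-b6c7-0332c0dfaeac",  # placeholder
            "path": "/",
            "options": "defaults",
            "freq": 1,     # int, not "1"
            "passno": 1,   # int, not "1"
        }]
    },
}
```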
Tom Gundersen
afd94b1017 test/pipelines: drop sources.json
This was unused, as the test pipelines now contain the sources
inline.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-04-15 15:29:52 +02:00
Tom Gundersen
9d79d5fcc3 stages/grub2: default to disabling legacy support
For the sake of backwards compatibility, legacy support was enabled
by default. Flip this around, so that leaving the parameter out
means disabling it.

This is more intuitive, and will pave the way for dropping support
for the value being a bool in the future.

`osbuild-composer` always passes the argument explicitly, though
still always as a boolean.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-04-14 23:47:08 +02:00
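
A minimal sketch (not from the commit) of what the flipped default might look like inside the stage; the option name `legacy` is an assumption:

```
# Hypothetical option handling: leaving the parameter out now means
# legacy support is disabled rather than enabled.
def legacy_enabled(options):
    # before this change the default was effectively True
    return options.get("legacy", False)
```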
Tom Gundersen
26f5135a5f tests: move from dnf- to rpm-based pipelines
This should produce the same images (except for no more dnf metadata),
but avoids depsolving (and network access) from stages.

The following helper script around osbuild-composer's dnf-json was
used for the translation:

```

import json
import os
import subprocess
import sys

def fetch_repos(sources, repo):
    if isinstance(repo, str):
        repo = sources["org.osbuild.dnf"]["repos"][repo]

    return repo, repo["gpgkey"]

def convert_stage(sources, stage, cachedir):
    gpgkeys = []
    repos = []
    for repoid, repo in enumerate(stage["options"]["repos"]):
        repo, gpgkey = fetch_repos(sources, repo)
        repo["id"] = repoid
        repo["name"] = f"repo-{repoid}"
        repos.append(repo)
        gpgkeys.append(gpgkey)

    arguments = {}
    arguments["cachedir"] = cachedir
    arguments["module_platform_id"] = stage["options"]["module_platform_id"]
    arguments["package-specs"] = stage["options"]["packages"]
    arguments["exclude-specs"] = stage["options"].get("exclude_packages", [])
    arguments["repos"] = repos

    call = {}
    call["command"] = "depsolve"
    call["arguments"] = arguments

    r = subprocess.run(["./dnf-json"],
                   input=json.dumps(call),
                   stdout=subprocess.PIPE,
                   encoding="utf-8",
                   check=True)
    pkgs = json.loads(r.stdout)["dependencies"]

    packages = []
    urls = {}
    for p in pkgs:
        packages.append(p["checksum"])
        urls[p["checksum"]] = p["remote_location"]

    options = {}
    options["gpgkeys"] = gpgkeys
    options["packages"] = packages

    stage["name"] = "org.osbuild.rpm"
    stage["options"] = options

    return urls

def convert_pipeline(sources, pipeline, cachedir):
    urls = {}
    if "build" in pipeline:
        u = convert_pipeline(sources, pipeline["build"]["pipeline"], cachedir)
        urls = {**urls, **u}
    for stage in pipeline["stages"]:
        if stage["name"] == "org.osbuild.dnf":
            u = convert_stage(sources, stage, cachedir)
            urls = {**urls, **u}
    return urls

manifest = json.load(sys.stdin)
urls = convert_pipeline(manifest["sources"], manifest["pipeline"], f"{os.getcwd()}/dnf-cache")
sources = { "org.osbuild.files": { "urls": urls }}
json.dump({"sources": sources, "pipeline": manifest["pipeline"]}, sys.stdout)
```

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-03-03 00:05:26 +01:00
Tom Gundersen
320c08b76c tests/dnf: use baseurl over metalink
Using a metalink resolves to a specific mirror at runtime, and
downloads each rpm from that repository.

We want to move to using the org.osbuild.files source, which means
that we must save the URL of each rpm in the source definition; that URL
will be determined by which mirror is used to generate the config.

If we use metalinks to generate the source configuration, the mirror
used will be arbitrary. Instead, we want to pick the best mirror
explicitly, ideally in a way that is independent of the location
depsolving happens in (which will be different from the location
the rpms are downloaded to).

We can choose explicitly by passing baseurl rather than metalink
to dnf, so move in that direction now by replacing all metalinks
with baseurls in our dnf configuration.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-03-03 00:05:26 +01:00
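
For illustration (not part of the commit above), roughly how a repo definition changes from metalink to baseurl; URLs are placeholders:

```
# Before: a metalink resolves to an arbitrary mirror at depsolve time.
repo_before = {
    "metalink": "https://mirrors.fedoraproject.org/metalink?repo=fedora-30&arch=x86_64",
    "gpgkey": "...",
}

# After: the mirror is chosen explicitly, so the rpm URLs recorded in an
# org.osbuild.files source become predictable.
repo_after = {
    "baseurl": "https://mirror.example.invalid/fedora/releases/30/Everything/x86_64/os/",
    "gpgkey": "...",
}
```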
Tom Gundersen
4e9f5d4473 tests/pipelines: embed all sources with their respective pipelines
We now support sources and pipelines being passed to osbuild as one.

This will make the transformation from the dnf to the rpm stage simpler,
as the source object will then be different for each stage, so having
a shared one, as we do now, would be cumbersome.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-03-03 00:05:26 +01:00
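
A rough sketch (editorial, not from the commit) of the combined shape after this change, matching what the conversion script shown earlier in this log reads (`manifest["sources"]` and `manifest["pipeline"]`); repo contents are placeholders:

```
# Manifest with sources embedded next to the pipeline.
manifest = {
    "sources": {
        "org.osbuild.dnf": {
            "repos": {
                "fedora": {
                    "baseurl": "https://mirror.example.invalid/fedora/30/x86_64/os/",
                    "gpgkey": "...",
                },
            },
        },
    },
    "pipeline": {
        "build": {"pipeline": {"stages": []}},
        "stages": [],
    },
}
```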
Tom Gundersen
ff8fda9e53 tests/dnf: always specify module_platform_id
As long as this matches the build environment, this does not make
a difference, but let us not depend on it.

This will be useful when automatically transforming dnf to rpm
pipelines, as the module_platform_id is needed as input to
osbuild-composer's dnf-json tool.

Performed using this script:

```

cat $1 | jq '(.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"'  | sponge $1
cat $1 | jq '(.build.pipeline.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"'  | sponge $1
```

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-03-03 00:05:26 +01:00
Tom Gundersen
d24d594173 test/sources/files: use baseurl
The chosen mirror was flaky, causing CI to fail. Move to using the baseurl.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-02-25 15:04:09 +01:00
Tom Gundersen
1d588b8e86 stages/rpm: adapt to use the files source
Drop the rpm downloading and instead use the files source. This gives
us caching for free, and is the last missing step before we can
deprecate the dnf stage.

The main benefit of the rpm over the dnf stage is that we pin the package
versions rather than the repo metadata version. This will allow us to
support continuously changing repositories, as individual packages are much
less likely to change than the repos themselves, and old packages are meant
to stay around for some time, unlike the repo metadata which is instantly
swapped out.

Depsolving is also slow on the first run, which we were always hitting, as
depsolving always happened in a fresh container.

Based on a patch by Lars Karlitski.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2020-02-06 19:01:12 +01:00
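
For illustration (not part of the commit), the pinning described above in the shapes that the conversion script shown earlier in this log produces: the rpm stage lists package checksums, and the files source maps each checksum to a URL. Checksums and URLs are placeholders:

```
rpm_stage = {
    "name": "org.osbuild.rpm",
    "options": {
        "gpgkeys": ["..."],
        "packages": ["sha256:0123abcd..."],           # pinned by checksum
    },
}

files_source = {
    "org.osbuild.files": {
        "urls": {
            "sha256:0123abcd...": "https://mirror.example.invalid/Packages/f/foo.rpm",
        },
    },
}
```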
Lars Karlitski
510e2b1e94 osbuild: introduce sources
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.

In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.

Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.

The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.

Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.

This API is only available to stages right now.
2019-12-23 01:12:38 +01:00
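
A hypothetical sketch (editorial) of a sources definition passed to osbuild via the new command-line argument; the commit only states that org.osbuild.dnf returns repository configuration for a metadata checksum, so the exact key layout shown is an assumption:

```
sources = {
    "org.osbuild.dnf": {
        "repos": {
            # keyed by the repository metadata checksum (placeholder value)
            "sha256:9f91...": {
                "baseurl": "https://mirror.example.invalid/fedora/30/x86_64/os/",
                "gpgkey": "...",
            },
        },
    },
}
```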
Lars Karlitski
e590dee93b assemblers/tar: fix compression
Commit 283281f broke compression by appending the argument last to the
tar command line. It needs to appear before the file.

Fix that and add a test.

[teg: add minor fix]
2019-12-10 12:07:08 +01:00
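
A minimal sketch (not the actual assembler code) of the ordering fix described above, assuming the assembler shells out to tar with an `--auto-compress`-style flag; the point is that the compression argument comes before the file arguments:

```
import subprocess

def assemble_tar(tree, output):
    # compression flag placed before the archive/file arguments,
    # per the fix described in the commit above
    subprocess.run(["tar", "--auto-compress", "-cf", output, "-C", tree, "."],
                   check=True)
```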
Lars Karlitski
64713449ce Introduce runners
We've been using a generic `osbuild-run`, which sets up the build
environment (and works around bugs) for all build roots. It is already
getting unwieldy, because it tries to detect the OS for some things it
configures. It's also about to cause problems for RHEL, which doesn't
currently support a python3 shebang without having /etc around.

This patch changes the `build` key in a pipeline to not be a pipeline
itself, but an object with `runner` and `pipeline` keys. `pipeline` is
the build pipeline, as before. `runner` is the name of the runner to
use. Runners are programs in the `runners` subdirectory.

Three runners are included in this patch. They're copies of osbuild-run
for now (except some additions for rhel82). The idea is that each of
them only contains the minimal setup code necessary for an OS, and that
we can review what's needed when updating a build root.

Also modify the `--build-pipeline` command line switch to accept such a
build object (instead of a pipeline) and rename it accordingly, to
`--build-env`.

Correspondingly, `OSBUILD_TEST_BUILD_PIPELINE` → `OSBUILD_TEST_BUILD_ENV`.
2019-11-25 13:05:22 +01:00
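
For illustration (not from the patch itself), the new shape of the `build` key; the runner name is an example:

```
pipeline = {
    "build": {
        "runner": "org.osbuild.fedora30",   # a program in the runners/ directory
        "pipeline": {"stages": []},         # the build pipeline, as before
    },
    "stages": [],
}
```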
Tom Gundersen
d15bbaa05d test: remove redundant tests and Vagrant integration
The tests from the integration_tests directory were superseded
by the new stage tests.

The Vagrant integration seems not to have been working since
ea68bb0c26, as the test-setup.py it relied on was
dropped there. Remove it for now. If we want
that back, we should consider that in a separate PR.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-10-13 12:08:08 +02:00
Tom Gundersen
21df63ba31 stages/dnf: embed the gpgkey in the pipeline
Downloading the gpg key is fragile and kept causing our tests to fail.
In general, we want to limit the network access, so let's just embed
the gpg keys directly in the pipeline.

Fixes #133.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-10-12 14:59:01 +02:00
Lars Karlitski
9fbe80722b assemblers: add org.osbuild.rawfs
This assembler outputs an image file which only contains the file
system.
2019-10-07 10:10:51 +02:00
Lars Karlitski
3e57f13380 stages/dnf: exclude-packages → exclude_packages 2019-10-03 12:53:01 +02:00
Tom Gundersen
72c3157162 assemblers/qemu: replace grub2-install
Background:

grub2 works in three stages:
 - The first stage is found in the first 440 bytes of the master
   boot record, and its only purpose is to load and execute the
   second stage. This stage is static, and just copied from the rpm
   without modification.
 - The second stage is found in the gap between the MBR and the
   first partition, and may be up to 31kB in size. This stage is
   specific to the host and must contain the instructions for
   finding the right file system and subdirectory for the grub2
   config and modules on the host, as well as the modules needed
   to do this.
 - The third stage is found in the `normal` module, which loads
   grub.cfg, which in turn may load more modules and execute
   arbitrary instructions.

Problem:

grub2-install is responsible for installing all these stages on the
target image. This goes against our design, as modifications outside
the filesystem should happen in the assembler, but modifications to
the filesystem should happen in a stage. In particular, we don't
want the contents of the image to differ in any way from the output
tree that is stored in our content store (the output of our last
stage). This causes a practical problem at the moment, as our
selinux stage is run before the assembler, and as such the grub
modules do not get selinux labels applied.

It turns out that we could split grub2-install in two as we want:
pass `--no-bootsector` to it to install only the modules,
copy/generate the first two stages as files under /boot, and
then run `grub2-bios-setup` to write the stages from /boot into
the image where they belong.

Regrettably, this does not work, as both `grub2-install` and
`grub2-bios-setup` introspect the system and block devices they
are being run on to generate the right configuration. This is not
what we want, as we would like to specify the config explicitly
and run them independently of the target image. The specific bug
we get in both cases is that the canonical path containing our
object store cannot be found.

Before osbuild this was not a problem, as other installers would
install and assemble everything directly in the target image as a
loopback device, something we explicitly do not want to do.

Solution:

This patch essentially reimplements grub2-install, or rather the
parts of it that we need. One change in behavior from the upstream
tool is that we no longer write the level one and level two boot
loaders to /boot before moving them into place, but just write them
directly where they belong (so they do not end up on the
filesystem).

The parts that copy files into /boot are now in the grub2 installer
and the parts that write the level one/two bootloaders are in the
qemu assembler.

This achieves a few principles I think we should always adhere to:
 - never run tools from the target image (no chroot)
 - don't read/copy files from the target image that were written
   by other stages. We already try to avoid sharing state, and
   by treating the image as write-only, we avoid accidentally
   sharing state through the target tree.

Based-on-suggestions-from: Javier Martinez Canillas <javierm@redhat.com>
With-god-like-debugging-and-fixes-by: Lars Karlitski <lubreni@redhat.com>
Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-10-02 15:10:37 +02:00
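
A rough editorial sketch of writing the two boot-loader stages straight into the image, following the layout described in the commit (stage one in the first 440 bytes of the MBR, stage two in the gap before the first partition, at most ~31kB); file names and the gap offset are assumptions:

```
def write_bootloaders(image_path, boot_img, core_img):
    with open(image_path, "r+b") as image:
        # stage one: first 440 bytes of the MBR
        with open(boot_img, "rb") as f:
            image.write(f.read(440))
        # stage two: the gap between the MBR and the first partition,
        # assumed here to start at byte 512
        with open(core_img, "rb") as f:
            data = f.read()
        assert len(data) <= 31 * 1024
        image.seek(512)
        image.write(data)
```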
Tom Gundersen
f470c3f3a3 assemblers/qemu: fix the partition UUID in the pipeline
Otherwise, sfdisk would pick one at random. We want our images to be
reproducible to the extent possible, so we must move all randomness
out of the assemblers when we can.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-10-02 15:10:37 +02:00
Tom Gundersen
34098bf6c6 assembler: rename qcow2 to qemu and add support for more formats
Opt in to supporting the most common ones; if we want to support more,
we can add them as the need arises.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-09-29 19:05:55 +02:00
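
A hedged sketch of the format handling this rename enables, assuming the assembler converts a raw image with qemu-img; the supported set shown is an example, not the definitive list:

```
import subprocess

FORMATS = ["raw", "qcow2", "vdi", "vmdk", "vpc"]   # example opt-in set

def convert_image(raw_image, output, fmt):
    if fmt not in FORMATS:
        raise ValueError(f"unsupported format: {fmt}")
    subprocess.run(["qemu-img", "convert", "-O", fmt, raw_image, output],
                   check=True)
```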
Tom Gundersen
840bfd580c stages/dnf: don't name the repositories
The names carry no information, and do not affect the produced image.
Generate them instead.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-09-29 19:04:39 +02:00
Tom Gundersen
4ba125e393 pipeline: stop naming pipelines
This key carries no information and is never used anywhere. The json
files are not meant to be human readable, so simply drop this.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-09-29 18:59:45 +02:00
Lars Karlitski
57c82a00d0 stages/dnf: verify repository checksum
Require "checksum" option for each repository, which contains the
checksum of the `repodata/repomd.xml` file. This file (indirectly)
contains checksums for all packages.

Verify that the metadata dnf downloaded to install packages matches that
checksum. This way, this stage will give an error when a reposiory
changed between putting together the pipeline and running it.
2019-09-24 20:17:04 +02:00
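
A minimal sketch (editorial) of the verification described above: hash the downloaded `repodata/repomd.xml` and compare it against the repository's "checksum" option, here assumed to be of the form "<algorithm>:<hexdigest>":

```
import hashlib

def verify_repomd(repomd_path, expected):
    algorithm, _, want = expected.partition(":")
    with open(repomd_path, "rb") as f:
        got = hashlib.new(algorithm, f.read()).hexdigest()
    if got != want:
        raise RuntimeError(f"repository metadata changed: {got} != {want}")
```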
Lars Karlitski
0dd939b658 stages/dnf: only write known options to repo file
Don't pass through arbitrary options. This means that pipeline repo
objects don't have the same options as dnf repo files anymore:

1. Hard-code the repo name to the repo id. The name has no influence on the
resulting image and should thus not appear in a pipeline.

2. Set gpgcheck=1 when gpgkey is given. It defaults to false, which
means that none of the sample and test pipelines verified packages.
Verification would have failed anyway, because the container doesn't have
the key referenced in /etc. Change all gpgkeys to refer to the key id and
import them manually.

3. Don't allow lists for baseurl and gpgkey. We can add that if we need
it at some point.
2019-09-24 20:17:04 +02:00
Lars Karlitski
93da5caa69 stages/dnf: add mandatory basearch argument
We've been effectively using the basearch of the host, making the stage
non-reproducible: if the same pipeline was run on machines with
different architectures, it would produce different results. However,
pipelines producing different outputs must be different. Thus, this
patch includes the basearch in the pipeline.

In principle, this allows cross-arch builds. dnf should be the only
stage running binaries from the target tree. This is not yet tested.
2019-09-24 20:17:04 +02:00
Ondřej Budai
a8df4ca2dc tests: Do not use compression in tar assembler
XZ compression is really slowing down our tests. Additionally, we dump
the resulting image right after the tests are done (at least on CI).
Let's just drop the compression.
2019-09-10 14:55:40 +02:00
Ondřej Budai
7fabcfe333 stages/locale: Refactor locale stage to look like similar ones
The locale stage can no longer be used to set the keymap. Use the keymap
stage instead. Also, the stage was refactored to look like the keymap and
timezone stages, just to be consistent (systemd-firstboot is now used).
2019-09-10 09:22:26 +02:00
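
A hypothetical core of the refactored stage (not the actual code), delegating to systemd-firstboot as the keymap and timezone stages do; the tree path handling is an assumption:

```
import subprocess

def set_locale(tree, locale):
    subprocess.run(
        ["systemd-firstboot", f"--root={tree}", f"--locale={locale}"],
        check=True,
    )
```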
Lars Karlitski
2c73187046 assemblers/qcow2: Pass size explicitly
Don't try to guess how much room the filesystem will take up. In
practice, most people will want to specify a size anyway, depending on
their use case.

As is typical for osbuild, there are no convenience features for the
pipeline (it's not meant to be written manually). `size` must be given
in bytes and it must be a multiple of 512.
2019-09-01 23:04:25 +02:00
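
A small worked example of the size constraint (values are illustrative):

```
# size is given in bytes and must be a multiple of 512
size = 2 * 1024 ** 3        # 2 GiB = 2147483648 bytes
assert size % 512 == 0      # 4194304 sectors of 512 bytes
```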
Tom Gundersen
a41ce99521 test: make the testsuite passive rather than active
Let the image be responsible for running its own test, and simply
listen for the output from the testsuite.

Hook this up with a standard f30 image that contains a simple boot
test case, using systemctl to verify that all services started
correctly.

This replaces the old web-server test, giving similar functionality.
The reason for the change is twofold: this way the tests are fully
specified in the pipeline, and so easier to reproduce. Moreover, this
is less intrusive, as the test does not require network support in
the image.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-08-30 12:00:47 +02:00
Tom Gundersen
5854ceea42 stages/grub2: make booting in ro/rw mode configurable
Move the decision whether the root fs should be mounted ro or rw
into the pipeline configuration.

Update the pipelines accordingly.

Signed-off-by: Tom Gundersen <teg@jklm.no>
2019-08-26 09:25:42 +03:00
Martin Sehnoutka
ea68bb0c26 Test refactoring
The testing script is getting too big and not very well organized. In
this commit a new module `integration_tests` is introduced that contains
parts of the original testing script split into multiple files. The
content should be the same; the only difference is that now you can run
the tests by invoking `python3 -m test`.
2019-08-13 15:14:58 +02:00