Use the images provided by `osbuild/containers` tagged as GHCI (GitHub
CI). These images are fully under our control, cached on the GitHub
infrastructure, and prepared to run `systemd-nspawn` and friends inside
a Docker container.
The GHCI infrastructure is versioned. Updates to the CI infrastructure
are not automatically picked up. Instead, the `v1` tag has to be
explicitly redirected to new image builds to deploy them. If a new
deployment causes CI failures, we can simply point the `v1` tag back at
the previous image builds to restore the previous behavior.
The `osbuild/containers` repository contains the required
infrastructure for this logic. If new dependencies are required in the
CI environment, the respective Dockerfiles must be updated. As a
temporary workaround (e.g., as part of the PR that introduces the new
dependency), you can simply add `dnf install -y <package>` to the
relevant entries in `.github/workflows/*`.
This changes the `modprobe nbd` invocation to be non-fatal on failure,
since it can legitimately fail on reasonable setups. `modprobe` fails
if it cannot find the module in `/lib/modules`, even though it could
otherwise determine whether the module is already loaded. The reason is
that it needs the metadata from the module file to resolve the required
module parameters.
If `nbd` is already loaded but has no module file in `/lib/modules`,
the current call will cause test failures, even though the test itself
would run smoothly.
Fix this by never requiring `modprobe nbd` to succeed, but instead rely
on the tests failing if accessing `nbd` fails.
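A minimal sketch of what the non-fatal invocation looks like, assuming
the module is loaded via `subprocess` (the exact call site in the test
suite may differ):

    import subprocess

    # Try to load nbd, but tolerate failure: modprobe cannot find the
    # module metadata if /lib/modules lacks the module file, even when
    # the module is already loaded. Tests that actually need /dev/nbd*
    # will fail on their own if the device is unavailable.
    subprocess.run(["modprobe", "nbd"], check=False)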
This drops the directory './test/testing-rpms'. The directory was
introduced in:
commit d975effc42
Author: Martin Sehnoutka <sehnoutka.martin@gmail.com>
Date: Thu Jul 25 11:12:27 2019 +0200
improve vagrant test and its documentation
It used to be the automatic target directory to store rpms created via
`make copy-rpms-to-test`. This target no longer exists. It was dropped
in:
commit 59b7b545b2
Author: Lars Karlitski <lars@karlitski.net>
Date: Fri Mar 6 11:07:52 2020 +0100
Makefile: remove vagrant rules
Introduce a third test-group called `src` alongside `mod` and `run`.
This will contain tests that run against the source code of osbuild.
This initial commit introduces `test/src/test_pylint.py` which will run
the python linter against all our sources.
Use the new `locate_test_data()` helper to get access to test-data.
Guard the test with `have_test_data()` to skip it in case test-data
access is not available.
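As an illustration, such a guard might look like this in a test (a
hedged sketch; the class name and the import path of the module that
provides `TestBase` are hypothetical):

    import unittest

    from . import test  # module providing TestBase (path assumed)


    class TestAssemblers(test.TestBase):

        @unittest.skipUnless(test.TestBase.have_test_data(), "no test-data access")
        def test_something(self):
            datadir = self.locate_test_data()
            # ... operate on files below `datadir` ...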
Use the `can_modify_immutable()` helper from the TestBase parent class
so we do not duplicate the code in multiple places. Similarly, make use
of the `have_rpm_ostree()` helper.
Add a new base class called `TestBase` to our test-suite. This allows
sharing common code between our tests without requiring them to import
each other. Furthermore, it paves the way towards executing all our
tests as part of the `unittest` framework, including pylint and others.
For now, this adds the following features to `TestBase`:
* Common test-guards that are shared between our tests, like
`can_modify_immutable()` or `have_rpm_ostree()`.
* Accessors to the test-checkout. This is `have_test_checkout()` to
check whether the running test has a repository checkout, and
`locate_test_checkout()` to get a path to the repository checkout.
This will allow us to put pylint and friends into the unittest
framework, guard them properly, and still allow running the tests
from a global install which might not have access to a checkout.
For now, we always assume we run from a checkout.
* Accessors to test-data. If we start installing tests as a module
into the system, we cannot bundle test-data together with code.
Therefore, two accessors `have_test_data()` and `locate_test_data()`
are implemented to guard access to test data. If a checkout is
available, it will be used to locate test-data.
In the future, we want to be able to pass a separate path to the
test-data, thus allowing us to install tests into a system.
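A rough sketch of the shape `TestBase` takes under these assumptions
(paths and internals are simplified and partly hypothetical):

    import os
    import unittest


    class TestBase(unittest.TestCase):

        @classmethod
        def locate_test_checkout(cls):
            # For now we always assume we run from a checkout.
            return os.getcwd()

        @classmethod
        def have_test_checkout(cls):
            path = cls.locate_test_checkout()
            return os.path.exists(os.path.join(path, "setup.py"))

        @classmethod
        def locate_test_data(cls):
            # Test-data currently lives in the checkout; later this
            # could be overridden with a separately installed data path.
            return os.path.join(cls.locate_test_checkout(), "test/data")

        @classmethod
        def have_test_data(cls):
            return os.path.isdir(cls.locate_test_data())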
Drop the `kwargs` forwarding from buildroot.run() to subprocess.run().
We do not use it other than for `stdin=subprocess.DEVNULL`. Set that
option directly instead.
Doing the kwargs forwarding mixes the argument namespaces and is very
hard to read. It is not clear from the call-site which argument goes to
buildroot.run() and which to subprocess.run().
Lastly, it requires us to manually fetch `check` just to make pylint
happy. Let's just drop this dance and make the API explicit.
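In effect, the internal call becomes a fixed invocation along these
lines (a sketch; the real argument handling is more involved):

    import subprocess

    def run(argv):
        # No **kwargs forwarding anymore: the only subprocess option we
        # ever passed through was stdin, so set it explicitly.
        return subprocess.run(argv, stdin=subprocess.DEVNULL, check=False)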
We currently don't seem to use anything that requires draft 7 of the
specification. The minimum version that we need is draft 4, which is
also supported by the python-jsonschema version in RHEL 8.2 (which is
2.6.0).
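For reference, draft-4 validation with such an older `jsonschema`
(2.6.0 or later) works along these lines:

    import jsonschema

    schema = {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "type": "object",
        "additionalProperties": False,
    }

    # Draft4Validator is available in jsonschema 2.6.0, so this also
    # works with the package shipped in RHEL 8.2.
    jsonschema.Draft4Validator.check_schema(schema)
    errors = list(jsonschema.Draft4Validator(schema).iter_errors({}))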
Attempt osbuild testing on the internal Jenkins deployment with
nodes that are destroyed after each use. The internal Jenkins looks for
a Jenkinsfile inside the `schutzbot` directory.
Let's not remove the `jenkins` directory (used by jenkins.osbuild.org)
until we know the internal Jenkins is stable and performs well.
Signed-off-by: Major Hayden <major@redhat.com>
Extract the `suppress_oserror()` function from the ObjectManager and
make it available as utility for other code as well.
This also adds a bunch of tests that verify it works as expected.
This changes `-chardev stdio` to `-chardev file` and uses a temporary
file to communicate with QEMU.
This fixes an issue where `-chardev stdio` hangs if `STDIN` is not a
TTY. I could not figure out how to make it work without a TTY, and it
does not print any meaningful diagnostics. The problem is that in CI
and other automated runners, we do not necessarily have a TTY as STDIN.
This just switches to a temporary file, which seems to work under all
circumstances.
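For illustration, the relevant part of the QEMU invocation changes
roughly like this (a sketch; the chardev id, the temporary path, and
the remaining arguments are placeholders):

    import subprocess
    import tempfile

    with tempfile.NamedTemporaryFile(mode="r", prefix="qemu-serial-") as serial:
        subprocess.run([
            "qemu-system-x86_64",
            # previously: "-chardev", "stdio,id=char0", which hangs when
            # STDIN is not a TTY (as in CI and other automated runners)
            "-chardev", f"file,id=char0,path={serial.name}",
            "-serial", "chardev:char0",
            # ... remaining arguments elided ...
        ], check=True)
        output = serial.read()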
Drop the --build-env command-line argument. It is not used by anything.
Furthermore, our manifests now allow embedding build-environments, so
there is little reason to continue supporting this.
Now that, as a result of commit 4d2f15f, all symlinks have been dropped
from the individual module paths, the search for module contents can be
simplified again.
In case `--libdir` is not specified on the command line, and thus
`args.libdir` is `None`, pass the standard `/usr/lib/osbuild` path
to the meta.Index constructor. Otherwise no schema information can
be found.
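In code, this amounts to something along these lines (sketch):

    import argparse

    import osbuild.meta

    parser = argparse.ArgumentParser()
    parser.add_argument("--libdir", metavar="DIRECTORY", default=None)
    args = parser.parse_args()

    # Fall back to the standard path when --libdir is not given, so
    # that schema information can still be located.
    index = osbuild.meta.Index(args.libdir or "/usr/lib/osbuild")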
By using a small Jenkins pipeline in the repository, we can define
almost all of our testing parameters in the repo itself and not inside
Jenkins. 🥳
This also allows us to use the GitHub Branch Source plugin and
auto-discover new repositories without `ok to test` bombs in
pull requests.
Signed-off-by: Major Hayden <major@redhat.com>
The truthiness of the `Schema` object itself now reflects schema
validation as well, i.e. a schema is only valid if schema information
is present and said information passes validation.
Up until now it was verified that loading the `STAGE_OPTS` data
works, i.e. that it contains valid JSON, but the schema itself
was not verified. Use the new `Schema.check` method to do that.
The `_validator` member of `Schema` is used as an indicator of whether
the provided schema is valid. If `_validator` is not yet set, the
`check` method will attempt to validate the schema data (if present)
and set the `_validator` member once validation has passed. On failure,
i.e. missing schema information or invalid schema data, the
`ValidationResult` will contain the respective error.
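Usage then looks roughly like this (a sketch based on the description
above; the index accessor name is an assumption):

    import osbuild.meta

    index = osbuild.meta.Index("/usr/lib/osbuild")
    schema = index.get_schema("Stage", "org.osbuild.noop")  # accessor assumed

    result = schema.check()
    if not schema:
        # Truthiness now implies "schema information present and valid";
        # on failure the ValidationResult carries the respective error.
        print(result)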
This option will print the manifest in JSON, including all the ids, to
stdout. It will not build the pipeline, but the input manifest will be
validated, and if that fails, the validation result will be returned in
JSON.
Add an option to all description methods to include the respective ids
in the description. It defaults to False to preserve the original
output, which is used in the tests.
Change all the schemata to not allow additional properties. This should
help catch misspelled properties as well as missing schema information
in the stage itself.
Done via a small python3 script:
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
import os
import sys


def list_stages(base):
    return [(base, f) for f in os.listdir(base) if f.startswith("org.osbuild")]


stages = list_stages("stages")
stages += list_stages("assemblers")


def find_line(lines, start):
    for i, l in enumerate(lines):
        if l.startswith(start):
            return i
    return None


NOADD = '"additionalProperties": false'

for stage in stages:
    with open(f"{stage[0]}/{stage[1]}", "r") as f:
        print(f"{stage[0]}/{stage[1]}", file=sys.stderr)
        data = f.readlines()
    i = find_line(data, 'STAGE_OPTS = """')
    if i:
        data.insert(i+1, NOADD + ",\n")
    else:
        i = find_line(data, 'STAGE_OPTS = ""')
        if i:
            data[i] = f'STAGE_OPTS = """\n'
            data.insert(i+1, NOADD + "\n")
            data.insert(i+2, '"""\n')
    with open(f"{stage[0]}/{stage[1]}", "w") as f:
        f.writelines(data)
In the "test_tar" test the compression can be None, which in turn
will be serialized as "compression: null" in the options dict.
However, this is not a valid option according to the schema. The
schema could be adapted to allow for this but it is probably
better to just omit empty or null values.
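One way to do that, sketched here, is to filter the options before they
are serialized (illustrative only, not the exact test code):

    options = {"filename": "image.tar", "compression": None}

    # Omit empty/None values instead of emitting "compression": null,
    # which the schema would reject.
    options = {key: value for key, value in options.items() if value is not None}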
Install the schema to %{_datadir}/osbuild/schema and provide a
link from %{pkgdir}/schema to that location so that the osbuild
library can easily access the schemata.
Validate the options of stages and assembler of the pipeline
before running it. A validation failure will abort the run.
Errors are printed in human-readable form unless `--json` is passed;
for each error, a human-readable message together with a path to the
offending object is given. The syntax of the path is such that it can
be used with the `jq` command to select the item.
Add checks for the new validation API; specifically, check that errors
in the main manifest, the stages and stage options, and the assembler
and assembler options are detected. Also ensure that there are no
duplicated errors due to pipeline nesting (e.g. build pipelines).
This new module contains utilities that help introspect the parts that
constitute the inner workings of osbuild, i.e. its stages and
assemblers (an assembler is also considered a type of stage in this
context). It contains the `StageInfo` class, which holds
meta-information about an individual stage, such as a short summary
(`info`), a longer description (`desc`), and its JSON schema. A new
`Schema` class represents schema data and has a `validate` method that
can be used to check that JSON data conforms to said schema.
An `Index` class can be used to obtain `StageInfo` and `Schema` for
entities identified via `klass` and `name`.
A top-level `validate` method is introduced that can validate manifest
data.
Internally it uses the `jsonschema` package, so add that as a
requirement and install this dependency in the CI.
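Putting it together, usage of the new module might look like this (a
sketch; the accessor name on the index and the exact `validate`
signature are assumptions):

    import osbuild.meta

    index = osbuild.meta.Index("/usr/lib/osbuild")

    # Schema for a single stage, identified via klass and name.
    schema = index.get_schema("Stage", "org.osbuild.noop")
    result = schema.validate({})  # check JSON data against the schema

    # Top-level validation of a whole manifest.
    manifest = {"pipeline": {}, "sources": {}}
    result = osbuild.meta.validate(manifest, index)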
Using the "$schema" without any specific version has been
deprecated[1] and will lead to warnings like:
DeprecationWarning: The metaschema specified by
$schema was not found. Using the latest draft to validate,
but this will raise an error in the future
Fix this by pointing $schema to "draft-07", which is the latest
version fully supported the jsonschema python package, that is
being used internally.
[1]
"The possibility to declare $schema without specific version
(http://json-schema.org/schema#) was deprecated after Draft 4
and should no longer be used."
https://json-schema.org/understanding-json-schema/reference/schema.html
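Concretely, the schema header gains an explicit draft version, e.g.:

    SCHEMA = {
        # Explicit draft keeps jsonschema from guessing (and warning
        # about) the metaschema version.
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
    }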
This reverts commit 33844711cd.
There are systems where our runners have no standard python3 location
available. They will fix the environment before invoking any further
utilities. Therefore, we cannot rely on `python3 foo.py` to work in our
ad-hoc containers.
This simply reverts the behavior back to using the shebang.