`tox` is a standard testing tool for Python projects; it allows you to
test locally with all your installed Python versions. The following
command runs the tests in parallel for all supported Python versions:
`tox -m test -p all`
To run linters or type analysis:
```
tox -m lint -p all
tox -m type -p all
```
This commit *also* disables the `import-error` warning from `pylint`,
since not all Python versions have the system-installed Python libraries
available and they cannot be fetched from PyPI.
Some linters have been added and the general order in which linters run
has been changed. This allows for quicker failure when running
`tox -m lint`. As a consequence the `test_pylint` test has been removed,
as its role can now be fulfilled by `tox`.
Other assorted fixes due to newer linter versions:
- use the `str.join` method (`consider-using-join`)
- fix various (newer) mypy and pylint issues
- add the missing space after `#` in comments, as flagged by `autopep8`
This also changes our CI to use the new `tox` setup and on top of that
pins the versions of linters used. This might move into separate
requirements.txt files later on to allow for easier updating of those
dependencies.
This allows setting `Entrypoint` (as well as `Cmd`) in the OCI image,
as per the spec:
https://github.com/opencontainers/image-spec/blob/main/config.md
Note: these two are not equivalent: the `Cmd` part is replaced by the
arguments on the `docker run` command line, whereas the `Entrypoint` is
kept from the config, so it is important to expose both of them.
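For illustration, a minimal sketch of the relevant fields in an OCI image
config as described by the spec linked above; the concrete values are
placeholders:
```
# Sketch of the OCI image config fields in question (values are illustrative).
oci_config = {
    "architecture": "amd64",
    "os": "linux",
    "config": {
        # Kept from the image config even when arguments are passed:
        "Entrypoint": ["/usr/bin/my-service"],
        # Replaced by the arguments given on the `docker run` command line:
        "Cmd": ["--default-argument"],
    },
}
```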
Drop `CAP_MAC_ADMIN` from the default capabilities; it is needed
to write and read(!) unknown SELinux labels. Adjust the stages
that need to read or write SELinux labels accordingly.
Drop `CAP_NET_ADMIN` and `CAP_SYS_PTRACE` from the default capabilities;
they are only needed to run bwrap from inside a stage, which is done by
the `ostree.commit` and `ostree.preptree` stages, so retain them
directly there.
Commit 92cc269 fixed a bug where `/var` was copied into `/var`,
resulting in `/var/var`. Sadly the fix broke copying links,
like `bin -> usr/bin`, where now the content of the link would
be copied but not the link itself. Use the `-t` command line
flag for `cp`, which should ensure that we copy links as links
but also copy the contents for `/var` should the target directory,
i.e. `/var`, already exist.
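For illustration, a sketch of the kind of `cp` invocation described above;
the paths and the `-a` flag are assumptions, the point being `-t`, which
copies the sources *into* the target directory:
```
import subprocess

# Copy the listed sources into the target directory: with `-a` a symlink such
# as `bin -> usr/bin` is preserved as a link, and an already existing
# /target/var has the contents of /source/var merged into it rather than
# ending up as /target/var/var. Paths are placeholders.
subprocess.run(
    ["cp", "-a", "-t", "/target", "/source/var", "/source/bin"],
    check=True,
)
```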
In the ostree assembler, `var`, `usr` and `boot` are copied from
the built tree to a newly initialized and ostree-conforming root
filesystem. The way in which `cp` was called resulted in the
source being created inside the target if the latter existed.
This was the case for `var`, resulting in `var/var`.
Use `cp ${source}/. ${target}` to fix that.
Reported-by: Luca Bruno <luca.bruno@coreos.com>
Work around a bug on aarch64[1] where `qemu-img` would hang
about a third of the time when converting images. To be able
to activate the work-around based on the environment, i.e.
only on certain distributions, introduce an environment
variable, `OSBUILD_QEMU_IMG_COROUTINES`, that is set in the
runner and then picked up in the assembler.
[1] https://bugs.launchpad.net/qemu/+bug/1805256
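For illustration, a sketch of how the variable could be consumed in the
assembler; the use of `-m` (the number of coroutines for `qemu-img convert`)
and the file names are assumptions:
```
import os
import subprocess

cmd = ["qemu-img", "convert", "-O", "qcow2", "disk.raw", "disk.qcow2"]

# Only applied when the runner has set the variable, i.e. on the affected
# distributions.
coroutines = os.environ.get("OSBUILD_QEMU_IMG_COROUTINES")
if coroutines:
    cmd += ["-m", coroutines]

subprocess.run(cmd, check=True)
```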
Add a new option `qcow2_compat` which can be used to explicitly
select the compatibility level of the qcow2 file format. Qemu
version 1.1 introduced extensions to the format that became
the default with 1.7, but which are not readable by qemu < 1.1.
Thus, if the resulting qcow2 should be readable by such older qemu
versions, the compatibility level needs to be set to 0.10.
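For illustration, a sketch of selecting the compatibility level with
qemu-img; the file names are placeholders:
```
import subprocess

# `compat=0.10` keeps the resulting qcow2 readable by qemu < 1.1.
subprocess.run(
    ["qemu-img", "convert", "-O", "qcow2", "-o", "compat=0.10",
     "disk.raw", "disk.qcow2"],
    check=True,
)
```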
This is, like the stage with the same name, an assembler that
will exit with an error code (default 255, but can be specified
via the assembler options). It is mostly useful for testing.
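For illustration, a minimal sketch of such an assembler; the option name
`returncode` and the exact osbuild API usage are assumptions:
```
#!/usr/bin/python3
import sys

import osbuild.api


def main(options):
    # Exit with the configured error code, defaulting to 255.
    return options.get("returncode", 255)


if __name__ == "__main__":
    args = osbuild.api.arguments()
    sys.exit(main(args.get("options", {})))
```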
Instead of including SELinux labels for the content layers via the
`--selinux` tar option, make sure SELinux labels are not included by
using the `--no-selinux` option.
The inclusion of the labels was a mistake, since they should be
determined by the target system, because SELinux labels are not
namespaced. On RHEL/Fedora the SELinux label used is something like
`system_u:object_r:container_ro_file_t:s0` for all the files in the
container.
Including the label was leading to permission problems because
the files had a different label on the host and programs inside
the container get `EACCES`, i.e. Permission denied, errors when
accessing files with the different label.
Interestingly, this does not happen on Fedora 33 but only on RHEL.
One possibility is that the overlayfs kernel driver is behaving
differently on RHEL than on Fedora.
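For illustration, a sketch of the kind of tar invocation this amounts to;
the archive and tree paths are placeholders and the exact set of flags the
assembler passes is an assumption:
```
import subprocess

# Create a content layer without SELinux labels, while still keeping ACLs
# and extended attributes.
subprocess.run(
    ["tar", "--no-selinux", "--acls", "--xattrs",
     "-cf", "layer.tar", "-C", "/path/to/tree", "."],
    check=True,
)
```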
Add the ability to opt out of preserving the ACLs, SELinux
contexts and extended attributes. It is opt-out instead of
opt-in since the assembler by default tries to preserve as
much as possible.
All of these options, i.e. SELinux labels, ACLs and extended
attributes (xattrs), are opt-in and thus were, until now, ignored.
This led to trees that were missing their SELinux labels and
were thus incorrect.
Instead of using the `Assembler` class to represent assemblers,
use the `Stage` class: the `Pipeline.add_assembler` method will
now instantiate a `Stage` instead of an `Assembler`. The tree
that the pipeline built is converted to an input (while loading
the manifest description in `format/v1.py`) and all existing
assemblers are converted to use that input as the tree input.
The assembler run test is removed as the `Assembler` class itself
is not used (i.e. run) anymore.
Instead of reading the arguments from sys.stdin, which requires
that stdin is set up properly for that in the runner, use the new
api.arguments() method to directly fetch the arguments.
Also fix missing newlines between imports and methods to be more
PEP 8 compliant, where needed.
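For illustration, a sketch of the change in a module's entry point; the
surrounding code and the `options` key are assumptions, only
`osbuild.api.arguments()` is the method referred to above:
```
import osbuild.api

# Before, the arguments were read from stdin, e.g. via `json.load(sys.stdin)`,
# which required the runner to set stdin up accordingly. Now they are fetched
# directly from the API:
args = osbuild.api.arguments()
options = args.get("options", {})
```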
Add a new `os_version` option that will result in the `version`
metadata being set as commit metadata. This will then be shown
in the `rpm-ostree status` output.
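For illustration only, one way such version metadata can be attached to a
commit with plain ostree; the assembler currently commits via rpm-ostree, so
the actual mechanism may differ, and the repo, ref and version values are
placeholders:
```
import subprocess

# Attach the version as commit metadata so it shows up in
# `rpm-ostree status`.
subprocess.run(
    ["ostree", "--repo=/repo", "commit", "--branch=example/ref",
     "--add-metadata-string=version=33.1",
     "--tree=dir=/path/to/tree"],
    check=True,
)
```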
In three places we have more than 7 instance attributes, but fewer
than 10; instead of disabling the warning for all these cases,
increase the limit to a reasonable size of 10 and re-enable the
warnings in all those places.
The `capture_output` argument for subprocess.run was added in Python 3.7,
but we want to support 3.6 as well. Replace all usages of it with
`stdout=subprocess.PIPE`, which has the same effect, at least
for stdout.
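For illustration, the two forms side by side (the command is a placeholder);
note that `capture_output=True` also captures stderr, while the replacement
only captures stdout:
```
import subprocess

# Python 3.7+ only:
#   res = subprocess.run(["rpm", "--version"], capture_output=True, check=True)

# Python 3.6 compatible, capturing stdout only:
res = subprocess.run(["rpm", "--version"], stdout=subprocess.PIPE, check=True)
print(res.stdout.decode())
```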
Fix all occurrences of f-strings without any interpolation. pylint
warns about those (and for some reason did not do so for our modules).
A follow-up will fix the pylint tests, so make sure all the warnings are
resolved.
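For illustration, the pattern being fixed; the message text is a placeholder:
```
# Flagged by pylint (f-string-without-interpolation): no placeholders used.
message = f"starting the build"

# Fixed: a plain string literal.
message = "starting the build"
```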
For all currently supported modules, i.e. stages and assemblers,
convert the STAGE_DESC and STAGE_INFO into a proper doc-string.
Rename STAGE_OPTS to SCHEMA.
Refactor meta.ModuleInfo loading accordingly.
The script used for the conversion is:
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
import os
import sys

import osbuild
import osbuild.meta
from osbuild.meta import ModuleInfo


def find_line(lines, start):
    for i, l in enumerate(lines):
        if l.startswith(start):
            return i
    return None


def del_block(lines, prefix):
    start = find_line(lines, prefix)
    end = find_line(lines[start:], '"""')
    print(start, end)
    del lines[start:start+end+1]


def main():
    index = osbuild.meta.Index(os.curdir)

    modules = []
    for klass in ("Stage", "Assembler"):
        mods = index.list_modules_for_class(klass)
        modules += [(klass, module) for module in mods]

    for m in modules:
        print(m)
        klass, name = m
        info = ModuleInfo.load(os.curdir, klass, name)
        module_path = ModuleInfo.module_class_to_directory(klass)
        path = os.path.join(os.curdir, module_path, name)

        with open(path, "r") as f:
            data = list(f.readlines())

        i = find_line(data, "STAGE_DESC")
        print(i)
        del data[i]

        del_block(data, "STAGE_INFO")

        i = find_line(data, "STAGE_OPTS")
        data[i] = 'SCHEMA = """\n'

        docstr = '"""\n' + info.desc + "\n" + info.info + '"""\n'
        doclst = docstr.split("\n")
        doclst = [l + "\n" for l in doclst]
        data = [data[0]] + doclst + data[1:]

        with open(path, "w") as f:
            f.writelines(data)


if __name__ == "__main__":
    main()
Add a new assembler that takes a tree and creates an Open Container
Initiative[1] image according to the OCI image format[2]. The final
result is a tarball, aka an "oci-archive", that can be pulled into
podman with `podman pull oci-archive:<archive>`. Currently the only
required options are `filename` and `architecture`.
[1] https://www.opencontainers.org/
[2] https://github.com/opencontainers/image-spec/
Introduce a new `tar` option which, when given together with the
required `tar.filename` option, will result in the output of the
assembler being a tarball that contains the repo and the compose
information (`compose.json`).
This requires the `tar` command to be present in the build root. Modify
the sample to use that option and to include tar in the build
pipeline.
Change all the schemata to not allow additional properties. This
should help with misspelled properties as well as missing schema
information in the stage itself.
Done via a small python3 script:
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
import os
import sys


def list_stages(base):
    return [(base, f) for f in os.listdir(base) if f.startswith("org.osbuild")]


stages = list_stages("stages")
stages += list_stages("assemblers")


def find_line(lines, start):
    for i, l in enumerate(lines):
        if l.startswith(start):
            return i
    return None


NOADD = '"additionalProperties": false'

for stage in stages:
    with open(f"{stage[0]}/{stage[1]}", "r") as f:
        print(f"{stage[0]}/{stage[1]}", file=sys.stderr)
        data = f.readlines()

    i = find_line(data, 'STAGE_OPTS = """')
    if i:
        data.insert(i+1, NOADD + ",\n")
    else:
        i = find_line(data, 'STAGE_OPTS = ""')
        if i:
            data[i] = f'STAGE_OPTS = """\n'
            data.insert(i+1, NOADD + "\n")
            data.insert(i+2, '"""\n')

    with open(f"{stage[0]}/{stage[1]}", "w") as f:
        f.writelines(data)
Drop the `osbuild -> ../osbuild` symlink from all module directories.
We now properly initialize the PYTHONPATH to provide the imported
osbuild module from the host environment. Therefore, these links are no
longer needed.
The sources run from the host environment, so they should just pick it
up from the environment the same way osbuild itself does.
By default, xz only uses one CPU core even if multiple cores are
available. If xz compression is chosen, allow xz to use all of the
cores available.
Signed-off-by: Major Hayden <major@redhat.com>
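For illustration, a sketch of the resulting xz invocation; the file name is a
placeholder, and `--threads=0` lets xz pick the number of threads based on
the available cores:
```
import subprocess

# Compress with as many threads as there are available cores.
subprocess.run(["xz", "--threads=0", "disk.img"], check=True)
```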
Add a new assembler that takes a file system tree that already
conforms to the ostree system layout[1], creates a new repository
in archive mode and commits the file system tree to it. Afterwards,
a reference is created with the value supplied in `ref`.
The repository is located at the `/repo` directory and additional
metadata is in `/compose.json`, which contains the compose information.
Currently this uses rpm-ostree to do the actual committing. In the future
this might change to plain ostree.
[1] https://ostree.readthedocs.io/en/stable/manual/adapting-existing/
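For illustration, a minimal sketch of the repository setup described above;
the committing itself is done via rpm-ostree in the assembler, and on older
ostree versions the mode may be spelled `archive-z2`:
```
import subprocess

# Initialize the archive-mode repository at the /repo location mentioned.
subprocess.run(["ostree", "--repo=/repo", "init", "--mode=archive"], check=True)
```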
VHDX is the best format for uploading to AWS, thus this commit adds
support for it. Pros over other formats supported by AWS:
- vmdk - doesn't work, qemu-img probably needs some special options
- vhd - the image size gets rounded up (I can only get a >=7GB volume from
  a 6GB image)
- ova - just a wrapper over vmdk/vhd/vhdx adding some metadata
- raw - no compression, the images are huge
Also, the format specification is open, so I can't see any issues
with it.
The GUID Partition Table (GPT) layout supports assigning UUIDs for
individual partitions. Add support for specifying those in the
partition description.
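For illustration, a sketch of assigning fixed partition UUIDs with an sfdisk
script; whether the assembler uses sfdisk, and the concrete sizes, types,
device and UUIDs, are assumptions:
```
import subprocess

# An sfdisk script may set a fixed `uuid=` per partition on a GPT label.
script = """\
label: gpt
size=512MiB, type=uefi, uuid=68B2905B-DF3E-4FB3-80FA-49D1E773AA33
type=linux, uuid=6264D520-3FB9-423F-8AB8-7A0A8E3D3562
"""
subprocess.run(["sfdisk", "/dev/loop0"], input=script.encode(), check=True)
```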
The grub prefix ("/boot/grub2") should be defined as relative to the
mountpoint of the filesystem containing it, i.e. /boot/grub2 if it is
on the root filesystem or /grub2 if boot is on a separate partition.
Support the s390x bootloader zipl (z Initial Program Loader). We
supply the parameters for the kernel and initrd as well as the target,
i.e. the boot partition where the bootmap is created, and the device
to install the bootloader on, here called 'targetbase', including
parameters describing the device (type, blocksize) and also the
offset of the partition containing the target from the start of the
device (in sectors).
The kernel and initrd are found via the bootloader entry, ignoring
the rescue kernel.
Since zipl needs the device as well as access to the boot partition,
the image is bound to a loopback device. Also keep the filesystem
tree mounted during the execution of the zipl installation.
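For illustration, a sketch of the kind of zipl invocation this amounts to;
all paths, the device name and the numeric values are placeholders:
```
import subprocess

subprocess.run(
    ["zipl",
     "--target", "/boot",                    # boot partition where the bootmap is created
     "--image", "/boot/vmlinuz",
     "--ramdisk", "/boot/initramfs.img",
     "--parameters", "root=/dev/disk/by-uuid/...",
     "--targetbase", "/dev/loop0",           # device to install the bootloader on
     "--targettype", "SCSI",                 # device type
     "--targetblocksize", "512",             # device blocksize
     "--targetoffset", "2048"],              # partition offset in sectors
    check=True,
)
```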