For all currently supported modules, i.e. stages and assemblers,
convert the STAGE_DESC and STAGE_INFO into a proper doc-string.
Rename STAGE_OPTS to SCHEMA.
Refactor meta.ModuleInfo loading accordingly.
The script to be used for the conversion is:
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
import os
import sys

import osbuild
import osbuild.meta
from osbuild.meta import ModuleInfo


def find_line(lines, start):
    for i, l in enumerate(lines):
        if l.startswith(start):
            return i
    return None


def del_block(lines, prefix):
    start = find_line(lines, prefix)
    end = find_line(lines[start:], '"""')
    print(start, end)
    del lines[start:start+end+1]


def main():
    index = osbuild.meta.Index(os.curdir)

    modules = []
    for klass in ("Stage", "Assembler"):
        mods = index.list_modules_for_class(klass)
        modules += [(klass, module) for module in mods]

    for m in modules:
        print(m)
        klass, name = m
        info = ModuleInfo.load(os.curdir, klass, name)
        module_path = ModuleInfo.module_class_to_directory(klass)
        path = os.path.join(os.curdir, module_path, name)

        with open(path, "r") as f:
            data = list(f.readlines())

        i = find_line(data, "STAGE_DESC")
        print(i)
        del data[i]

        del_block(data, "STAGE_INFO")

        i = find_line(data, "STAGE_OPTS")
        data[i] = 'SCHEMA = """\n'

        docstr = '"""\n' + info.desc + "\n" + info.info + '"""\n'
        doclst = docstr.split("\n")
        doclst = [l + "\n" for l in doclst]
        data = [data[0]] + doclst + data[1:]

        with open(path, "w") as f:
            f.writelines(data)


if __name__ == "__main__":
    main()
Add a new assembler that takes a tree and creates an Open Container
Initiative[1] image according to the OCI image format[2]. The final
result is a tarball, aka an "oci-archive", that can be pulled into
podman with `podman pull oci-archive:<archive>`. Currently the only
required options are `filename` and `architecture`.
[1] https://www.opencontainers.org/
[2] https://github.com/opencontainers/image-spec/
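For illustration, the options passed to the new assembler might look like
the following (dict notation; only `filename` and `architecture` are the
documented required options, everything else here is an assumption):

    assembler = {
        "name": "org.osbuild.oci-archive",  # hypothetical module name
        "options": {
            "filename": "image.tar",    # the resulting oci-archive tarball
            "architecture": "amd64",    # architecture recorded in the OCI image config
        },
    }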
Introduce a new `tar` option which, when given together with the
required `tar.filename` option, will result in the output of the
assembler being a tarball that contains the repo and the compose
information (`compose.json`).
Requires the `tar` command to be present in the build root. Modify
the sample to use that option and include the tar for the build
pipeline.
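In the assembler options this might look roughly like the following
(dict notation; only `tar.filename` is documented above, and the `ref`
key is taken from the ostree commit assembler description below):

    options = {
        "ref": "fedora/30/x86_64/standard",
        "tar": {
            "filename": "commit.tar",  # required when `tar` is given
        },
    }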
Change all the schemata to not allow additional properties. This
should help with misspelled properties as well as missing schema
information in the stage itself.
Done via a small python3 script:
--- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< --- 8< ---
import os
import sys


def list_stages(base):
    return [(base, f) for f in os.listdir(base) if f.startswith("org.osbuild")]


stages = list_stages("stages")
stages += list_stages("assemblers")


def find_line(lines, start):
    for i, l in enumerate(lines):
        if l.startswith(start):
            return i
    return None


NOADD = '"additionalProperties": false'

for stage in stages:
    with open(f"{stage[0]}/{stage[1]}", "r") as f:
        print(f"{stage[0]}/{stage[1]}", file=sys.stderr)
        data = f.readlines()

    i = find_line(data, 'STAGE_OPTS = """')
    if i:
        data.insert(i+1, NOADD + ",\n")
    else:
        i = find_line(data, 'STAGE_OPTS = ""')
        if i:
            data[i] = 'STAGE_OPTS = """\n'
            data.insert(i+1, NOADD + "\n")
            data.insert(i+2, '"""\n')

    with open(f"{stage[0]}/{stage[1]}", "w") as f:
        f.writelines(data)
Drop the `osbuild -> ../osbuild` symlink from all module directories.
We now properly initialize the PYTHONPATH to provide the imported
osbuild module from the host environment. Therefore, these links are no
longer needed.
The sources run from the host environment, so they should just pick
it up from the environment the same way osbuild itself does.
By default, xz only uses one CPU core even if multiple cores are
available. If xz compression is chosen, allow xz to use all of the
cores available.
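For reference, this is controlled by xz's `--threads` flag; a value of 0
tells xz to use as many worker threads as there are cores. A minimal
sketch of such an invocation (the surrounding assembler code is assumed):

    import subprocess

    def xz_compress(path):
        # --threads=0 means "use as many threads as there are CPU cores";
        # the default is a single thread, even on multi-core machines
        subprocess.run(["xz", "--threads=0", path], check=True)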
Signed-off-by: Major Hayden <major@redhat.com>
Add a new assembler that takes a file system tree that is already
conforming to the ostree system layout[1], creates a new repository
in archive mode and commits the file system tree to it. Afterwards,
a reference is created with the value supplied in `ref`.
The repository is located in the `/repo` directory and additional
metadata is written to `/compose.json`, which contains the compose
information.
Currently uses rpm-ostree to do the actual committing. In the future
this might change to plain ostree.
[1] https://ostree.readthedocs.io/en/stable/manual/adapting-existing/
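Conceptually the assembler performs the equivalent of the following
plain-ostree steps (shown for illustration only; the actual
implementation currently shells out to rpm-ostree):

    import subprocess

    def commit_tree(tree, output, ref):
        repo = f"{output}/repo"
        # create the repository in archive mode ...
        subprocess.run(["ostree", "init", "--repo", repo, "--mode", "archive"],
                       check=True)
        # ... and commit the ostree-layout file system tree under the given ref
        subprocess.run(["ostree", "commit", "--repo", repo, "--branch", ref, tree],
                       check=True)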
VHDX is the best format for uploading to AWS, so this commit adds
support for it. Pros over other formats supported by AWS:
- vmdk - doesn't work, qemu-img probably needs some special options
- vhd - the image size gets rounded up (I can get only a >=7GB volume
  from a 6GB image)
- ova - just a wrapper over vmdk/vhd/vhdx adding some metadata
- raw - no compression, the images are huge
Also, the format specification is open, therefore I can't see any issues
with it.
The GUID Partition Table (GPT) layout supports assigning UUIDs for
individual partitions. Add support for specifying those in the
partition description.
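For illustration, a partition entry might then carry a `uuid` alongside
the existing keys (dict notation; the exact shape is an approximation):

    partition = {
        "start": 2048,
        "size": 972800,
        "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",  # Linux filesystem GPT type GUID
        "uuid": "68B2905B-DF3E-4FB3-80FA-49D1E773AA33",  # per-partition UUID (new)
        "filesystem": {"type": "ext4", "mountpoint": "/"},
    }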
The grub prefix ("/boot/grub2") should be defined as relative to the
mountpoint of the filesystem containing it, i.e. /boot/grub2 if it is
on the root filesystem or /grub2 if /boot is on a separate partition.
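In code, the rule amounts to something like the following sketch
(names are illustrative):

    def grub_prefix(boot_is_separate_partition):
        # the prefix is relative to the filesystem that holds it:
        # /boot/grub2 when it lives on the root filesystem,
        # /grub2 when /boot is its own partition
        return "/grub2" if boot_is_separate_partition else "/boot/grub2"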
Support the s390x bootloader zipl (z Initial Program Loader). We
supply the parameters for the kernel+initrd as well as the target,
i.e. the boot partition where the bootmap is created, and the device,
here called 'targetbase', to install the bootloader on, including
parameters describing the device (type, blocksize) and the offset
of the partition containing the target from the start of the device
(in sectors).
The kernel and initrd are found via the bootloader entry, ignoring
the rescue kernel.
Since zipl needs the device as well as access to the boot partition,
the image is bound to a loopback device. Also keep the filesystem
tree mounted during the execution of the zipl installation.
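Purely as an illustration of the option surface described above (all
key names here are hypothetical, not the actual schema):

    options = {
        "kernel_opts": "root=/dev/disk/by-uuid/...",  # parameters for kernel+initrd (hypothetical key)
        "target": "/boot",           # boot partition where the bootmap is created
        "targetbase": "/dev/loop0",  # device to install the bootloader on
        "targettype": "scsi",        # device type (hypothetical key)
        "blocksize": 512,            # device block size
        "offset": 2048,              # offset of the target partition, in sectors
    }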
Include the `bootloader` options in the STAGE_OPTS JSON schema.
Commit 8fcf7d5c4… introduced the `bootloader` option but the
corresponding schema entry was omitted.
With the introduction of the `bootloader` option, the grub2 legacy
installation behavior changed. Before, grub2 legacy installation
was dependent on the partition scheme, i.e. grub2 only got installed
when a dos/mbr layout was used. After the change, the default is to
install it unless `bootloader.type` is explicitly set, even if the
partition layout is GPT. But a legacy grub2 installation on GPT
requires a BIOS boot partition, so the new default is not right for
the case of pure (non-hybrid) UEFI images.
Therefore revert to the old behavior of only defaulting to grub2
legacy if the option is not explicitly set *and* the partition
layout is "dos"/"mbr".
Adapt the f30-qcow2-gpt sample, which is non-UEFI grub2 legacy
but with GPT and a BIOS boot partition, to explicitly request
the grub2 bootloader.
As noted in earlier commits, the grub2 boot image needs to be patched
to contain the position of the grub2 core. By default, the location
in the boot image is hard-coded to be the MBR gap (sector 1), but for
GPT partition schemes a separate BIOS boot partition is used that is
located at a "random" location. Refactor the code to generalize the
boot image patching, so that the default MBR gap location is just a
special case of the general mechanism.
The GRUB2 bootloader in legacy mode, i.e. non-EFI mode, consists of
several stages. The first one, placed in the Master Boot Record of
the disk, will load and execute the next, second stage, consisting
of core modules and the grub kernel. The first bit is also known as
'boot' and the second as 'core'. When the 'MBR' partition layout is
being used, there is a gap between the Master Boot Record (MBR) and
the first partition (for historical and performance reasons). The
core image is normally placed into this gap (called the MBR gap).
When the partition layout is 'gpt' there is no standard gap that can
be used; instead a special partition ("BIOS boot" [1]) needs to be
created that can store the grub2 core image. Additionally, the 'boot'
image needs to be modified to point to the sector of that partition.
The core image itself also needs to be modified with the information
about the location of its own second sector. The locations of the
pointers were taken from the grub2 source ([2] at commit [3]). For
the 'boot' image it is 'GRUB_BOOT_MACHINE_KERNEL_SECTOR' (0x5c) from
'pc/boot.h' and for the core image "0x200 - GRUB_BOOT_MACHINE_LIST_SIZE
(12)" to be found in 'pc/diskboot.S'.
[1] https://en.wikipedia.org/wiki/BIOS_boot_partition
[2] https://github.com/rhboot/grub2
[3] 2a2e10c1b39672de3d5da037a50d5c371f49b40d
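A minimal sketch of the patching this implies, assuming the sector
numbers are written as 8-byte little-endian values at the offsets
quoted above (the field width is an assumption, not taken verbatim
from the implementation):

    import struct

    GRUB_BOOT_MACHINE_KERNEL_SECTOR = 0x5c   # from grub2 pc/boot.h
    CORE_SECOND_SECTOR_POINTER = 0x200 - 12  # GRUB_BOOT_MACHINE_LIST_SIZE, pc/diskboot.S

    def patch_boot_image(boot, core_sector):
        # point the 'boot' image at the sector holding the 'core' image
        boot[GRUB_BOOT_MACHINE_KERNEL_SECTOR:GRUB_BOOT_MACHINE_KERNEL_SECTOR + 8] = \
            struct.pack("<Q", core_sector)

    def patch_core_image(core, core_sector):
        # the core image records where its own second sector lives
        core[CORE_SECOND_SECTOR_POINTER:CORE_SECOND_SECTOR_POINTER + 8] = \
            struct.pack("<Q", core_sector + 1)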
Extract the small piece of code that writes grub2's boot image,
i.e. the first stage of the bootloader that will in turn jump to
the second stage. Currently the position of the core is hard-coded
to be the MBR gap, i.e. the gap between the MBR and the start of
the first partition. This is not a necessity, e.g. when using a
dedicated BIOS boot partition on GPT partition layouts. This
refactoring should make it easier to add code dealing with such
situations.
Introduce support for ppc64le (Open Firmware). The main difference
from x86 legacy, i.e. non-EFI, is that no stage 1 is required because
the core image is stored on a special 'PReP' partition, which must
be marked as bootable. The firmware then looks for that partition
and directly loads the core from there and executes it.
Introduce a `platform` parameter for the grub installer code which
controls various platform-dependent aspects, including a) the path
for the modules, b) what modules are compiled into the core, c) if
the boot image is written to the MBR and d) where to write the core
image, i.e. MBR gap or PReP partition.
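A sketch of what such a platform switch could look like (module names
and paths are illustrative, not the actual implementation):

    # Illustrative dispatch table only; the keys mirror points a)-d) above.
    PLATFORMS = {
        "i386-pc": {
            "moddir": "/usr/lib/grub/i386-pc",  # a) path for the modules
            "modules": ["part_msdos", "ext2"],  # b) modules compiled into the core
            "write_boot_image": True,           # c) boot image goes into the MBR
            "core_location": "mbr-gap",         # d) where the core image is written
        },
        "powerpc-ieee1275": {
            "moddir": "/usr/lib/grub/powerpc-ieee1275",
            "modules": ["part_gpt", "ext2"],
            "write_boot_image": False,          # no stage 1 on Open Firmware
            "core_location": "prep-partition",
        },
    }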
Extract the function that writes the grub2 core to the image file.
The only supported location currently is the MBR gap, which is the
gap between the Master Boot Record and the first partition, which
for historical and performance reasons is aligned to a certain
sector (it used to be 64 but is now even larger at 2048). In the
future other locations for the grub2 core will be supported such
as the PReP partition (ppc64le) or bios-boot (GPT hybrid booting).
Make the bootloader selection explicit by introducing a new option
called `bootloader`, which is an object containing the `type` and
options belonging to the bootloader. For now the only bootloader
that is supported is "grub2".
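In the assembler options this looks roughly like the following
(dict notation):

    options = {
        # explicit bootloader selection; "grub2" is the only supported type for now
        "bootloader": {"type": "grub2"},
    }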
Instead of hard-coding "msdos1", determine this partition id
dynamically based on the partition table type and the index
of the partition that contains /boot/grub2, which normally is
either a separate boot partition or the root partition. In
order to be able to do so, set the index of each Partition
when the partition information is read back via `sfdisk`.
NB: partition indexes start at 1 for grub2.
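A sketch of the lookup, assuming the prefix is "msdos" for dos/mbr
tables and "gpt" for GPT (the function name is illustrative):

    def grub_partition_id(pttype, index):
        # grub2 names partitions msdosN/gptN, with N starting at 1
        prefix = "gpt" if pttype == "gpt" else "msdos"
        return f"{prefix}{index}"

    # e.g. the partition containing /boot/grub2 at index 1 on a dos table:
    # grub_partition_id("dos", 1) -> "msdos1"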
The filesystem module that grub2 needs to have in the core image
is the one for the filesystem containing the grub modules, specifically
"normal.mod", as well as the grub configuration. In the standard
case, which is also what osbuild uses, this is /boot/grub2; thus
we actually do want the filesystem containing that directory, and
its type, not the root filesystem.
Explain the concept and reason behind the grub2 core as well as the
details behind the selection of the core modules that get included.
Also elaborate a bit on the MBR gap. For more details about this see
https://en.wikipedia.org/wiki/GNU_GRUB#Version_2_(GRUB_2)
NB: This commit also changes the order of the grub modules, which in
turn changes the layout of the core.img and thus the hash value used
in the test; adapt those values to reflect the changed core.img.
The GPT (GUID Partition Table) standard for partition layout supports
giving partitions a name. Support this in the Partition object as well
as in the options for the qemu stage when specifying the partition
layout.
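An illustrative partition entry carrying a `name` (the rest of the
entry is an assumption):

    partition = {
        "start": 2048,
        "size": 204800,
        "type": "C12A7328-F81F-11D2-BA4B-00A0C93EC93B",  # EFI System Partition type GUID
        "name": "ESP",                                   # GPT partition name (new)
    }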
Introduce a method on the PartitionTable that returns the partition
containing the root filesystem. NB: this does not have to be the
first partition (which could be the EFI partition, or something
else), so we have to iterate through the partitions until we find
it.
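A minimal sketch of such a method (the class layout is illustrative,
assuming each partition object exposes a `filesystem` with a
`mountpoint`):

    class PartitionTable:
        def __init__(self, partitions):
            self.partitions = partitions

        def find_root(self):
            # the root partition is not necessarily the first one
            # (e.g. the EFI partition may come before it), so search
            # by mountpoint instead
            for part in self.partitions:
                fs = part.filesystem
                if fs and fs.mountpoint == "/":
                    return part
            return None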
Instead of having dictionaries representing the partition table,
partitions and filesystems together with some functions operating
on them, have proper Python objects with methods. In the future
these objects could be extracted and properly tested as well.
Add mkfs_vfat and hook it up into the generic mkfs_for_type()
dispatcher function. Install grub2 to the MBR only if the partition
table is of type "MBR".
Introduce two new assembler options `pttype` and `partitions` to
allow fine-grained control over how the partition table is created.
The first one controls the partition table type, either `mbr` (default,
when the key is missing) or `gpt`; if specified the `partitions`
key must contain a list of objects describing the individual
partitions (`start`, `size`, `type`) together with a `filesystem`
object describing the filesystem (`type`, `uuid`, `mountpoint`) to
be created on that partition.
If the `pttype` option is missing, the legacy mode is used where
`root_fs_uuid` and `root_fs_type` need to be specified.
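Putting the keys listed above together, an options block might look
like this (dict notation; the values and the surrounding `format` and
`filename` keys are illustrative):

    options = {
        "format": "qcow2",
        "filename": "disk.qcow2",
        "pttype": "gpt",
        "partitions": [
            {
                "start": 2048,
                "size": 972800,
                "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
                "filesystem": {
                    "type": "ext4",
                    "uuid": "76a22bf4-f153-4541-b6c7-0332c0dfaeac",
                    "mountpoint": "/",
                },
            },
        ],
    }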
Use the newly available partition information in the install_grub2
method: detect which module to use for the root filesystem and
assert the second stage fits between the MBR and the first partition.
Introduce a generic mkfs_for_type() function that will dispatch
to the correct mkfs function depending on the type. Additionally
refactor the partition creation and mounting code to handle more
than one partition.
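A sketch of the dispatcher shape (the function name follows the one
mentioned in these commits; the signature and the exact mkfs
invocations are assumptions):

    import subprocess

    def mkfs_for_type(device, fs):
        # dispatch on the filesystem type requested for this partition
        fstype = fs["type"]
        if fstype == "ext4":
            cmd = ["mkfs.ext4", "-U", fs["uuid"], device]
        elif fstype == "xfs":
            cmd = ["mkfs.xfs", "-m", f"uuid={fs['uuid']}", device]
        elif fstype == "vfat":
            cmd = ["mkfs.vfat", device]
        else:
            raise ValueError(f"unknown filesystem type: {fstype}")
        subprocess.run(cmd, check=True)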
Part of the refactoring to support uefi/gpt: the method that creates
the partition table now returns an array of dictionaries corresponding
to the individual partitions that have been created together with the
information for the filesystem that this partition should end up with.
Prepare the stage for uefi/gpt support by extracting the code that
installs GRUB and creates the partitions into its own functions.
Should not have any effect on the actual data written to the image.
Commit 283281f broke compression by appending the argument last to the
tar command line. It needs to appear before the file.
Fix that and add a test.
[teg: add minor fix]
This introduces the `root_fs_type` option on the org.osbuild.rawfs
assembler. It only accepts "ext4" and "xfs" values right now and
defaults to "ext4" to preserve backwards compatibility.
This introduces the `root_fs_type` option on the org.osbuild.qemu
assembler. It only accepts "ext4" and "xfs" values right now and
defaults to "ext4" to preserve backwards compatibility.
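For illustration, the qemu assembler options would then look roughly
like this (surrounding keys are illustrative):

    options = {
        "format": "qcow2",
        "filename": "disk.qcow2",
        "root_fs_uuid": "76a22bf4-f153-4541-b6c7-0332c0dfaeac",
        "root_fs_type": "xfs",  # "ext4" (default) or "xfs"
    }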
This commit adds semi-structured documentation to all osbuild stages and
assemblers. The variables added work like this:
* STAGE_DESC: Short description of the stage.
* STAGE_INFO: Longer documentation of the stage, including expected
behavior, required binaries, etc.
* STAGE_OPTS: A JSON Schema describing the stage's expected/allowed
options. (see https://json-schema.org/ for details)
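For illustration, a stage would then carry something like the following
at module level (the content is abbreviated and hypothetical):

    STAGE_DESC = "Configure the system locale"

    STAGE_INFO = """
    Longer description of what the stage does, which binaries it
    expects in the build root, and any other expected behavior.
    """

    STAGE_OPTS = """
    "properties": {
        "language": {"type": "string"}
    },
    "required": ["language"]
    """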
It also has a little unittest to check stageinfo - specifically:
1. All (executable) stages in stages/* and assemblers/ must define strings named
STAGE_DESC, STAGE_INFO, and STAGE_OPTS
2. The contents of STAGE_OPTS must be valid JSON (if you put '{' '}'
around it)
3. STAGE_OPTS, if non-empty, should have a "properties" object
4. if STAGE_OPTS lists "required" properties, those need to be present
in the "properties" object.
The test is *not* included in .travis.yml because I'm not sure we want
to fail the build for this, but it's still helpful as a lint-style
check.
Rather than relying on the offset parameter, simply run mkfs on the
loopback device that is being set up anyway. This also allows us
not to specify the size explicitly.
Before this patch mkfs would complain (unnecessarily) about the
backing file containing a partition table. This is a false positive
as the partition table is in the region of the file before the
passed offset.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We know the root partition we want, as we are setting it up. There
is no need to search for it by filesystem UUID. This simplifies the
setup and means the level 1.5 bootloader is always the same, and
not dependent on an embedded UUID.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Similar to the existing test, but uses qemu-nbd to mount the generated
image.
Using unittest.TestCase.subTest() for now, which means that the tests
aren't very independent. I think this is fine in this case, because
we're testing images independently from each other, reusing the base
tree in the store.