This fixes a bunch of minor pylint warnings:
* Drop unused imports.
* Fix "inline-JSON" formatting.
* Fix space before/after brackets.
* Use `_` for unused variables.
* Break overlong lines.
* Mark unittest methods as `no-self-use` where applicable.
* Drop spurious newline at end of file.
This fixes 3 things:
* Drop an unused argument from the http server handler.
* Break an overlong `with` statement.
* Fix indentation where it is wrong.
This drops the `server` alias for `http.server`. There is only a single
caller, so let's just be explicit so the callsite is easier to
understand.
As a side effect, this unifies all the imports; no cherry-picking
anymore.
This stage runs a given command only on the first boot of the image,
useful for doing instantiation tasks that can only be done in the
target environment, or that should be done per-instance rather
than per-image.
Ideally we would use systemd's ConditionFirstBoot for this, but that
requires images to ship without an /etc/machine-id, and currently
we only support shipping images with an empty /etc/machine-id.
Changing this would mean dropping /etc/fstab in favor of mounting
the rootfs rw from the initrd. This is likely the right thing to
do regardless, but we would have to audit what other first-boot
services we would end up pulling in in that case.
Instead we introduce our own flag file /etc/osbuild-first-boot,
and use ConditionPathExists.
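A unit gated on that flag could look roughly like this (the unit
name, command and cleanup step are illustrative, not the stage's
actual output):
```
[Unit]
Description=First-boot command (illustrative)
ConditionPathExists=/etc/osbuild-first-boot

[Service]
Type=oneshot
ExecStart=/usr/bin/true
ExecStartPost=/usr/bin/rm -f /etc/osbuild-first-boot

[Install]
WantedBy=multi-user.target
```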
Signed-off-by: Tom Gundersen <teg@jklm.no>
Add a helper, `parse_config`, to parse a selinux configuration file,
see selinux(8), and return a dictionary containing the configuration
data as key/value pairs. This, in turn, can be fed into the other
helper method, `config_get_policy`, to get the effective policy or
`None` if SELinux is disabled or the policy type is not configured.
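A minimal sketch of what the helpers might look like (the real
implementations may differ in detail):
```python
def parse_config(config_file):
    """Parse an SELinux configuration file into a key/value dict."""
    config = {}
    for line in config_file:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config


def config_get_policy(config):
    """Return the effective policy, or None if SELinux is disabled."""
    if config.get("SELINUX", "disabled") == "disabled":
        return None
    return config.get("SELINUXTYPE")
```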
Add a new test suite that checks the basic functionality of the
helpers above.
When using rpm-ostree compose, a Treefile[1] controls various
aspects of its behaviour. Since rpm-ostree will, at least in
the beginning, be used to post-process and commit the tree,
add a helper class to ease the creation of correct Treefiles.
The docstring of the Treefile documents in which phases
('install', 'postprocess', 'commit') each option is used,
as of rpm-ostree commit 1cf0d557ae8059e689b1fed670022727e9842288.
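A hypothetical usage sketch (the keys and the `as_tmp_file` helper
are illustrative only):
```python
import subprocess

from osbuild.util import ostree  # assumed module location

treefile = ostree.Treefile()
treefile["ref"] = "osbuild/base"  # used in the 'commit' phase

# `as_tmp_file` is an assumed helper that writes the JSON
# representation to a temporary file and yields its path.
with treefile.as_tmp_file() as path:
    subprocess.run(["rpm-ostree", "compose", "postprocess",
                    "/path/to/tree", path], check=True)
```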
Add basic checks for the ostree.Treefile helper. Some of the
tests require rpm-ostree to be installed.
[1] https://rpm-ostree.readthedocs.io/en/stable/manual/treefile/
For the sake of backwards compatibility, legacy support was enabled
by default. Flip this around, so that leaving the parameter out
means disabling it.
This is more intuitive, and will pave the way for dropping support
for the value being a bool in the future.
`osbuild-composer` always passes the argument explicitly, though
still always as a boolean.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Make sure we prune the caches after each stage-test to keep our disk
footprint small. This does considerably increase build-times since we
no longer share cached entries. However, the current CI builds simply
run out of disk-space.
Once we use separate output-directories we will be able to drop the
automatic checkpointing from the tests, and thus effectively get the
same behavior. Until then, let's prune the caches explicitly.
Recent qemu versions will warn with our current code:
qemu-system-x86_64: -accel kvm:hvf:tcg: Don't use ':' with -accel,
use -M accel=... for now instead
Since this might result in hard errors, let's just follow the advice and
use the `-M` switch.
Simple new object that should expose the root file system with the
same API as `objectstore.Object` but as read-only. This means that
the `read` call works exactly as for `Object` but `write` raises
an exception.
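Conceptually, a simplified sketch (not the actual implementation):
```python
import contextlib


class ReadOnly:
    """Expose a tree with the same API as `Object`, but read-only."""

    def __init__(self, path):
        self._path = path

    @contextlib.contextmanager
    def read(self):
        # works exactly as Object.read()
        yield self._path

    def write(self):
        raise ValueError("this object is read-only")
```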
Add tests to specifically check the read-only properties.
Using a metalink resolves to a specific mirror at runtime, and
downloads each rpm from that repository.
We want to move to using the org.osbuild.files source, which means
that we must save the url to each rpm in the source definition, which
will be determined by which mirror is used to generate the config.
If we use metalinks to generate the source configuration, the mirror
used will be arbitrary. Instead, we want to pick the best mirror
explicitly, ideally in a way that is independent of the location
depsolving happens in (which will be different from the location
the rpms are downloaded to).
We can choose explicitly by passing baseurl rather than metalink
to dnf, so move in that direction now by replacing all metalinks
by baseurls in our dnf configuration.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We now support sources and pipelines being passed to osbuild as one.
This will make the transformation from dnf to rpm stage simpler, as
the source object will then be different for each stage, so having
a shared one, as we do now, would be cumbersome.
Signed-off-by: Tom Gundersen <teg@jklm.no>
As long as this matches the build environment, this does not make
a difference, but let us not depend on it.
This will be useful when automatically transforming dnf to rpm
pipelines, as the platform_module_id is needed as input to
osbuild-composer's dnf-json tool.
Performed using this script:
```
cat "$1" | jq '(.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' | sponge "$1"
cat "$1" | jq '(.build.pipeline.stages[]? | select(.name == "org.osbuild.dnf") | .options.module_platform_id) |= . + "platform:f30"' | sponge "$1"
```
Signed-off-by: Tom Gundersen <teg@jklm.no>
This adds two more os-release tests. One contains an empty os-release
file. We verify it is correctly parsed and ends up with the default
value.
The second one is the official Arch Linux os-release file. We verify
that we correctly end up with the rolling-release name.
Rename the function to `describe_os()`. We do no actual detection,
nor verification, here. That is, the return value of this function
is in no way guaranteed to be a valid runner, so error-handling
needs to be done in the caller. Make this clear by renaming the
function.
Note: Currently, in case no runner exists for the OS, we end up with:
execv(...) failed: No such file or directory
This needs to be fixed in the future.
Reading from an `Object` via `read` already uses a context manager
to manage the read-only bind mount and to maintain a count of the
currently active readers. With this, an attempt to start a new
`write` operation while readers are active can be detected and an
exception is thrown. Since `write` did not introduce a context, the
inverted situation, i.e. reads while a write is ongoing, could not
be detected.
This commit therefore introduces a context also for `.write` so
that we can enforce the policy to have either many readers but no
writers, or just one writer and no readers.
A bind mount is also used for write (in read-write mode) to hide
the internal path of the tree.
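A minimal sketch of the accounting this enables (attribute names
are illustrative; the real code also sets up the bind mounts):
```python
import contextlib


class Object:
    def __init__(self, tree):
        self._tree = tree     # path to the tree
        self._readers = 0
        self._writer = False

    @contextlib.contextmanager
    def read(self):
        if self._writer:
            raise ValueError("write operation in progress")
        self._readers += 1
        try:
            yield self._tree  # a read-only bind mount in reality
        finally:
            self._readers -= 1

    @contextlib.contextmanager
    def write(self):
        if self._readers:
            raise ValueError("readers are active")
        self._writer = True
        try:
            yield self._tree  # a read-write bind mount in reality
        finally:
            self._writer = False
```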
Verify that `Object` can have multiple readers, but that as long as
at least one reader is active, no one can write to it. Also check
that after `Object` has left the context it is not writable anymore.
Verify the copy-on-write semantics of `objectstore.Object`, i.e.
content will only be copied at the moment a client wants to write
to `Object`. This also checks that `Object.base` works.
Modify the CI to execute the unit tests in a privileged container
because `Object.read()` works internally by bind mounting a path.
The mount operation needs at least CAP_SYS_ADMIN, and overriding
the file permissions needs CAP_DAC_OVERRIDE.
Do not automatically commit the last stage of the pipeline to the
store. The last stage is most likely not what should be cached,
because it will contain all the individual customizations and thus
very likely differs between users. Instead, the dnf or
rpm stages have a higher chance of being the same and thus are
better candidates for caching.
Technically this change is done via two big changes that build
upon features introduced in the previous commits, most notably the
copy-on-write semantics of Object and that input/output is being
done via `objectstore.Object` instead of plain paths. The first of
the two big changes is to create one new `Object` at the beginning
of `pipeline.run` and use it, in write mode via `Object.write`,
across invocations of `stage.run`, with checkpoints being created
after each stage on demand.
The very same `Object` is then used in read mode via `Object.read`
as the input tree for the Assembler. After the assembler is done,
the resulting image/tree is manually committed to the store.
The other big change is to remove the `ObjectStore.commit` call
from the `ObjectStore.new` method and thus the automatic commit
after the last stage is gone.
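In pseudo-Python, the resulting flow is roughly (names simplified):
```python
tree = object_store.new()            # one Object for the whole run

for stage in self.stages:
    path = tree.write()              # copy-on-write happens here
    stage.run(path)
    if stage.checkpoint:
        object_store.commit(tree, stage.id)  # checkpoint on demand

with tree.read() as path:            # same Object, used read-only
    self.assembler.run(path)

object_store.commit(tree, self.output_id)    # manual final commit
```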
NB: since the build tree is being retrieved in `get_buildtree`
from the store, a checkpoint for the last stage of the build
pipeline is forced for now. Future commits will refactor this and
do away with that forced commit as well.
Change osbuildtest.TestCase to always create a checkpoint at
the final tree (the last stage of the pipeline), since tests
need it to check the tree contents.
As a result of the previous commits that implement copy-on-write
semantics, `commit` can now be used to create snapshots. Whenever
an Object is committed, its tree is moved to the store and the
Object is reset, i.e. a new clean workdir is created and the old
one discarded. The moved tree is then set as the base of the reset
Object. On the next call to `write` the moved tree will be copied
over and form the basis of the Object again. Should nobody want
to write to the Object after the snapshot, i.e. the `commit`, no
copy will be made.
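Schematically (a sketch with assumed helper names):
```python
def commit(self, obj, object_id):
    # Move the tree into the store; `store_tree` fails gracefully
    # if content with the same treesum already exists.
    treesum = self.store_tree(obj, object_id)
    # Reset the Object: fresh workdir, the moved tree becomes the base.
    obj.reset(base=treesum)
```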
NB: snapshots/commits will now act as synchronization points:
if an object with the same treesum, i.e. the very same content,
already exists, the move (i.e. `store_tree`) will gracefully fail
and the existing content will be set as the base for the Object.
Since Object knows its base now, the initialization of the tree
with the content of its base can be delayed until the moment
someone actually wants to modify the tree, thus implementing
copy-on-write semantics. For this a new `write` method is added
that will initialize the base and return the writable tree. It
should be used instead of `path` whenever a client wants to
write to the tree of the Object.
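A sketch of the copy-on-write logic in `write` (helper and
attribute names are assumed):
```python
def write(self):
    """Return the writable tree, initializing it on first use."""
    if self._base and not self._initialized:
        # copy-on-write: only now is the base content copied over;
        # `copy_tree` is an assumed helper
        copy_tree(self._store.tree_path(self._base), self._tree)
        self._initialized = True
    return self._tree
```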
Adapt the pipeline and the tests to use the new `write` method
in all the appropriate places.
NB: since the intention cannot be inferred when using `path`
directly, the Object is still being initialized there.
Refactor the `ObjectStore.snapshot` method to take an `Object`, not
a plain filesystem tree, so the latter is more encapsulated from
the ObjectStore user (e.g. the pipeline) and prepares a unified
code-path for `snapshot` and `commit` in the future.
Instead of just returning the path of the temporary object that is
created in `.new()`, the actual instance of the new `Object` is
returned, which can then provide a richer interface for clients
than a plain directory path.
Make the sources options a static property of the pipeline, in
particular of each stage, rather than being passed in on `run()`.
This more closely matches the intended semantics of sources and
pipeline having similar lifetimes and being fairly coupled together.
The difference between the pipeline and the sources is that the
sources do not contribute to identifying the pipeline (they are not
part of the hash for the pipeline id), and they could be swapped
out without changing the output image (as long as they are valid).
However, a pipeline without a sources object would not be useful,
and typically the pipeline and the sources are generated, passed
around and used together.
This is different from the build environment and the secrets object,
which both are specific to either the host or the caller, unlike
the pipeline which should be universal.
This changes the `load()` function to take a `manifest`, which is
a map containing both the pipeline and the sources.
Note that the semantics of the build-env parameter remain unchanged:
It shares the sources with the rest of the pipeline. We may want to
reconsider this in future commits, as the build-env is specific to
the host, whereas the regular pipeline is not.
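Sketched, with illustrative stage and source names:
```python
manifest = {
    "pipeline": {
        "stages": [
            {"name": "org.osbuild.rpm", "options": {...}},
        ],
    },
    "sources": {
        "org.osbuild.files": {"urls": {...}},
    },
}

pipeline = osbuild.load(manifest)
```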
Signed-off-by: Tom Gundersen <teg@jklm.no>
Drop the rpm downloading and instead use the files source. This gives
us caching for free, and is the last missing step before we can
deprecate the dnf stage.
The main benefit of the rpm over the dnf stage is that we pin the package
versions rather than the repo metadata version. This will allow us to
support continuously changing repositories, as individual packages are
much less likely to change than the repos themselves, and old packages
are meant to stay around for some time, unlike the repo metadata, which
is instantly swapped out.
Depsolving is also slow on the first run, which we were always hitting as
the depsolving was always happening in a fresh container.
Based on a patch by Lars Karlitski.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This source adds support for downloaded files. The files are
indexed by their content hash, and the only option is their URL.
The main use case for this will be downloading rpms, allowing
depsolving to be done outside of osbuild, network access to be
restricted, and downloaded rpms to be reused between runs.
Each source is now passed two additional arguments, a cache directory
and an output directory. Both are in the source's namespace, and
the source is responsible for managing them. Each directory may
contain contents from previous runs, but neither is ever guaranteed
to do so.
Downloaded contents may be saved to the cache and reused between
runs, and the requested content should be written to the output dir.
If secrets are used, the source must only ever write contents to
the output that correspond to the available secrets (rather than
contents from the cache from previous runs).
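A source honouring these rules might be structured like this
(sketch only; checksum verification and the wire format are
elided):
```python
import os
import shutil
import urllib.request


def fetch(checksum, url, cache, output):
    cached = os.path.join(cache, checksum)
    if not os.path.exists(cached):
        # download into the cache so later runs can reuse it
        urllib.request.urlretrieve(url, cached)
    # expose only the requested content via the output directory
    shutil.copy(cached, os.path.join(output, checksum))
```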
Each stage is passed an additional argument, a sources directory.
The directory is read-only, and contains a subdirectory named after
each used source, which will contain the requested contents when
the `Get()` call returns (if the source uses this functionality).
Based on a patch by Lars Karlitski.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Check the very basic operations of the object store, i.e. creating
multiple trees with different object_ids works and results in the
correct number of objects and references.
Add a stage test to check that the new kopts stage is creating the
target file /etc/kernel/cmdline with the right content. Since the
tree diff currently seems to lack support for content-hashing new
files, we work around this by first creating an empty
/etc/kernel/cmdline file and then getting a content diff with the
desired options set.
The zipl stage is a fairly simple stage that just creates a file
in /etc called zipl.conf with a single configurable option, which
is called `timeout`. Check that the file gets properly created with
the desired hash and verify that setting the timeout works.
Explain the concept and reason behind the grub2 core as well as the
details behind the selection of the core modules that get included.
Also elaborate a bit on the MBR gap. For more details about this see
https://en.wikipedia.org/wiki/GNU_GRUB#Version_2_(GRUB_2)
NB: This commit also changes the order of the grub modules, which in
turn changes the layout of the core.img and thus the hash value used
in the test; adapt those values to reflect the changed core.img.
Pipelines encode which source content they need in the form of
repository metadata checksums (or rpm checksums). In addition, they
encode where they fetch that source content from in the form of URLs.
This is overly specific and doesn't have to be in the pipeline's hash:
the checksum is enough to specify an image.
In practice, this precluded using alternative ways of getting at source
packages, such as local mirrors, which could speed up development.
Introduce a new osbuild API: sources. With it, a stage can query for a
way to fetch source content based on checksums.
The first such source is `org.osbuild.dnf`, which returns repository
configuration for a metadata checksum. Note that the dnf stage continues
to verify that the content it received matches the checksum it expects.
Sources are implemented as programs, living in a `sources` directory.
They are run on the host (i.e., uncontained) right now. Each source gets
passed options, which are taken from a new command line argument to
osbuild, and an array of checksums for which to return content.
This API is only available to stages right now.
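Schematically, a source program might look like this (the actual
wire format is defined by the sources API; this is illustrative
only):
```python
#!/usr/bin/python3
import json
import sys


def main():
    request = json.load(sys.stdin)
    options = request["options"]
    checksums = request["checksums"]

    # map each checksum to whatever the stage needs to fetch the
    # content, e.g. repository configuration for org.osbuild.dnf
    reply = {checksum: options for checksum in checksums}
    json.dump(reply, sys.stdout)


if __name__ == "__main__":
    main()
```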