Check for an existing checkpoint in `Pipeline.build_stages` by trying
to get the object, instead of just checking for its existence. Later,
if no checkpoint was found, i.e. `tree` is `None`, create a new object.
This avoids mixing new object creation with object access.
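A minimal sketch of the resulting pattern, with `get` and `new`
standing in for the actual `ObjectStore` methods:
```
def get_or_create_tree(store, pipeline_id):
    # hypothetical names: `get` returns the object or None,
    # `new` creates a fresh object under the given id
    tree = store.get(pipeline_id)
    if not tree:
        tree = store.new(pipeline_id)
    return tree
```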
Instead of iterating over the stages via indices, iterate over the
stages directly. To be able to do so, collect the stages that need
to be built in a deque and then drain it from the other end.
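A small sketch of that pattern, with `needs_build` and `build` as
stand-ins for the actual checks:
```
from collections import deque

def build_pending(stages, needs_build, build):
    # collect the stages that still need building ...
    todo = deque(stage for stage in stages if needs_build(stage))
    # ... and drain the deque from the other end, preserving order
    while todo:
        build(todo.popleft())
```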
Also invoke `monitor.finish` when the pipeline failed to build.
There is no reason not to invoke it in that case; it will also
allow us to print some information in the monitor when a build fails.
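Sketched as control flow (names and signatures here are illustrative,
not the actual API):
```
def run(pipeline, store, monitor):
    results = pipeline.build_stages(store, monitor)
    # `finish` is now invoked unconditionally, so the monitor can
    # also report information about a failed build
    monitor.finish(results)
    return results
```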
Since neither a build tree nor the actual tree is returned from
`build_stages`, the short-circuit code that checks whether the tree
is already present in the store can be moved before the build-tree
retrieval. As a result, the short-circuit check in `Pipeline.run`
is now redundant. It was there to make sure that if we have the
tree associated with a pipeline, its build pipeline would also
not be needed. With the short-circuit now happening before the
access of the build pipeline in `build_stages`, this is ensured.
In the previous data model the build pipelines were nested inside
the pipeline and thus we would recurse in `build_stages`. The
tree that was built was returned and potentially became the build
tree for the pipeline that invoked `build_stages`. In the new
model of a directed acyclic graph of pipelines, the build tree can
be any previously built pipeline and we just get it via the store,
which now keeps track of all previously built pipelines even if
they are not committed to it. Thus there is no need to return
the trees from `build_stages` anymore.
Adjust the short-circuit check to use `ObjectStore.contains`
instead of `ObjectStore.get`, since we do not need the object
anymore.
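Roughly (names are illustrative):
```
def build_stages(store, pipeline):
    # a yes/no answer is all the short-circuit needs, so
    # `contains` replaces `get` here; no object is retrieved
    if store.contains(pipeline.id):
        return  # tree already in the store, nothing to build
    # ... build the stages ...
```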
The pipeline data model used to have an assembler optionally
associated with the pipeline; therefore we had to return the
build tree used to to build the stages since the same build
tree also needed to be used from the assembler. In the "new"
model (first introduced in version 27), the assembler got
replaced by another "normal" pipeline. Since then, there is
no need to return the build tree anymore. Remove it.
The option forces `mkfs.fat` to ignore existing partitions on
the target device. The check is done via the corresponding device
node in sysfs, i.e. the contents of the `partition` attribute in
`/sys/dev/block/<major>:<minor>`. In certain situations this info
can be stale. Passing `-I` works around these situations.
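A sketch of how the option might be wired up (`-I` is the actual
mkfs.fat flag; the function and parameter names are illustrative):
```
import subprocess

def mkfs_fat(device, ignore_partition_table=False):
    cmd = ["mkfs.fat"]
    if ignore_partition_table:
        # `-I` makes mkfs.fat skip the (possibly stale) check of
        # the partition info exposed via sysfs
        cmd.append("-I")
    cmd.append(device)
    subprocess.run(cmd, check=True)
```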
Currently we hard-code the vpc options `subformat=fixed` and
`force_size`, which are needed to generate valid Azure images
with newer versions of qemu. But for other use cases or other
versions of qemu these options might not be wanted or valid.
Expose all the options, but with defaults corresponding to the
old behavior.
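A sketch of the resulting qemu-img invocation; the defaults mirror
the previously hard-coded behavior (function and parameter names
are illustrative):
```
import subprocess

def convert_to_vpc(source, target, subformat="fixed", force_size=True):
    opts = []
    if subformat:
        opts.append(f"subformat={subformat}")
    if force_size:
        opts.append("force_size=on")
    cmd = ["qemu-img", "convert", "-O", "vpc"]
    if opts:
        cmd += ["-o", ",".join(opts)]
    subprocess.run(cmd + [source, target], check=True)
```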
Add a unit test for the `force_size` option to check its
effect. Also add a check for the correct size to the existing
test of the default value (i.e. `force_size` being `true`).
Adjust the timer for our automated releases to trigger the workflow at
8 UTC. This corresponds to 10 am in the timezone of most of the team
and matches the reminder event in our team calendar.
Add support for the `--insecure` curl flag, which makes curl skip the
verification step when making secure connections (e.g., https://).
This allows osbuild to download files from servers that use SSL/TLS
but whose certificate cannot be validated.
This is supported for configuring repository sources in
osbuild-composer.
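Sketched as the argument list the source might assemble (the
surrounding names are illustrative; `--insecure` is the actual curl
flag):
```
def curl_command(url, output, insecure=False):
    args = ["curl", "--silent", "--show-error", "--output", output]
    if insecure:
        # skip certificate verification for TLS connections
        args.append("--insecure")
    return args + [url]
```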
Instead of serializing the `BuildResult` to a dict in `build_stages`,
we keep the object and then only serialize it in the corresponding
formatting code. This doubles down on the separation between the
internal data structures and their external representation. This
was already partially done in the v2 format, which hand-picked which
elements of the `BuildResult` it would return for each stage.
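A minimal sketch of the separation, with illustrative field names:
```
from dataclasses import dataclass

@dataclass
class BuildResult:
    name: str
    id: str
    success: bool
    output: str

def format_result_v2(result: BuildResult) -> dict:
    # only the formatting code decides which fields of the internal
    # object become part of the external representation
    return {"id": result.id,
            "success": result.success,
            "output": result.output}
```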
Remove the stage options from the `BuildResult` object. They were
only serialized in the case of version 1 and not actually used by
Composer for anything. Use of v1 manifests should be very limited
now anyway.
Add a new stage to handle OpenSCAP first boot remediation. The
`openscap-remediation.service` looks for a `/system-update` symlink
which points to an OpenSCAP config file. This stage creates both
the necessary configuration and the `/system-update` symlink.
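A rough sketch of what the stage does (paths and parameter names are
illustrative):
```
import os

def create_first_boot_setup(tree, config, data):
    # write the OpenSCAP remediation configuration into the tree
    config_path = os.path.join(tree, config.lstrip("/"))
    with open(config_path, "w", encoding="utf8") as f:
        f.write(data)
    # `/system-update` is the symlink the remediation service (and
    # systemd's offline-update logic) looks for on boot
    os.symlink(config, os.path.join(tree, "system-update"))
```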
The `architecture` served two purposes: 1) the selection of the loader
and 2) the selection of the platform. Instead of inferring the latter
from `architecture`, it is now explicitly specified as a property of
the `bios` value, which in turn was transformed into an object.
The loader is still inferred, but since `bios` is an object now, there
is the option of adding an explicit `loader` option to it.
All this should make it more transparent what is happening and brings
the stage more in line with the normal `grub2` stage.
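Schematically, the options change along these lines (the values are
illustrative, not the exact schema):
```
# before: the platform was inferred from the architecture
old_options = {"architecture": "x86_64"}

# after: `bios` is an object and names the platform explicitly;
# an explicit `loader` property could be added later on
new_options = {"bios": {"platform": "i386-pc"}}
```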
Can be used to create partition tables with a GPT layout via `sgdisk(8)`.
The schema of `partitions` is intentionally kept identical to the one
in `org.osbuild.sfdisk`.
Add corresponding tests.
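A sketch of the kind of invocations such a stage performs (the exact
schema mirrors `org.osbuild.sfdisk`; the names here are illustrative):
```
import subprocess

def write_gpt(device, partitions):
    subprocess.run(["sgdisk", "-Z", device], check=True)  # zap old tables
    for num, part in enumerate(partitions, start=1):
        # start sector and relative size, e.g. "1:2048:+512M"
        cmd = ["sgdisk", "-n", f"{num}:{part['start']}:+{part['size']}"]
        if "type" in part:
            cmd += ["-t", f"{num}:{part['type']}"]  # partition type GUID
        subprocess.run(cmd + [device], check=True)
```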
Show the stage name (if one is set) when failing the stage in the
validator. This closes #1007; example output:
```
$ python3 -m osbuild supakeen-os.json
supakeen-os.json has errors:
pipelines[0].stages[0]
could not find schema information for 'org.osbuild.rpmb'
.pipelines[0].stages[0].inputs.packages:
could not find schema information for 'org.osbuild.filesz'
```
The generic way of checking if an object is in the cache does not apply
to ostree, as the internal structure of a repo is quite specific. Thus
we need to use the ostree executable to ask it to explore its repo for
us.
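A sketch of such a check, delegating to the `ostree` executable:
```
import subprocess

def contains_commit(repo, checksum):
    # `ostree show` exits non-zero when the commit is not present
    res = subprocess.run(["ostree", "show", f"--repo={repo}", checksum],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL,
                         check=False)
    return res.returncode == 0
```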
Before, the download method was defined in the inheriting class of each
program, with the same kind of workflow redefined every time. This
contribution aims at making the workflow clearer and at generalizing
what can live in the `SourceService` class.
The download workflow is as follows:
Setup -> Filter -> Prepare -> Download
The setup step mainly sets up the caches where the downloaded data will
be stored in the end.
The filter step is used to discard some of the items to download based
on some criterion. By default, it is used to verify, via the item's
checksum, whether an item is already in the cache.
The prepare step goes over each element and gives the overriding class
the ability to alter each item before downloading it. This is mainly
used for the curl command, which for RHEL must generate the
subscriptions.
Then the download step calls `fetch_one` for each item. Here the
download can be performed sequentially or in parallel, depending on the
number of workers selected.
Introduce a new class member `content_type` that specifies what type of
items the source will store in the cache. Use that to generalize the
setup step, which is shared across all sources.
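A minimal sketch of the generalized service, covering the four steps
and the shared `content_type`-based setup (all names besides
`fetch_one` and `content_type` are illustrative):
```
import os
from concurrent.futures import ThreadPoolExecutor

class SourceService:
    content_type = None  # what the source stores, set by subclasses
    max_workers = 1      # 1 = sequential, >1 = parallel downloads

    # Setup: create the cache directory the data will end up in.
    def setup(self, cache):
        self.cache = os.path.join(cache, self.content_type)
        os.makedirs(self.cache, exist_ok=True)

    # Filter: skip items already in the cache, keyed by checksum.
    def filter(self, items):
        return [i for i in items
                if not os.path.exists(os.path.join(self.cache, i.checksum))]

    # Prepare: subclasses may alter items before the download,
    # e.g. the curl source generating RHEL subscription data.
    def prepare(self, items):
        return items

    # Download: fetch every remaining item, possibly in parallel.
    def download(self, items):
        with ThreadPoolExecutor(max_workers=self.max_workers) as pool:
            list(pool.map(self.fetch_one, items))

    def fetch_one(self, item):
        raise NotImplementedError  # provided by each concrete source

    def run(self, cache, items):
        self.setup(cache)
        self.download(self.prepare(self.filter(items)))
```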
This warning was globally disabled in commit c124ab2, due to dynamic
attributes of the `LoopInfo` class. This false positive is now
silenced locally. Some actual positives have meanwhile made it into
the code base, but have been fixed via previous commits, so we can now
enable W0201/attribute-defined-outside-init again.
Silence pylint warning W0201 (attribute-defined-outside-init) in
`set_status`; it sets dynamic attributes on the `LoopInfo` class,
which pylint does not recognize.
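The local silencing looks roughly like this (the attribute names are
illustrative):
```
class LoopInfo:
    def set_status(self, info):
        # pylint: disable=attribute-defined-outside-init
        # these attributes are created dynamically, which pylint does
        # not recognize; the disable only applies inside this method
        self.status = info  # illustrative dynamic attribute
```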