Using a metalink or mirrorlist along with the package paths and
checksums allows the packages to be reliably downloaded even when
not all mirrors are in sync. The download is retried with a new
mirror until it succeeds or all mirrors have been tried.
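A minimal sketch of that retry loop, using hypothetical `download`
and `verify_checksum` helpers rather than the actual source
implementation:

```python
import hashlib
import urllib.request


def verify_checksum(data: bytes, checksum: str) -> bool:
    """Compare the digest of `data` against an "algorithm:hexdigest" checksum."""
    algorithm, _, digest = checksum.partition(":")
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest() == digest


def download(mirrors, path, checksum):
    """Try one mirror after another until the file is fetched and verified."""
    last_error = None
    for mirror in mirrors:
        url = f"{mirror.rstrip('/')}/{path}"
        try:
            with urllib.request.urlopen(url) as r:
                data = r.read()
        except OSError as error:
            last_error = error          # remember the failure, try the next mirror
            continue
        if verify_checksum(data, checksum):
            return data
        last_error = ValueError(f"checksum mismatch for {url}")
    raise RuntimeError(f"all mirrors failed for {path}") from last_error
```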
Now that metadata is stored and can be accessed via `Object.meta`,
read it from the built or stored objects when serializing the
result in the `format.output` functions.
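For illustration only, a `format.output`-style function might read
the metadata back roughly like this; the store lookup and the result
keys are assumptions:

```python
def output(manifest, store):
    """Sketch: serialize the result, attaching metadata from the stored objects."""
    result = {"success": True, "pipelines": []}
    for pipeline in manifest:
        obj = store.get(pipeline.id)        # hypothetical lookup of the stored object
        if obj is None:
            continue                        # pipeline was not built or not stored
        result["pipelines"].append({
            "id": pipeline.id,
            "metadata": obj.meta,           # metadata reported by the stages
        })
    return result
```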
Use the new `Index.detect_runner` method, which gives us the best
available runner for a requested one. To do so, a new `pipeline.Runner`
class is introduced that stores the `meta.RunnerInfo` for the specific
runner together with the originally requested name (sketched below).
In the manifest loading and describing functions of the formats, use
`Index.detect_runner` to get the `RunnerInfo` for a requested runner
and then wrap it in a `pipeline.Runner` object, which is then passed
to the `Manifest.add_pipeline` method.
See also commit "meta: ability to auto-detect runner".
Adjust all tests.
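For illustration, a rough sketch of what the `pipeline.Runner`
wrapper and its use in a format loader could look like; the
constructor and `add_pipeline` signatures shown are assumptions:

```python
class Runner:
    """Pairs the detected runner information with the originally requested name."""

    def __init__(self, info, name=None):
        self.info = info                   # the meta.RunnerInfo for the best match
        self.name = name or info.name      # the runner name that was asked for

    @property
    def path(self):
        return self.info.path              # executable of the runner to actually use


# In a format's manifest loading code (sketch):
#   info = index.detect_runner(requested)     # best available runner
#   runner = Runner(info, requested)
#   manifest.add_pipeline(name, runner, build)
```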
Instead of serializing the `BuildResult` to a dict in `build_stages`,
we keep the object and then only serialize it in the corresponding
formatting code. This doubles down on the separation between the
internal data structures and the external representation of them.
This was already partially done in the v2 format, which hand-picked
the elements of the `BuildResult` it would return for each stage.
When formatting the result, switch the default to success,
but then properly propagate the status of the build pipeline.
This should ensure that if there are no pipeline results but
a failed build pipeline, the overall status will be 'failed'.
On the other hand, if no pipelines were built at all, neither tree
nor build, the overall status will be 'success'.
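A hedged sketch of that logic; the shape of the result objects and
their field names are assumptions:

```python
def overall_status(pipeline_results, build_result=None):
    """Default to success, then let any failure flip the overall status."""
    success = True
    if build_result is not None:
        success = success and build_result.get("success", True)
    for result in pipeline_results:
        success = success and result.get("success", True)
    # no results at all therefore means "success"; a failed build
    # pipeline with no further results means "failed"
    return "success" if success else "failed"
```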
When building a version 1 manifest, the assembler would always be
exported, even when not requested via the `--export` command line
option. This was done for backwards compatibility, so as not to break
tools relying on that behavior. The problem is that supporting this
requires a completely different code path and the behavior can also
be confusing. Thus remove the implicit export and only ever export
what was explicitly requested by the caller.
If a pipeline has an assembler and that assembler failed, the
overall status of the build also needs to be marked as failed.
This used to be the case, but a bug got introduced when the
format abstraction code was added.
The `org.osbuild.files` source provides files, but might in the
future not be the only one that does. Therefore rename it to
match the internal tool that is being used to fetch the files, as is
already done for most other osbuild modules that target a specific
tool. The format v1 loader is adapted to make this change transparent
for users of the v1 format, so the change is backwards compatible.
Change the MPP depsolve preprocessor so that for format v2 based
manifests the `org.osbuild.curl` source is used. Also rename the
corresponding source test. Adapt the format v2 mod test to use
the curl source.
All sources fetch various types of `items`, the specific nature
of which depends on the source type, but they are all identifiable
by an opaque identifier. In order for osbuild to check that all
the inputs a stage needs are indeed contained in the manifest
description, osbuild must learn which ids are fetched by which
source. This is done by standardizing the common "items" part,
i.e. the "id" -> "options for that id" mapping that is common to
all sources.
For version 1 of the format, extract the item information for the
files and ostree sources from the respective options.
Adapt the sources (files, ostree) so that they use the new items
information, but also fall back to the old style; the latter is
needed since the source tests still use the SourceServer.
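For illustration, a source description using the standardized
`items` mapping; all identifiers and option keys below are invented:

```python
# Every key under "items" is an opaque identifier; the value holds the
# per-item options that only the source itself needs to understand.
files_source = {
    "items": {
        "sha256:aaaa...": {"url": "https://example.com/pkgs/foo-1.0.rpm"},
        "sha256:bbbb...": {"url": "https://example.com/pkgs/bar-2.3.rpm"},
    }
}

# osbuild can now collect the ids fetched by a source without knowing
# anything else about the source type:
fetched_ids = set(files_source["items"])
```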
Convert the `org.osbuild.ostree` stage to use inputs instead of
sources. In the format (version 1) loading code, convert the
stage to use an input based on the existing stage options.
Instead of manually constructing and appending the input for
stages (here the stages that replace the assembler), use the
new `Stage.add_input` method.
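A hypothetical use of the new helper, wrapped in an illustrative
function; the exact `add_input` parameters are assumptions:

```python
def add_tree_stage(pipeline, info, options, tree_pipeline):
    """Sketch: add a stage and attach the built tree as its input."""
    stage = pipeline.add_stage(info, options)
    stage.add_input(
        "tree",                      # name of the input
        "org.osbuild.tree",          # input type
        "org.osbuild.pipeline",      # origin: content comes from another pipeline
        {tree_pipeline.id: {}},      # references: identifier -> per-id options
    )
    return stage
```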
Currently all options for inputs are totally opaque to osbuild
itself. This is neat from a separation of concerns point of view,
but it has one major downside: osbuild cannot verify the integrity
of the pipeline graph, i.e. whether the pipelines or sources that
inputs refer to do indeed exist. Therefore introduce two generic
fields for inputs: `origin` and `references`. The former can either be
a source or a pipeline. The latter is an array of identifiers or
a dictionary where the keys are the identifiers and the values
are additional options for that id. The identifiers then refer
to either resources obtained via a source or a pipeline that has
already been built.
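Two illustrative input descriptions with the new generic fields; the
types, identifiers and option values are invented for the example:

```python
# Content produced by an already built pipeline; plain array of references:
tree_input = {
    "type": "org.osbuild.tree",
    "origin": "org.osbuild.pipeline",
    "references": ["4b9f0e..."],          # id of the pipeline whose tree is used
}

# Content fetched by a source; dictionary form with per-id options:
file_input = {
    "type": "org.osbuild.files",
    "origin": "org.osbuild.source",
    "references": {
        "sha256:cccc...": {"metadata": {"kind": "rpm"}},
    },
}
```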
Add a new `add_source` method that will add an individual `Source`
to a `Manifest` given its `ModuleInfo` and options. The dictionary
of source options in the manifest is replaced with a list of such
`Sources`, and `add_source` will append to it. Adapt the version 1
format code to use `add_source` and reconstruct the source options
from the list of sources on `describe`.
Remove the `sources_options` constructor parameter for `Manifest`
and adapt the code base accordingly.
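A small sketch of the new method, assuming a simple `Source` wrapper
class; the real layout may differ:

```python
class Source:
    """Sketch: one source, combining its module info with its options."""

    def __init__(self, info, options):
        self.info = info            # the meta.ModuleInfo of the source module
        self.options = options


class Manifest:
    def __init__(self):
        self.sources = []           # a list of Source objects replaces the options dict

    def add_source(self, info, options):
        """Create a Source from its ModuleInfo and options and register it."""
        source = Source(info, options)
        self.sources.append(source)
        return source
```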
Now that `Pipelines` have no assemblers anymore and thus only one
identifier, i.e. the one corresponding to the tree (`tree_id`),
the `id` and `tree_id` are now the same. Therefore replace the
usage of `tree_id` with `id` and drop the former. Add some extra
documentation including some caveats about the uniqueness of `id`.
Convert the assembler phase of the main pipeline in the old format
into a new Pipeline that has the assembler as a stage, where the
input of that stage is the main pipeline. This removes the need for
"assemblers" as a special concept and thus the corresponding
code in `Pipeline` is removed. The new assembler pipeline is marked
as exported, but the pipeline that builds the tree no longer is.
Adapt the `describe` and `output` functions of the `v1` format to
handle the assembler pipeline. Also change the tests accordingly.
NB: The id reported for the assembler via `--inspect` and the result
will change as a result of this, since the assembler stage is now
the first and only stage of a new pipeline and thus has no base
anymore.
Every pipeline that gets added to the `Manifest` now needs to have
a unique name by which it can be identified. The version 1 format
loader is changed so that the main pipeline that builds the tree
is always called `tree`. The build pipeline for it will be called
`build` and further recursive build pipelines `build-build`, where
the number of repetitions of `build` corresponds to their level of
nesting. An assembler, if it exists, will be added as `assembler`.
The `Manifest.__getitem__` helper is changed so it will first try
to access a pipeline via its name and then fall back to an id based
search. NB: in the degenerate case of multiple pipelines that have
exactly the same `id`, i.e. the same stages with the same options and
the same build pipeline, only the first one will be returned; but
only the first one will be built as well, so in practice this is
not a problem.
The formatter uses this helper to get the tree pipeline via its
name wherever it is needed.
This also adds an `__iter__` method to `Manifest` to ease iterating
over just the pipeline values, a la `for pipeline in manifest`.
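A hedged sketch of the lookup and iteration helpers, assuming the
pipelines are kept in a name-indexed ordered dict:

```python
import collections


class Manifest:
    def __init__(self):
        self.pipelines = collections.OrderedDict()   # unique name -> Pipeline

    def __getitem__(self, name_or_id):
        # first try the unique name, then fall back to an id based search
        pipeline = self.pipelines.get(name_or_id)
        if pipeline:
            return pipeline
        for pipeline in self.pipelines.values():
            if pipeline.id == name_or_id:
                return pipeline       # first match wins in the degenerate case
        raise KeyError(name_or_id)

    def __iter__(self):
        # iterate over just the pipeline values: `for pipeline in manifest`
        return iter(self.pipelines.values())
```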
Add a new `export` property to the `Pipeline` object that indicates
whether the result, i.e. the tree after the pipeline has been
built, should be exported, i.e. copied to the output directory.
In the current format (v1), the main pipeline gets marked as such
by the corresponding loader.
Instead of passing all pre-created pipelines to the Manifest
constructor, add an `add_pipeline` method, analogous to the
existing `Pipeline.add_{stage, assembler}` methods. Convert
the format loading code to use that and remove the constructor
parameter.
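A sketch of the new method; the `Pipeline` constructor arguments and
the list-based storage are assumptions:

```python
class Manifest:
    def __init__(self):
        self.pipelines = []          # no longer filled via a constructor parameter

    def add_pipeline(self, runner=None, build=None):
        """Create a new Pipeline, append it and return it, like Pipeline.add_stage."""
        pipeline = Pipeline(runner=runner, build=build)   # existing Pipeline class
        self.pipelines.append(pipeline)
        return pipeline
```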
When the build fails, not all pipelines might have been built and
those pipelines will be missing from the results. Currently the
code assumes that all pipelines will have a result and this will
crash when trying to find an id for a pipeline that did not get
built.
Instead of using the `Assemblers` class to represent assemblers,
use the `Stage` class: the `Pipeline.add_assembler` method will
now instantiate a `Stage` instead of an `Assembler`. The tree
that the pipeline built is converted to an Input (while loading
the manifest description in `format/v1.py`) and all existing
assemblers are converted to use that input as the tree input.
The assembler run test is removed as the Assembler class itself
is not used (i.e. run) anymore.
Add a new `info` property that holds the `meta.ModuleInfo`
for the stage. This gives each instance of a stage access to
meta (or class) information about it, i.e. its schema and docs but,
more importantly, also its name and the path to the executable.
Therefore the `name` property is converted into a transient property
which accesses the `name` member of `info`.
Change the `formats/v1` load mechanism to carry a new `index`
argument which is used to load the `ModuleInfo` for each stage.
Adapt all tests to load the info as well when creating stages.
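A brief sketch of how `info` and the now transient `name` property
relate; the constructor is trimmed down to just these two attributes:

```python
class Stage:
    def __init__(self, info, options):
        self.info = info             # meta.ModuleInfo: schema, docs, name, path
        self.options = options

    @property
    def name(self):
        # the name is no longer stored on the stage, it is derived from the info
        return self.info.name
```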
Instead of carrying around the `sources_options` parameter
through the recursive `load` and `load_build` calls, set
the sources options after loading has completed by iterating
through all stages of all pipelines.
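A sketch of that post-load pass; the helper name `load_pipeline` and
the `stage.sources` attribute are assumptions:

```python
def load(description, index):
    """Load the manifest, then set the source options on all stages at the end."""
    manifest = load_pipeline(description, index)     # hypothetical top-level load
    sources_options = description.get("sources", {})

    # no need to thread sources_options through every recursive load_build call:
    for pipeline in manifest.pipelines:
        for stage in pipeline.stages:
            stage.sources = sources_options
    return manifest
```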
All tests and invocations of `add_stage` actually pass a valid
options dictionary. Therefore move the `options` arg before
the `sources` arg and remove the default value (`None`).
Instead of having build pipelines nested within the pipeline it is
the build pipeline for, the nested structure is transferred into a
flat list of pipelines. As a result the recursion is gone and all
the pipelines and trees are built one after the other. This is now
possible since floating objects are kept alive by the store itself
and all trees that are being built are transparently available via
them. The intermediate result dictionary changed accordingly. To keep
the JSON output of osbuild the same, the result is now routed through
a format specific converter.
Additionally, the v1 format module gained a function to retrieve
the global tree_id and output_id. With the new models those global
ids will go away eventually and thus need to go through the format
specific code.
Instead of a pipeline, describe now takes a Manifest instance.
The reason is that a manifest fully describes the build, which
includes the sources. Now that the describe function takes the
manifest, the sources can be included as well.
Adapt the tests to reflect that change.
The 'Manifest' class represents what to build and the necessary
sources to do so. For now it is thus just a combination of the
pipeline and the source options.
The description of a pipeline is format dependent and thus needs
to be located in the format specific module.
Temporarily remove two tests; they should be added back to a format
specific test suite.
Instead of having the pipeline and the source option as separate
arguments, the load function now takes the full manifest, which
has those two items combined.
The validation of the manifest description is by its very nature
format specific and thus belongs in the format specific module.
Adapt all usages throughout the codebase to directly use the
version 1 specific function.
Extract the code that loads a pipeline from a pipeline description,
i.e. a manifest, into a new module inside a new 'formats' package.
The idea is to have different descriptions, i.e. different formats,
for the same internal representation. This allows changing the
internal representation, i.e. the data structures, while keeping the
same external description.
Later a new description might be added that better matches the new
internal representation.