Mark '$(TEST_MANIFESTS_GEN)' target as .PHONY.
Currently the `test-data` make target does not always trigger
regeneration of the manifests used for testing various osbuild parts,
even though it is marked as .PHONY. The reason is that its dependency
'$(TEST_MANIFESTS_GEN)' is not marked as .PHONY, and therefore it is
not run if the files already exist in the repository.
For this reason, CI currently runs `make --always-make test-data` to
force regeneration of the manifests.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Instead of having build pipelines nested within the pipeline they are
the build pipeline for, the nested structure is converted into a
flat list of pipelines. As a result the recursion is gone and all
the pipelines and trees are built one after the other. This is now
possible since floating objects are kept alive by the store itself
and all trees that are being built are transparently accessible via them.
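A minimal sketch of that flattening, assuming each pipeline holds at
most one nested build pipeline in a `build` attribute (names
illustrative, not the actual osbuild code):

```python
def flatten(pipeline):
    # Walk the build-pipeline chain iteratively and return a flat
    # list, innermost build pipeline first.
    pipelines = []
    while pipeline is not None:
        pipelines.append(pipeline)
        pipeline = pipeline.build
    return list(reversed(pipelines))

# The flat list can then be built one pipeline after the other,
# with no recursion involved.
```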
The immediate result dictionary changed accordingly. To keep the
JSON output of osbuild the same, the result is now routed through
a format-specific converter.
Additionally, the v1 format module gained a function to retrieve
the global tree_id and output_id. With the new models those global
ids will go away eventually and thus need to go through the
format-specific code.
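The shape of that helper might look like the following; the function
name and its exact signature are assumptions, not confirmed by the
diff:

```python
# Hypothetical helper in formats.v1: recover the global ids that the
# v1 format still exposes from the manifest's pipeline.
def get_ids(manifest):
    pipeline = manifest.pipeline
    return pipeline.tree_id, pipeline.output_id
```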
This is a step towards generic pipelines, i.e. replacing assemblers
with pipelines, thus creating an acyclic graph of pipelines. There
the pipeline id will be what is now the tree_id. For now though the
generic id is either the output_id or the tree_id.
The objectstore always tracked all objects that were returned from
it, but it did so via weak references, which means it did not keep
the objects alive itself. With the introduction of identifiers for
temporary objects (floating objects), it makes sense to keep all
created objects alive so that they can in fact be used.
A "floating" object is a temporary object that is identified, i.e.
has an `id` and is thus also locked, but is not committed to the
store.
The `contains` and `get` methods of ObjectStore will now return such
floating objects as if they were committed ones, providing transparent
access to objects that have been built during the execution of osbuild.
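A minimal sketch of that transparent lookup, with plain dictionaries
standing in for the real on-disk store:

```python
class ObjectStore:
    def __init__(self):
        self._committed = {}  # id -> object, committed to the store
        self._floating = {}   # id -> object, temporary but kept alive

    def contains(self, object_id):
        return object_id in self._committed or object_id in self._floating

    def get(self, object_id):
        # Committed objects take precedence; otherwise fall back to a
        # floating object, so callers cannot tell the difference.
        if object_id in self._committed:
            return self._committed[object_id]
        return self._floating.get(object_id)  # None if unknown
```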
The current pipeline code used to set a base for a tree object
that might or might not exist. Depending on its existence, it would
either use that object or reset its base. Avoid doing that, because
it prohibits us from properly interpreting the `id` of an object:
the `id` is also set when `base_id` is assigned, but since that
base might not exist, the `id` would not actually describe the
contents of the tree associated with the object.
Therefore we use `ObjectStore.get` and return the result if it
is not None, or a fresh Object otherwise.
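As a sketch, with `new` assumed to create an empty Object:

```python
def tree_object_for(object_store, tree_id):
    # Transparent lookup: return a committed or floating object if
    # one exists for this id, otherwise a fresh, empty one.
    obj = object_store.get(tree_id)
    if obj is None:
        obj = object_store.new()
    return obj
```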
Every time a stage has been successfully built, the contents of
the tree now correspond to that stage and can thus be identified
via the id of the stage.
When the tree is being written to, i.e. on consecutive attempts
of stage builds, the `id` of the tree object will automatically
be reset.
This adds a new `id` property to the ObjectStore.Object that is
meant to reflect the identifier of the Stage that built its
contents. This will help to transparently access objects that have
been built but not committed to the store.
Setting the `base_id` of an object will also set its `id`. When
the object is then modified via write(), the `id` will be set to
None, since now the content and the id are out of sync. In the
same way, resetting an object will reset its `id` to None.
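A condensed sketch of that behaviour (the real Object carries far
more state):

```python
class Object:
    def __init__(self):
        self._id = None
        self._base_id = None

    @property
    def id(self):
        return self._id

    @property
    def base_id(self):
        return self._base_id

    @base_id.setter
    def base_id(self, value):
        # The contents now equal the base, so the id is known.
        self._base_id = value
        self._id = value

    def write(self):
        # The contents are about to diverge from what the id names.
        self._id = None

    def reset(self):
        self._base_id = None
        self._id = None
```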
The object in question will be cleaned up when the store goes out of
context, which happens soon after the manual cleanup anyway, so
the eager cleanup does not gain us much.
More importantly, it removes the special case for the assembler
output object, since trees built by the stages are already not
cleaned up manually.
Instead of passing the store directory to Pipeline.run, pass an
already initialized ObjectStore object. This binds the lifetime
of the store and its (temporary) objects to the run of osbuild,
not the run of the pipeline.
This prepares for re-using the store with multiple (non-nested)
pipelines.
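A sketch of the resulting call site, assuming the store supports the
context-manager protocol mentioned above (module path and variable
names illustrative):

```python
from osbuild.objectstore import ObjectStore  # module path assumed

def run_all(store_dir, pipelines):
    # One store per osbuild run; its (temporary) objects stay alive
    # across the individual pipeline runs.
    with ObjectStore(store_dir) as store:
        for pipeline in pipelines:
            pipeline.run(store)  # previously: pipeline.run(store_dir)
```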
Instead of a pipeline, describe now takes a Manifest instance.
The reason is that a manifest fully describes the build, which
includes the sources. Now that the describe function takes the
manifest, the sources can be included as well.
Adapt the tests to reflect that change.
The 'Manifest' class represents what to build and the necessary
sources to do so. For now it is thus just a combination of the
pipeline and the sources options.
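In its simplest form that is something like:

```python
class Manifest:
    # For now just the combination of the pipeline to build and the
    # sources options needed to fetch its inputs.
    def __init__(self, pipeline, sources):
        self.pipeline = pipeline
        self.sources = sources
```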
The description of a pipeline is format dependent and thus needs
to be located in the format-specific module.
Temporarily remove two tests; they should be added back to a
format-specific test suite.
Instead of having the pipeline and the source options as separate
arguments, the load function now takes the full manifest, which
has those two items combined.
Instead of importing the `load` and `load_build` functions into the
osbuild namespace and using them via that, use them via the module
that provides them, i.e. the formats.v1 module.
The validation of the manifest description is eo ipso format
specific and thus belongs in the format-specific module.
Adapt all usages throughout the codebase to directly use the
version 1 specific function.
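Call sites thus change along these lines (exact signatures
illustrative):

```python
from osbuild.formats import v1

def load_manifest(description):
    v1.validate(description)     # format-specific validation
    return v1.load(description)  # parse into the internal representation
```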
Extract the code that loads a pipeline from a pipeline description,
i.e. a manifest, into a new module inside a new 'formats' package.
The idea is to have different descriptions, i.e. different formats,
for the same internal representation. This allows changing the
internal representation, i.e. data structures, but still having the
same external description.
Later a new description might be added that better matches the new
internal representation.
This was deprecated in favor of always having the source in the
manifest. Remove the command line option and the corresponding
code that would override the sources definitions.
Update the docs accordingly.
Integrate with codecov. Define a threshold of 5% to pass. Coverage
is cumulative, i.e. all the tests send their coverage to codecov,
which will integrate them all into a total.
This runner is used by both CentOS 8 and CentOS Stream. CentOS is a bit
unusual because, unlike RHEL, it specifies only the major number 8 in
VERSION_ID in /etc/os-release. The ID is also the same for CentOS 8 and
CentOS Stream.
This should work fine for now though:
CentOS 8 is currently based on RHEL 8.3 and CentOS Stream on a development
version of RHEL 8.4. For both RHEL 8.3 and 8.4 we use the RHEL 8.2 runner,
so it should be safe to base the CentOS 8 runner on the RHEL 8.2 one as
well.
We might need to tweak this at some point but I suggest dealing with it when
that time comes.
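For illustration, a minimal os-release parse that shows why the two
cannot be told apart (the runner naming scheme at the end is an
assumption):

```python
def parse_os_release(path="/etc/os-release"):
    """Return the key/value pairs from an os-release file."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info

# CentOS 8 and CentOS Stream both report ID=centos and VERSION_ID=8,
# so both resolve to the same runner here.
info = parse_os_release()
runner = f"org.osbuild.{info['ID']}{info['VERSION_ID']}"
```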
The "/run/osbuild" path is used as the default runpath by the
BuildRoot, which creates it on demand. The only other place
is the API (`BaseAPI`) to create the socket directories in,
but that is now also created on-demand. Additionally, the
API are only run after the build root has been set up so that
directory would already exist.
When creating the socket directory, i.e. in the case that it was
not specified directly, ensure the parent directories exist.
Make it possible to override that parent directory.
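A sketch of that on-demand creation with an overridable parent
(function name assumed):

```python
import os
import tempfile

def make_socket_dir(parent="/run/osbuild"):
    # Ensure the parent runpath exists, creating intermediate
    # directories as needed, then make a unique socket directory.
    os.makedirs(parent, exist_ok=True)
    return tempfile.mkdtemp(prefix="api-", dir=parent)
```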
We do want to turn the dependency generator off for runners,
because they are tied to a specific platform and might, if the
generator is not disabled, introduce dependencies on that platform
into the general package. A prominent example is platform-python
used by the RHEL runner.
On the other hand, we do want to pick up the dependency for the
stages and assemblers, i.e. /usr/bin/python3, because they need
to be able to run on the host, since the host provides the root
file-system for the initial build container, the build host.
Add an additional comment to the shebang mangling exception to
explain that, due to the combination of the dependency generator
and the disabling of the shebang mangler for assemblers and stages,
an additional dependency on /usr/bin/python3 will be added on RHEL,
and that this is indeed what we want.
Pin the osbuild-composer that schutzbot runs a reverse dependency test
against. This allows controlling exactly which version to test against,
and ensures that PRs against osbuild always run against the same version.
Now that osbuild-composer's CI uploads RPMs to a predictable destination
(the same one that osbuild uses), we can use that instead of rebuilding
osbuild-composer on every CI run. This should speed up the mockbuild
stage considerably.
Pin it to v24 now.
Drop setting fastestmirror, disabling weak dependencies, and removal of
modular repositories.
Try to install as closely as possible to what people do in production,
which means sticking to the defaults.
It was only used once, to retry dnf. This is not necessary, because dnf
already has retrying logic. We're also not using `retry` on any of the
other calls to dnf in this script.
Now that the repository URLs are predictable, don't use Jenkins' stash
feature to pass the repo file between stages.
Instead, simply create the repo file where it is needed, in deploy.sh.
The length of abbreviated commit ids is not predictable: it depends on
the shortest unique prefix in the repository and on git configuration.
Just use the full id, which also makes it easier to copy it from
`git log` or GitHub.
Change the repository path on S3 to a more predictable one. We really
only need the name of the project (statically "osbuild" for this
repository), the name of the distro (use the same as osbuild-composer's
API for consistency) and the commit SHA.
In particular, drop the PR number / branch name. Also don't remove the
dots from version numbers. All places we're using them in (paths and
URLs) support dots.
For example, osbuild commit xxxxxxx for fedora-33 on x86_64 will result
in this URL:
osbuild/fedora-33/x86_64/xxxxxxx