The `build_stages` method short-circuits and returns early if any of
the stages fail to build, returning None for both the tree and the
build tree; therefore both of those can be cleaned up immediately at
that point.
For this, add a small helper `cleanup` that calls the cleanup method
on all supplied arguments, after filtering out None values.
Delay the cleanup of the build pipeline's build tree: first check the
result, and only clean up the tree when the build did not fail,
because in the failure case both returned trees are None and trying
to clean them up would result in an exception. For the same reason,
also don't clean up `tree` in the error case.
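A minimal sketch of such a helper, assuming each passed object exposes a
`cleanup()` method (illustrative only, not necessarily the exact osbuild code):

    def cleanup(*objects):
        # call cleanup() on every argument, skipping None values
        for obj in objects:
            if obj is not None:
                obj.cleanup()

Callers can then write `cleanup(build_tree, tree)` instead of guarding each
call individually.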
This extends our model of how we do releases. It introduces `NEWS.md`
as the authoritative source of our release-notes. It is pre-populated
with the release-notes from the previous 'v9' release, and contains a
suggestion for the upcoming 'v10'.
Furthermore, this adds `make release` as a simple checklist target that
contains instructions on how to create a new release. Note that it is a
passive make-target which has no side-effects at all. It only prints
release information.
With this in place, we can drop `RELEASE.md`, as all information is now
combined in `make release`.
The format `-X, --long=VALUE` we used so far is not a valid option-list entry,
even though it is very commonly used all over the linux man-pages. Use
the supported format of `-X VALUE, --long=VALUE`, which will format
correctly in the man-page and html outputs.
For reference, these formats are valid in RST option-lists:
  -a                    Short option.
  -c arg                Short option with arg.
  --long                Long option.
  -2, --two             Aliases on a single line.
  -f FILE, --file=FILE  Aliases with arguments.
  /V                    VMS/DOS-style option.
Add a 10s connection timeout for each file transfer. Also add an
increasing max timeout for a given file transfer (30s to 180s).
Also increase the retries to 10 and the concurrent threads to 15.
Hopefully this should make things a bit more stable in the face of
bad mirrors. We were encountering mirrors that would hang either
on connect or download at such slow speeds that they might as well
have stalled (~1kB in 45s).
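To make the numbers concrete, here is a rough sketch of such a retry loop,
assuming the transfers go through curl; the helper name and the work list
are purely illustrative:

    import concurrent.futures
    import subprocess

    RETRIES = 10    # attempts per file
    THREADS = 15    # concurrent transfers

    # hypothetical work list of (url, destination) pairs
    downloads = [("https://example.com/foo.rpm", "/tmp/foo.rpm")]

    def fetch(url, path):
        for attempt in range(RETRIES):
            # grow the total-transfer timeout from 30s up to 180s
            max_time = min(30 * (attempt + 1), 180)
            r = subprocess.run(["curl", "--silent", "--show-error", "--fail",
                                "--connect-timeout", "10",    # 10s to connect
                                "--max-time", str(max_time),  # cap on the transfer
                                "--output", path, url],
                               check=False)
            if r.returncode == 0:
                return
        raise RuntimeError(f"download failed: {url}")

    with concurrent.futures.ThreadPoolExecutor(max_workers=THREADS) as executor:
        list(executor.map(lambda job: fetch(*job), downloads))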
Follow-up patches will provide a more long-term solution, by
allowing the same mirror selection as dnf currently uses.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Use `make man` rather than hard-coding man-page creation in the
spec-file. Furthermore, install all man-pages, not just the base one.
The commands are adjusted to account for more man-pages possibly being
added; tree-wide, the only place that needs adjusting when a new
man-page is added is the `%files` list in the spec-file.
We already indent the values in the spec-file to all start at the same
column. However, we use different indentation sizes for different
blocks of values. This makes it really confusing to read. Let's use a
consistent indentation and align everything with the main assignments in
the spec-file.
This adds another job to the CI runner, which builds and tests the
documentation. The tests are currently reduced to just verifying that
the respective man-pages are actually generated. This can be extended
in the future.
This improves `make man` in the following ways:
* The recently added `osbuild-manifest.5` man-page is now generated
as well.
* The target now honors `SRCDIR` and `BUILDDIR` variables.
* Any newly added man-page is now automatically picked up and
generated as well.
* The output directory structure now mirrors the input directory
structure.
This documents the structure of `Makefile` as well as its supported
targets. Furthermore, we add support for SRCDIR and BUILDDIR so we can
use this makefile to ultimately deploy generated documentation to our
website and more.
This commit does not convert all of the makefile to honor SRCDIR. It
merely sets up the infrastructure to support SRCDIR. Follow-ups will
improve the different targets.
If the final object (the image or artifact) already exists in the
store, short-circuit and return directly from `Pipeline.run`.
Otherwise the situation might arise that the final result is in the
store, but the tree (and build trees) are not, and thus the tree
would be built just to be thrown away when the assembler phase
detects that the final output already exists.
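A rough sketch of the early return, assuming the store can be asked whether
it already contains the final output (the `contains` check and the attribute
names are illustrative):

    class Pipeline:
        def run(self, object_store):
            # the final artifact is already in the store: nothing to do,
            # skip building the tree and the build trees entirely
            if self.output_id and object_store.contains(self.output_id):
                return True
            # ... otherwise build the stages, assemble, and commit as before ...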
Extract the code that assembles the tree into its own method, as was
previously done for the stages. This should make the new method, as
well as the `Pipeline.run` method, easier to read.
Refactor the building of stages and the build tree so that no auto
commit is done at the end of the build pipeline anymore, i.e. the
respective build tree(s) are not committed to the store unless that
was explicitly enabled via a checkpoint.
NB: `objectstore.Object`s are no longer used via a context manager,
because they are returned from the `build_stages` method to make the
code easier to use and read. Cleanup of Objects during a
KeyboardInterrupt exception (Ctrl-C) is handled by using the
ObjectStore with a context manager, which on exit of the context
will clean up all objects. Due to a bug in python[1] this is indeed
more robust than using `with object_store.new() as tree`, because
that is translated[2] to something like:
    1: mgr = (EXPR)
    2: exit = type(mgr).__exit__
    3: value = type(mgr).__enter__(mgr)
 -> 4: # NOTE: KeyboardInterrupt here will "leak" value
    5: try:
    6:     [...]
    7: finally:
    8:     if exc:
    9:         exit(mgr, None, None, None)
Which can leave the tree initialized but not cleaned up if the
KeyboardInterrupt happens exactly at line 4.
[1] https://bugs.python.org/issue29988
[2] https://www.python.org/dev/peps/pep-0343/
Simple new object that should expose the root file system with the
same API as `objectstore.Object` but as read-only. This means that
the `read` call works exactly as for `Object` but `write` raises
an exception.
Add tests to specifically check the read-only properties.
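A minimal sketch of what such a read-only object could look like, with the
`Object` API reduced here to a path-yielding `read` context manager:

    import contextlib

    class ReadOnlyObject:
        def __init__(self, path):
            self._path = path

        @contextlib.contextmanager
        def read(self):
            # reading works exactly as on a regular Object
            yield self._path

        def write(self):
            # the tree is read-only; any attempt to write is an error
            raise ValueError("read-only object")

A test can then simply assert that `write()` raises while `read()` still
yields the tree.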
Keep track of all created objects via weak references. Add support
to use ObjectStore as a context manager and ensure that all objects
are cleaned up when the context is exited.
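Sketched with illustrative names (the `Object` stub stands in for the
existing tree object):

    import weakref

    class Object:
        # stand-in for the existing tree object
        def __init__(self, store):
            self.store = store
        def cleanup(self):
            pass

    class ObjectStore:
        def __init__(self, store):
            self.store = store
            self._objects = weakref.WeakSet()   # every object handed out

        def new(self):
            obj = Object(self.store)
            self._objects.add(obj)
            return obj

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # clean up everything still alive when the context exits,
            # including on KeyboardInterrupt
            for obj in self._objects:
                obj.cleanup()
            return False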
Instead of creating temporary directories at the root of the store,
create them in a sub-directory called 'tmp'. This should make it
easy to clean up left-over (temporary) dirs in case of crashes.
Additionally, it has the nice side effect that it is possible to
check that there are no objects that are still in-flight, i.e. not
cleaned-up.
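Roughly (the store path and names are just examples):

    import os
    import tempfile

    store = "/var/lib/osbuild/store"
    tmp = os.path.join(store, "tmp")
    os.makedirs(tmp, exist_ok=True)

    # every in-flight object directory now lives under <store>/tmp, so
    # leftovers after a crash are easy to spot and remove
    workdir = tempfile.mkdtemp(dir=tmp)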
Turn `ObjectStore.new` into a plain method: since `Object` itself can
be used as a context manager, it is now returned directly instead of
being internally wrapped in a `with` statement and then yielded. Thus
nothing changes for callers of the method, and the behavior of
`with object_store.new() as x` is exactly the same.
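The shape of the change, sketched with illustrative code (the `Object` stub
stands in for the existing tree object; both method variants are shown side
by side):

    import contextlib

    class Object:
        # stand-in for the existing tree object, itself a context manager
        def __init__(self, store):
            self.store = store
        def __enter__(self):
            return self
        def __exit__(self, *exc):
            return False

    # before: `new` was a generator wrapped as a context manager
    @contextlib.contextmanager
    def new(self):
        with Object(self.store) as obj:
            yield obj

    # after: `Object` is itself a context manager, so return it directly;
    # `with object_store.new() as obj:` behaves exactly the same
    def new(self):
        return Object(self.store)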
This rips out the `PIPELINE` section from osbuild(1) and instead adds a
new osbuild-manifest(5) man-page. This new man-page contains a rather
formal definition of the manifest, with a separate section for each
part of a manifest.
The man-page is exhaustive, in that it describes all available options.
However, it does *NOT* document the available stages, runners, and
assemblers. It does document the available (and supported) sources.
This should serve as an example of how to document available stages
and assemblers in the future. Note that it is not clear whether we
should document these right now. Once we decide to support the
available stages for a reasonable time-frame, we can start
documenting them as well.
This adds a non-binding, documentation-only json-schema to
schemas/osbuild1.json which describes the format of the pipeline
manifest taken as input to osbuild. It is currently there for
documentation purposes, but is definitely open to being used for
actual runtime verification.
The schema does not describe the options of assemblers, stages, or
sources. These are left as arbitrary json-objects and need separate
validation, if required. Note that most stages already contain an
embedded schema for their parameters.
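Should we ever want runtime verification, it could look roughly like this,
using the third-party `jsonschema` package (the manifest path is just an
example):

    import json
    import jsonschema   # third-party validator, not currently a dependency

    with open("schemas/osbuild1.json") as f:
        schema = json.load(f)

    with open("pipeline.json") as f:    # any manifest passed to osbuild
        manifest = json.load(f)

    # raises jsonschema.exceptions.ValidationError on mismatch
    jsonschema.validate(instance=manifest, schema=schema)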
This extends the EXAMPLES section with more examples, reduces their
complexity, and also restructures the layout to make it nicer to read
in the resulting TROFF file.
We will mention this example in our man-page, so make sure it actually
works. This imports all sources into the pipeline definition and
adjusts the syntax to match what we expect.
It is not used anywhere and might be confusing to newcomers, because it
only contains the (private) osbuild library, without the command line
tool and runners/stages/assemblers.
Prior to this patch, `make rpm` would produce rpms that have the latest
tag as their versions. This was confusing, because one could never know
which contents are in a locally built rpm.
Change this so that the version is always based on the commit hash of
HEAD. This is easy: the golang macros read a `%commit` macro when it
exists and do this for us.
To simplify further, only define `%_topdir` as ./rpmbuild and
otherwise use rpmbuild's known directory structure (SPECS, SOURCES,
RPMS, ...), to make it easier to find build results.
Build the specfile, tarball, source rpms, and rpms with `make rpm`,
without separate sub-targets. We can reintroduce them if they're needed
somewhere.
The rule for the tag name is to have a 'v' prefix; update the
release instructions to reflect that. Also remove the Packit
header, because we don't use Packit at all anymore.
The MANIFEST.in file is used to include `LICENSE` in our python source
distribution. However, we never use this. In our spec-files we use the
rpm `%license` macro which copies from the source checkout, anyway.
Let's drop this file so we can forget about it.
Only commit checkpoints to the object store if the run of the
stage or assembler was successful. Otherwise we commit an empty,
corrupted, or old tree to the store. Any subsequent run might
then pick up that bogus tree as a starting point.
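Schematically, the commit now only happens after a successful run (the
names are illustrative, not the exact osbuild code):

    def run_and_checkpoint(stage, tree, object_store):
        # run one stage; commit its tree only if the run succeeded
        returncode = stage.run(tree)
        if returncode == 0 and stage.checkpoint:
            # never commit an empty, corrupted, or stale tree
            object_store.commit(tree, stage.id)
        return returncode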
We already have a makefile for maintenance work. Merge the
`bump-version.sh` script into `Makefile` as a new target, so we avoid
scattering scripts around the repository.
While at it, this commit also uses immediate assignment for
shell-evaluated make variables. This ensures the shell commands are
only executed once, and the values are then guaranteed to stay the
same for the remainder of the execution.
When marking stages for checkpointing, let us make use of the local set
data structure we already allocate, rather than searching through it
linearly.
Apart from the negligible performance improvement, it makes the code
quite a lot simpler.
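Roughly, the marking then becomes a set-membership test (names are
illustrative):

    def mark_checkpoints(pipeline, checkpoints):
        ids = set(checkpoints)      # the set we already allocate
        for stage in pipeline.stages:
            if stage.id in ids:     # O(1) membership test instead of a scan
                stage.checkpoint = True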
We generally surround function definitions with newlines. Make sure
this is also true for local function definitions.
Signed-off-by: David Rheinsberg <david.rheinsberg@gmail.com>
Even on ubuntu we can build rpm-based pipelines without bootstrapping
via fedora 27. Drop the build env from the travis config and from our
samples directory.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Using a metalink resolves to a specific mirror at runtime, and
downloads each rpm from that repository.
We want to move to using the org.osbuild.files source, which means
that we must save the url of each rpm in the source definition; that
url is determined by which mirror is used to generate the config.
If we use metalinks to generate the source configuration, the mirror
used will be arbitrary. Instead, we want to pick the best mirror
explicitly, ideally in a way that is independent of the location
depsolving happens in (which will be different from the location
the rpms are downloaded to).
We can choose explicitly by passing baseurl rather than metalink
to dnf, so move in that direction now by replacing all metalinks
by baseurls in our dnf configuration.
Signed-off-by: Tom Gundersen <teg@jklm.no>
We now support sources and pipelines being passed to osbuild as one.
This will make the transformation from dnf to rpm stage simpler, as
the source object will then be different for each stage; having
a shared one, as we do now, would be cumbersome.
Signed-off-by: Tom Gundersen <teg@jklm.no>