This replaces Packages() and BuildPackages() by returning a map of
package sets, the semantics of which are up to the distro to define.
They are meant to be depsolved and the results passed back to
Manifest() as a map with the same keys.
No functional change.
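A rough sketch of the shape this implies (the type names here, like
PackageSets and PackageSpec, are illustrative assumptions rather than the
exact signatures):

package distro

// Sketch only: illustrative types, not the real definitions.
type PackageSet struct {
	Include []string
	Exclude []string
}

type PackageSpec struct {
	Name    string
	Version string
}

type Manifest []byte

// An image type hands out named package sets; the caller depsolves each
// set and passes the results back to Manifest() under the same keys.
type ImageType interface {
	PackageSets() map[string]PackageSet
	Manifest(depsolved map[string][]PackageSpec) (Manifest, error)
}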
Signed-off-by: Tom Gundersen <teg@jklm.no>
Note that this doesn't actually test for the ostree fields; I'm not sure
if that's possible with this test framework. But it does make sure that
a test compose won't try to fetch the URL.
For now this is simply used to resolve the parent commit, in case
one is not provided. In the future, new image types will use it to
actually pull content from the repository.
This extends the weldr API, so that future work does not have to
modify it.
The logic we now implement for the ostree commit image types is: if
the URL is provided but the parent commit is not, the parent commit is
taken to be the current HEAD of the ostree repo at the given URL, with
the given (or default) ref.
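For illustration, the resolution could look roughly like this, assuming
the repo serves the standard ostree HTTP layout where refs/heads/<ref>
contains the commit checksum (an assumption about the mechanism, not a
quote of the actual code):

package ostree

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// resolveParent returns the commit the given ref currently points to in
// the repo at url. Sketch only.
func resolveParent(url, ref string) (string, error) {
	resp, err := http.Get(strings.TrimSuffix(url, "/") + "/refs/heads/" + ref)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("fetching ref %q: %s", ref, resp.Status)
	}
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}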
This only provides a small optional convenience, but we will
soon introduce image types where the URL of the repository is
required.
This commit still needs testing.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Rather than setting this automagically, expose it to the caller. For
now the only caller we have simply passes it back in, so this is a
noop.
In follow-up commits this will be used to resolve the parent commit.
This is tested by verifying that the generated manifests do not
change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
There is some confusion surrounding the format of the source TOML that
can be sent to the server. The format it accepts doesn't match the
output from composer-cli, which includes the source id in [], e.g.
[k8s]
name = "kubernetes packages"
...
This patch changes the parsing to allow the id to be set as 'id = "k8s"'
or passed as a map in [k8s]. If the id is passed in the body, it takes
precedence over the map name.
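For illustration, both of these forms are now accepted (the values are
made up):

id = "k8s"
name = "kubernetes packages"
type = "yum-baseurl"
url = "https://example.com/repos/kubernetes/"

or

[k8s]
name = "kubernetes packages"
type = "yum-baseurl"
url = "https://example.com/repos/kubernetes/"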
System repos cannot be overridden by users, so return an error if they
try to push a source with the same name/id as a system source.
Resolves: rhbz#1915359
testjobqueue did not implement the JobQueue interface correctly (noted
in its package comment), making it impossible to write tests for
JobQueue itself.
Replace its use everywhere with fsjobqueue operating on a temporary
directory.
I thought rand in Go was auto-seeded, but I was wrong; see [1].
This commit adds seed initialization.
[1]: https://golang.org/pkg/math/rand/#Seed
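The initialization itself is just this (where exactly it lives is an
implementation detail):

package main

import (
	"math/rand"
	"time"
)

func main() {
	// math/rand is deterministic unless explicitly seeded; see [1].
	rand.Seed(time.Now().UnixNano())
}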
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Imagine this situation: You have a RHEL system booted from an image produced
by osbuild-composer. On this system, you want to use osbuild-composer to
create another image of RHEL.
However, there's currently something funny with partitions:
All RHEL images built by osbuild-composer contain a root xfs partition. The
interesting bit is that they all share the same xfs partition UUID. This might
sound like a good thing for reproducibility but it has a quirk.
The issue appears when osbuild runs the qemu assembler: it needs to mount all
partitions of the future image to copy the OS tree into it.
Imagine that osbuild-composer is running on a system booted from an image
produced by osbuild-composer. This means that its root xfs partition has
this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
When osbuild-composer builds an image on this system, it runs osbuild that
runs the qemu assembler at some point. As I said previously, it will mount
all partitions of the future image. That means that it will also try to
mount the root xfs partition with this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
Do you remember this one? Yeah, it's the same one as before. However, the xfs
kernel driver doesn't like that. It contains a global table[1] of all xfs
partitions that forbids mounting two xfs partitions with the same uuid.
I mean... uuids are meant to be unique, right?
This commit changes the way we build RHEL 8.4 images: Each one now has a
unique uuid. It's now literally a unique universally unique identifier. haha
[1]: a349e4c659/fs/xfs/xfs_mount.c (L51)
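A sketch of the idea, assuming github.com/google/uuid (how the value ends
up in the partition table options is omitted):

package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	// Generate a fresh filesystem uuid per image instead of hardcoding
	// one that every image shares.
	rootFsUUID := uuid.New().String()
	fmt.Println("root xfs uuid:", rootFsUUID)
}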
Most of the worker API is now untyped, but keep Enqueue() typed to
ensure the job objects match the names in the queue. This means we
must add a version of Enqueue() for each job type we support.
Similarly to the recent changes to Dequeue(), let the caller unmarshal the
returned JSON. This allows us to pass the result on without being able
to unmarshal it.
to unmarshal it.
In follow-up patches, we will pass results of jobs to dependent jobs,
but the worker API does not know about the different job types, nor how
to unmarshal them.
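A rough sketch of the pattern (type and method names are illustrative,
not the exact worker API):

package worker

import (
	"encoding/json"

	"github.com/google/uuid"
)

// Minimal, untyped queue the server talks to. Sketch only.
type queue interface {
	Enqueue(jobType string, args interface{}) (uuid.UUID, error)
	JobResult(id uuid.UUID) (json.RawMessage, error)
}

type Server struct {
	q queue
}

type OSBuildJob struct {
	Manifest json.RawMessage `json:"manifest"`
}

// EnqueueOSBuild is one of the typed Enqueue() variants: it ties the job
// type string to its argument type so callers cannot mix them up.
func (s *Server) EnqueueOSBuild(job *OSBuildJob) (uuid.UUID, error) {
	return s.q.Enqueue("osbuild", job)
}

// JobResult hands the raw JSON back to the caller, which knows which
// concrete result type to unmarshal it into.
func (s *Server) JobResult(id uuid.UUID) (json.RawMessage, error) {
	return s.q.JobResult(id)
}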
The worker server was heavily tied to OSBuildJob(Result). Untie it so
that it can deal with different job types in the future.
This necessitates a change in the jobqueue: Dequeue() now returns the
job type, as well as job arguments as json.RawMessage. This is so that
the server can wait on multiple job types with different argument
types.
The weldr, composer, and koji APIs continue to use only "osbuild" jobs.
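The updated jobqueue interface might look roughly like this (a sketch;
the real signature may carry extra parameters):

package jobqueue

import (
	"context"
	"encoding/json"

	"github.com/google/uuid"
)

// Dequeue reports which of the requested job types it picked and returns
// the arguments as raw JSON, so one server can wait on several job types
// with different argument types. Sketch only.
type JobQueue interface {
	Enqueue(jobType string, args interface{}) (uuid.UUID, error)
	Dequeue(ctx context.Context, jobTypes []string) (id uuid.UUID, jobType string, args json.RawMessage, err error)
}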
Workers reported status via an `osbuild.Result`, which only includes
osbuild output. Make it report OSBuildJobResult instead, which was meant
to be used for this purpose and is already used as the result type in
the jobqueue.
While at it, add any errors produced by targets into this struct, as
well as an overall success flag.
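The result struct then carries roughly this information (field names here
are an assumption based on the description, not the exact definition):

package worker

import "encoding/json"

// Sketch of the job result reported by workers.
type OSBuildJobResult struct {
	Success       bool            `json:"success"`        // overall status of the job
	OSBuildOutput json.RawMessage `json:"osbuild_output"` // what osbuild.Result used to carry on its own
	TargetErrors  []string        `json:"target_errors"`  // errors produced by the upload targets
}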
Note that this breaks older workers returning the result of an osbuild
job to a new composer. I think this is fine in this case, for two
reasons:
1. We don't support running different versions of the worker and
composer in the weldr API, and remote workers aren't widely used yet.
2. Both osbuild.Result and worker.OSBuildJobResult have a top-level
`Success` boolean. Thus, logs are lost in such cases, but the overall
status of the compose is not.
Add "image_name" and "stream_optimized" fields to the osbuild job as
replacement for the local target options. The former signifies the name
of the uploaded artifact and whether an artifact should be uploaded at
all (only weldr API). The latter will be deprecated at some point, when
osbuild itself can make streamoptimized vmdk images.
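In terms of the job payload, the addition is roughly this (a sketch
showing only the fields discussed here):

package worker

import "encoding/json"

type OSBuildJob struct {
	Manifest json.RawMessage `json:"manifest"`
	// ImageName is the name of the artifact to keep; empty means no
	// artifact is reported back (only the weldr API sets this).
	ImageName string `json:"image_name,omitempty"`
	// StreamOptimized asks for the vmdk to be converted; to be dropped
	// once osbuild can produce streamOptimized images itself.
	StreamOptimized bool `json:"stream_optimized,omitempty"`
}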
This change separates what have always been two distinct concepts:
artifacts that are reported back to the composer node (in practice
always running on the same machine), and upload targets to clouds and
such. Separating them makes it easier to add job types that only allow
one upload target while keeping artifacts.
Keep the local target around, so that jobs that are scheduled can still
be run after an upgrade.
ComposeState is only used by the weldr API.
Drop the JSON marshaller and unmarshaller, because ComposeState is not
used in a JSON-exported field anymore.
This state is specific to weldr. Previous commits removed it from the
other APIs, because they use different values.
Move the conversion into the weldr API.
In the same way `osbuild.Manifest` is the input to the osbuild API,
`osbuild.Result` is the output. Move it to the `osbuild` package where
it belongs.
This is not a functional change.
Signed-off-by: Tom Gundersen <teg@jklm.no>
vCenter requires images to be uploaded as vmdk StreamOptimized. Lorax
always produced images in this format, so we should make sure to do the
same for our VMware images.
Allow LocalTarget to request that the images produced by osbuild be
converted to streamOptimized before saving in composer, and hook the
weldr API up to enable this option for vmdk images.
Ideally this should simply be an option in osbuild, but that would
require some more work, which we will not manage in time for RHEL8.3.
Therefore do this minimal fix.
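Assuming the conversion is done with qemu-img (the tool is an assumption,
not spelled out above), the fix amounts to something like:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// convertToStreamOptimized rewrites a vmdk into the streamOptimized
// subformat that vCenter expects. Sketch only.
func convertToStreamOptimized(in, out string) error {
	cmd := exec.Command("qemu-img", "convert", "-O", "vmdk",
		"-o", "subformat=streamOptimized", in, out)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := convertToStreamOptimized("image.vmdk", "image-stream.vmdk"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}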
Note that this means the images produced by our manifests (including in
our image-test test cases) are not in the format that the weldr API
returns, so the tests we run on them would also, for now, need to
convert them before uploading to vCenter.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Change the translation from our internal structs to the structs used for
weldr serialization to drop account details. These must obviously be
passed in to configure an upload, but exposing them in the logs may be
surprising.
There is no notion of user accounts in the weldr API, and the state
should not be considered private. However, this is likely to take people
by surprise, so let us guard the secrets entrusted to us.
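An illustration of the shape of the change (struct and field names are
made up; the point is that the response type simply has no fields for the
credentials):

package weldr

// Internal settings, including the credentials needed to upload.
type awsUploadSettings struct {
	Region          string
	Bucket          string
	AccessKeyID     string
	SecretAccessKey string
}

// What gets serialized in weldr responses: the same settings minus the
// secrets.
type awsUploadResponse struct {
	Region string `json:"region"`
	Bucket string `json:"bucket"`
}

func toResponse(s awsUploadSettings) awsUploadResponse {
	// Deliberately drop AccessKeyID and SecretAccessKey so they never
	// show up in API responses or logs.
	return awsUploadResponse{Region: s.Region, Bucket: s.Bucket}
}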
Fixes #907.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Improve the message returned by osbuild-composer when a user asks for
logs of a compose that is still running.
Signed-off-by: Major Hayden <major@redhat.com>
The API was crashing if the freeze request was called on a non-existent
blueprint. This changes it to return an empty string, matching
lorax-composer's behavior (since the output is TOML, it shouldn't return
JSON).
This changes the response to match lorax-composer's behavior. If any of
the blueprints in the list passed to /blueprints/depsolve/... have an
error, that error should be appended to the error list, and the blueprint
included in the blueprints list with an empty dependencies section.
Previously, it returned a 400 error with a single error message if it hit
any depsolve problems, skipping any other blueprints and returning the
wrong response.
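Roughly, the corrected behaviour is this (a sketch with made-up type and
helper names):

package weldr

type depsolveEntry struct {
	Blueprint    string   `json:"blueprint"`
	Dependencies []string `json:"dependencies"`
}

type responseError struct {
	ID  string `json:"id"`
	Msg string `json:"msg"`
}

// depsolveBlueprints collects per-blueprint errors instead of aborting
// the whole request with a 400 on the first failure.
func depsolveBlueprints(names []string, depsolve func(string) ([]string, error)) ([]depsolveEntry, []responseError) {
	var blueprints []depsolveEntry
	var errors []responseError
	for _, name := range names {
		deps, err := depsolve(name)
		if err != nil {
			// Append the error and still include the blueprint, with
			// an empty dependencies section.
			errors = append(errors, responseError{ID: "BlueprintsError", Msg: err.Error()})
			blueprints = append(blueprints, depsolveEntry{Blueprint: name, Dependencies: []string{}})
			continue
		}
		blueprints = append(blueprints, depsolveEntry{Blueprint: name, Dependencies: deps})
	}
	return blueprints, errors
}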
This also adjusts the tests to account for the change.
Fixes #890
By default, Go's tar archiver uses the USTAR header format. Unfortunately,
this format doesn't support sub-second resolution for ModTime. Go solves this
by *rounding* the time. Sometimes, this creates an archive containing a file
with a modtime from the future. When such an archive is untarred by GNU tar,
the following message is produced:
tar: bf548dfd-0a90-40e6-bbf2-dcdd82fcbb4e.json: time stamp 2020-07-13
13:34:31 is 0.356223173 s in the future
We have two options here:
1) Use the GNU header format, which supports sub-second resolution.
Unfortunately, it seems that not all tar archivers support this format
(e.g. 7-zip).
2) Truncate the date instead of rounding it.
I went with option 2.
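The fix boils down to truncating the modification time before writing the
tar header; a minimal illustration:

package main

import (
	"archive/tar"
	"os"
	"time"
)

func main() {
	f, err := os.Create("archive.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	defer tw.Close()

	// USTAR only has second resolution; truncate instead of letting the
	// writer round the timestamp up into the future.
	hdr := &tar.Header{
		Name:    "compose.json",
		Mode:    0644,
		Size:    0,
		ModTime: time.Now().Truncate(time.Second),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
}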
Also, this commit adds a test to check that the header is not from the
future. Without this fix, the test actually fails; I verified this manually.
Fixes #854
Rather than getting a set of base packages from the ImageType, and then
appending the requested packages from the blueprint, pass the blueprint
into the new Packages() function, and return the full set of packages to
be depsolved.
This also allows us to append packages based on other customizations,
and we use that to append chrony when the timezone is set. This matches
the behavior anaconda had, and there was a TODO item to do this, which
had been overlooked.
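Conceptually, the new function does something like this (a simplified
sketch; the real Packages() also merges the image type's base and
excluded package sets):

package distro

// Sketch: the blueprint's packages and its customization-driven packages
// are resolved in one place instead of being appended by the caller.
type Blueprint struct {
	Packages []string
	Timezone *string
}

func Packages(base []string, bp Blueprint) []string {
	pkgs := append([]string{}, base...)
	pkgs = append(pkgs, bp.Packages...)
	if bp.Timezone != nil {
		// Match anaconda's behaviour: setting a timezone pulls in chrony.
		pkgs = append(pkgs, "chrony")
	}
	return pkgs
}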
Fixes #787.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The sources weldr API already supports this, so hook it up to be
represented on disk and in our internal state tracking too.
This does not yet hook it up to be respected by osbuild, which
currently takes it to be unconditionally set to true.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The blueprint.GetPackages() method was used to depsolve blueprints prior
to the dnf-to-rpm switch. However, it got dropped during the switch. This
commit makes weldr use it again.
The nice effect of this change is that we can drop the getPkgNameGlob
function and have only one function for getting package name-versions
from a blueprint.
Also, blueprint.GetPackages() works better with the * version. Previously,
we had issues with composer depsolving bash of version * to both the x86_64
and i686 versions of the bash package. GetPackages() converts the package to
the name-version of just bash, which dnf-json correctly depsolves to just one
architecture. In contrast, the previous method converted bash to the
name-version bash-*.*.*, which confused dnf-json.
Note that a conversion to bash-* would also be wrong, because it would cause
dnf-json to install all packages matching the "bash-*" glob.
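The conversion GetPackages() performs is roughly this (sketch):

package blueprint

type Package struct {
	Name    string
	Version string
}

// nameVersion builds the spec handed to dnf-json: a "*" (or empty)
// version yields just the package name, so dnf-json picks one best
// candidate instead of glob-matching something like "bash-*.*.*".
func nameVersion(p Package) string {
	if p.Version == "" || p.Version == "*" {
		return p.Name
	}
	return p.Name + "-" + p.Version
}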
This is needed for lorax parity. When multiple blueprints are being frozen in
toml mode, the API returns an error. This is the same behaviour as in the
/blueprints/info route.
Fixes #667
It was passing it through to the non-system delete function
and not returning an error. This checks for system repos first and
returns a 400, SystemSource error response if it is in the system list.
This changes store.DeleteSource to DeleteSourceByName for v0 use and
DeleteSourceByID for v1 usage.
It includes a new client function DeleteSourceV1, adds a new test, and
converts the tests for the previous Source V1 API commits to use
DeleteSourceV1.
This commit changes store.GetAllSources to distinguish between getting
the sources by the Name field or by the ID (the key to the map), using
GetAllSourcesByName and ...ByID.
SourceConfig.RepoConfig() now takes an id parameter because SourceConfig
only stores the Name, not the ID.
In weldr I split the sourceInfoHandler into two separate functions for v0
and v1 behavior, with the core of the old function refactored as
getSourceConfigs and used by both of them.
This also adds new structs for SourceResponseV0 and SourceResponseV1,
as well as helper functions for converting to/from store.SourceConfig.
This commit changes the store.PushSource function to take the key as
well as the SourceConfig so that it can be used for v0 or v1.
It adds helper functions for decoding the toml/json into a new
SourceConfig interface type which lets the core source/new code be
shared between the versions.
It also adds tests for the new API behavior.
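A sketch of the new PushSource shape (field names are approximate):

package store

// SourceConfig fields are approximate; the point is that the map key is
// now passed in separately instead of being derived from the config.
type SourceConfig struct {
	Name     string
	Type     string
	URL      string
	CheckGPG bool
	CheckSSL bool
}

type Store struct {
	sources map[string]SourceConfig
}

// PushSource takes the key explicitly, so v0 callers can key by name and
// v1 callers by id.
func (s *Store) PushSource(key string, source SourceConfig) {
	if s.sources == nil {
		s.sources = make(map[string]SourceConfig)
	}
	s.sources[key] = source
}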
This is the first patch in a series to add APIv1 support to the
/projects/source routes. The change involves using the store.Sources key
in a different way (as an id instead of as a duplicate of the struct's
Name field) but does not actually involve changing the Sources json in
the store.
In the V0 API the name of the source was used as the identifier, and
there was no short id. In V1 the source is identified by the API using
a short id, and the Name is just a field in the struct to describe the
source. This will become more obvious with the /projects/source/info
response.
This commit changes the following:
Changes store.ListSources to ListSourcesByName and explicitly pulls the
name from the source struct instead of the key. v0 will use this
function call.
Adds store.ListSourcesById which returns the source key as the
identifier. This is used by v1.
Adds a new weldr.SourcesListV1 response type, even though it is exactly
the same as the V0 response in this specific case. I thought it would be
better to have one called V1 than to reuse the V0 struct and possibly
confuse people.
The /projects/source/list API now lists the sources by name for v0 and id for v1.
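A simplified sketch of the two accessors:

package store

type SourceConfig struct {
	Name string
}

type Store struct {
	sources map[string]SourceConfig
}

// ListSourcesByName returns the Name field of each source; this is what
// the v0 API uses, where the name is the identifier.
func (s *Store) ListSourcesByName() []string {
	names := []string{}
	for _, source := range s.sources {
		names = append(names, source.Name)
	}
	return names
}

// ListSourcesById returns the map keys, which v1 treats as short ids.
func (s *Store) ListSourcesById() []string {
	ids := []string{}
	for id := range s.sources {
		ids = append(ids, id)
	}
	return ids
}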
A test has been added. You will notice it still uses v0 to push and
delete the sources. These will be updated when the new versions of the
functions are added in subsequent commits.
Workers don't report status for the osbuild run and the upload targets
separately. Before the move to the jobqueue, we explicitly set the
status of all targets when a compose finished. When I removed that,
the image status broke.
Set the status from what's returned by api.getComposeStatus() to restore
the original behavior.
Fixes #702
Rather than Manifest() returning an osbuild.Manifest object, introduce a
new distro.Manifest object which represents it as an opaque, JSON
serializable object. This new type has the following properties:
1) its serialization is compatible with the input to osbuild,
2) any valid osbuild input can be deserialized into it, and
3) marshalling and unmarshaling to and from JSON is lossless.
This means that even as we change the subset of valid osbuild manifests
that we support, we can still load any previous state from disk, and it
will continue to work just as before, even though we can no longer
deserialize it into our internal notion of osbuild.Manifest.
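One way to get those three properties is to keep the manifest as raw
bytes; a sketch (the actual type may differ in details):

package distro

import "encoding/json"

// Manifest is an opaque, JSON-serializable osbuild manifest. Storing the
// raw bytes means any valid osbuild input round-trips losslessly,
// independent of which subset our internal types can describe.
type Manifest []byte

func (m Manifest) MarshalJSON() ([]byte, error) {
	return json.RawMessage(m).MarshalJSON()
}

func (m *Manifest) UnmarshalJSON(data []byte) error {
	*m = append((*m)[:0], data...)
	return nil
}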
This fixes the underlying problem of which #685 was a symptom.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Any valid ImageBuild must contain a Manifest, so don't allow this to be
nil, simplifying the code a bit in the process.
Signed-off-by: Tom Gundersen <teg@jklm.no>