This command-line tool uploads a file to S3, as a proof of concept.
All options are mandatory. Credentials are only read from the
command line, not from the environment or configuration files.
The next step is to add support for importing from S3 to EC2. The
images we currently produce cannot be imported as-is, so this
requires more research.
To try this out: create an S3 bucket, get your credentials and
call the tool, passing any value as `key`. Note that if the key
already exists, it will be overwritten.
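A minimal sketch of what such a tool might look like using the AWS SDK for Go; the flag names and overall shape here are illustrative, not the actual implementation:

```go
package main

import (
	"flag"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	accessKeyID := flag.String("access-key-id", "", "AWS access key ID")
	secretAccessKey := flag.String("secret-access-key", "", "AWS secret access key")
	region := flag.String("region", "", "AWS region")
	bucket := flag.String("bucket", "", "target S3 bucket")
	key := flag.String("key", "", "object key inside the bucket")
	fileName := flag.String("file", "", "file to upload")
	flag.Parse()

	f, err := os.Open(*fileName)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Credentials come only from the command line, never the environment.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:      region,
		Credentials: credentials.NewStaticCredentials(*accessKeyID, *secretAccessKey, ""),
	}))

	uploader := s3manager.NewUploader(sess)
	result, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: bucket,
		Key:    key, // an existing key is silently overwritten
		Body:   f,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded to %s", result.Location)
}
```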
Signed-off-by: Tom Gundersen <teg@jklm.no>
It uses the Azure SDK to connect to Azure storage, creates a container
there, and uploads the image. Unfortunately, the API for page blobs does
not include a thread pool for uploads, so I implemented one myself. The
performance can be tweaked using the upload chunk size and the number of
parallel threads.
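A sketch of the worker-pool approach, with a hypothetical `uploadPage` standing in for the SDK's page-blob call; the chunk size and thread count are the two tuning knobs mentioned above:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

const (
	chunkSize = 4 * 1024 * 1024 // upload chunk size (page blobs want 512-byte alignment)
	nThreads  = 8               // number of parallel upload threads
)

// uploadPage is a hypothetical stand-in for the SDK call that writes
// one page range of the blob.
func uploadPage(offset int64, data []byte) error {
	fmt.Printf("uploading %d bytes at offset %d\n", len(data), offset)
	return nil
}

type chunk struct {
	offset int64
	data   []byte
}

func main() {
	f, err := os.Open("image.vhd")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	chunks := make(chan chunk)
	var wg sync.WaitGroup

	// Start a fixed pool of upload workers.
	for i := 0; i < nThreads; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := range chunks {
				if err := uploadPage(c.offset, c.data); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}()
	}

	// Read the image sequentially and hand chunks to the pool.
	var offset int64
	for {
		buf := make([]byte, chunkSize)
		n, err := f.Read(buf)
		if n > 0 {
			chunks <- chunk{offset, buf[:n]}
			offset += int64(n)
		}
		if err != nil {
			break // io.EOF or a real error ends the loop
		}
	}
	close(chunks)
	wg.Wait()
}
```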
The package is prepared to be refactored into a common module within
the internals package as soon as we agree on how these common packages
for image upload should be organized.
Add azure-blob-storage rpm package as a dependency
Using the `golang(package)` syntax didn't work for me; using the
package name explicitly works.
lorax-composer recently introduced API version 1. This commit adds
very basic support for it. The implementation deduplicates code for
routes with the same behaviour as much as possible. All differences in
the v1 API are marked as TODOs for now and will be implemented in
follow-up PRs.
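For illustration, the deduplication can be as simple as registering one handler under both version prefixes; the router, route, and handler names below are assumptions, not the actual code:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/julienschmidt/httprouter"
)

// statusHandler behaves the same in v0 and v1, so one handler serves
// both versions.
func statusHandler(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
	fmt.Fprintln(w, `{"backend":"osbuild-composer","api":"1"}`)
}

func main() {
	router := httprouter.New()
	// Routes with identical behaviour are registered once per version
	// prefix; routes that differ in v1 get their own handlers (the
	// TODOs mentioned above).
	for _, version := range []string{"v0", "v1"} {
		router.GET("/api/"+version+"/status", statusHandler)
	}
	http.ListenAndServe(":4000", router)
}
```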
Make distros export their repository information and use it in the
weldr API. This means that repos are only specified once and that the
API returns the right packages when we allow different distros.
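A rough sketch of the idea, with illustrative names:

```go
package distro

// RepoConfig describes one repository as a distro exports it
// (field names are illustrative).
type RepoConfig struct {
	Id       string // machine-readable identifier
	Name     string // human-readable name
	BaseURL  string
	Metalink string
}

// A Distro exports its repository information, so the weldr API asks
// the selected distro instead of carrying its own repo list.
type Distro interface {
	Repositories() []RepoConfig
}
```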
Split the error case (no sources specified) into its own function, so
that we can use `source/info/:sources` (note the colon) to get the list
of sources without the leading `/`. This gets rid of two special cases
which made the previous implementation hard to parse.
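A sketch of the resulting route layout, with illustrative paths and handlers:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/julienschmidt/httprouter"
)

// The no-source error case lives on its own route, so the parameterized
// route can use ":sources" (one path segment) instead of a "*sources"
// catch-all, and ps.ByName("sources") arrives without a leading "/".
func sourceEmptyInfoHandler(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
	http.Error(w, "no sources specified", http.StatusBadRequest)
}

func sourceInfoHandler(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
	fmt.Fprintf(w, "sources requested: %s\n", ps.ByName("sources"))
}

func main() {
	router := httprouter.New()
	router.GET("/api/v0/projects/source/info", sourceEmptyInfoHandler)
	router.GET("/api/v0/projects/source/info/:sources", sourceInfoHandler)
	http.ListenAndServe(":4000", router)
}
```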
The naming is confusing: repositories have an `id` and a human readable
`name`. Weldr's sources also have a field called `name`, but
lorax-composer uses that as a way to identify repositories by their id.
Use `id` consistently here as well.
The blueprints freeze test now creates the blueprint with the package
dep-package1, which is mocked and will properly depsolve when the
blueprint is frozen. This test can no longer run externally against
lorax-composer.
This test creates a new blueprint with libsemanage. Libsemanage is
already in the mock rpmmd, so when we test the freeze route on this
blueprint, it will properly depsolve and return the package with the
depsolved version.
The blueprint freeze route returns the blueprint info, but each
package will be the package selected by depsolving. So, instead of the
version being a version number with optional wildcards, as
/blueprints/info would provide, the version is of the form
`Version-Release.Arch`.
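A small illustration of that format, with a made-up package type and values:

```go
package main

import "fmt"

// Package holds the fields picked by depsolving (illustrative).
type Package struct {
	Name, Version, Release, Arch string
}

// frozenVersion renders a depsolved package in the Version-Release.Arch
// form that the freeze route returns.
func frozenVersion(p Package) string {
	return fmt.Sprintf("%s-%s.%s", p.Version, p.Release, p.Arch)
}

func main() {
	// Prints "2.9-1.fc30.x86_64" (values are made up for illustration).
	fmt.Println(frozenVersion(Package{"libsemanage", "2.9", "1.fc30", "x86_64"}))
}
```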
These endpoints are similar in many ways, hence just one commit. Their
functionality is basically the same as in lorax, except for error
messages and weird edge cases when handling trailing slashes.
Closes #64, closes #65
Add a route to set a blueprint back to its state at a particular change.
The route `blueprints/undo/:blueprint/:commit` requires the blueprint
name and the commit hash of the change that the blueprint should be
reverted to. Also, the commit message for the change created when a
blueprint is pushed is now passed from the API to the store's
PushBlueprint function.
Add a function to get a single blueprint change given the blueprint
name and the commit hash. Update the PushBlueprint function to accept a
commit message, which will be used when adding a change.
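A hedged sketch of how these store pieces could fit together; apart from PushBlueprint, the type and method names are illustrative:

```go
package store

import "errors"

// Blueprint and Change are illustrative stand-ins for the real types.
type Blueprint struct {
	Name    string
	Version string
}

type Change struct {
	Commit    string
	Message   string
	Timestamp string
	Blueprint Blueprint
}

type Store struct {
	Blueprints       map[string]Blueprint
	BlueprintChanges map[string]map[string]Change // name -> commit -> change
}

// GetBlueprintChange returns a single change, identified by the
// blueprint name and the commit hash.
func (s *Store) GetBlueprintChange(name, commit string) (Change, error) {
	change, ok := s.BlueprintChanges[name][commit]
	if !ok {
		return Change{}, errors.New("unknown blueprint or commit")
	}
	return change, nil
}

// PushBlueprint stores a blueprint and records a change using the commit
// message passed down from the API.
func (s *Store) PushBlueprint(bp Blueprint, message string) {
	s.Blueprints[bp.Name] = bp
	// ...generate a fresh commit hash and record a Change carrying
	// `message` in s.BlueprintChanges[bp.Name].
}

// Undo sets a blueprint back to its state at `commit`; this is what the
// blueprints/undo/:blueprint/:commit route calls into.
func (s *Store) Undo(name, commit string) error {
	change, err := s.GetBlueprintChange(name, commit)
	if err != nil {
		return err
	}
	s.PushBlueprint(change.Blueprint, "Undo blueprint "+name+" to commit "+commit)
	return nil
}
```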
The package list is generated on each request for a package, so there
is no longer a need to generate the package list in main or to store
these packages in the API object.
The package list is now generated from the base repo and the
user-defined repos. This allows users to add packages not found in the
base repo to their blueprints.
Add a function to convert between a user source and a repo that can be
passed to dnf-json. This is necessary because user-defined sources have
a slightly different format than dnf repos.
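A sketch of such a conversion, assuming illustrative type names; the point is just that both sides carry the same information in slightly different fields:

```go
package main

import "fmt"

// SourceConfig is a user-defined weldr source (illustrative fields).
type SourceConfig struct {
	Name string `json:"name"`
	Type string `json:"type"` // "yum-baseurl", "yum-metalink" or "yum-mirrorlist"
	URL  string `json:"url"`
}

// RepoConfig is the shape dnf-json expects (illustrative fields).
type RepoConfig struct {
	Id         string `json:"id"`
	BaseURL    string `json:"baseurl,omitempty"`
	Metalink   string `json:"metalink,omitempty"`
	MirrorList string `json:"mirrorlist,omitempty"`
}

// RepoConfig converts a user source into a repo that can be passed to
// dnf-json alongside the base repos, so user-added packages depsolve too.
func (s SourceConfig) RepoConfig() RepoConfig {
	repo := RepoConfig{Id: s.Name}
	switch s.Type {
	case "yum-baseurl":
		repo.BaseURL = s.URL
	case "yum-metalink":
		repo.Metalink = s.URL
	case "yum-mirrorlist":
		repo.MirrorList = s.URL
	}
	return repo
}

func main() {
	src := SourceConfig{Name: "custom", Type: "yum-baseurl", URL: "https://example.com/repo"}
	fmt.Printf("%+v\n", src.RepoConfig())
}
```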
1) additional qemu tests for the ami, vmdk, vhd, and openstack image
types
2) a new type of systemd-nspawn test for the tar, ext4, and partitioned
disk types
The systemd-nspawn tests use the loopback network interface directly
from the host, so it is necessary to tweak the settings of its SSH
server. This is done in a "script" stage using a simple "sed" command.
We want to test API methods which call dnf. Unfortunately, calling dnf
is an expensive operation: it requires network access and downloading
a lot of (meta)data. This commit changes the rpmmd implementation
so that it can be mocked.
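The usual Go approach, and one plausible shape for this change, is to put the dnf-calling code behind an interface so tests can substitute a fixture; all names below are illustrative:

```go
package rpmmd

// Package and RepoConfig are stand-ins for the real types.
type Package struct{ Name, Version string }
type PackageList []Package
type RepoConfig struct{ Id, BaseURL string }

// RPMMD is the interface the API consumes. The real implementation
// shells out to dnf-json; tests provide a mock instead.
type RPMMD interface {
	FetchPackageList(repos []RepoConfig) (PackageList, error)
}

// mock returns a fixed package list without touching the network.
type mock struct{ packages PackageList }

func (m mock) FetchPackageList([]RepoConfig) (PackageList, error) {
	return m.packages, nil
}
```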
For each blueprint name passed to the route, a list of the changes to
that blueprint will be returned.
weldr/tests: add blueprint changes test
In order to test blueprint changes, a blueprint must be created with a
unique id. Blueprint changes are not deleted when the blueprint is
deleted, so in order to test this against lorax, the blueprint must not
have been used/tested before. This id is created from a random int. The
test creates and deletes the same blueprint twice to check that each
creation updates the list of changes.
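A sketch of the test's skeleton, with the actual API calls elided:

```go
package weldr_test

import (
	"fmt"
	"math/rand"
	"testing"
)

// TestBlueprintChanges sketches the approach: a random id makes the
// blueprint name unique, which matters because changes outlive the
// blueprint's deletion on a long-running lorax-composer instance.
func TestBlueprintChanges(t *testing.T) {
	id := rand.Int()
	name := fmt.Sprintf("test-blueprint-changes-%d", id)

	for i := 0; i < 2; i++ {
		t.Logf("pass %d: pushing and deleting blueprint %q", i, name)
		// Push the blueprint, check that /blueprints/changes/<name>
		// gained an entry, then delete the blueprint (calls elided).
	}
}
```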
When a blueprint is pushed to the store, a change is also added to the
BlueprintChanges map. Each blueprint has a list of changes mapped to
their commit. This commit is a hex string based on the sha1 hash of
the current time, so every commit is unique. The message and timestamp
follow the format of lorax-composer.
Add a struct to store changes made to a blueprint. Each change contains
a commit, which is a hex string based on a sha1 hash; a message
describing the change; a revision, which will usually be null; a
timestamp; and the blueprint at the time of the change.
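A sketch of the struct and the commit generation as described; field details are illustrative:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"time"
)

// Blueprint is a stand-in for the real blueprint type.
type Blueprint struct {
	Name string `json:"name"`
}

// Change records one modification to a blueprint.
type Change struct {
	Commit    string    `json:"commit"`    // hex string from a sha1 hash
	Message   string    `json:"message"`   // description of the change
	Revision  *int      `json:"revision"`  // usually null
	Timestamp string    `json:"timestamp"` // lorax-composer's timestamp format
	Blueprint Blueprint `json:"blueprint"` // the blueprint at the time of the change
}

// newCommit hashes the current time, so every commit id is unique.
func newCommit() string {
	hash := sha1.Sum([]byte(time.Now().String()))
	return hex.EncodeToString(hash[:])
}

func main() {
	fmt.Println(newCommit())
}
```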
When dnf-json dumps the packages from the repos passed to it, it does
not sort them. In order to properly list and search the packages, the
package list is now sorted before being returned by the
FetchPackageList function.
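The sort itself is straightforward; a minimal sketch with an illustrative package type:

```go
package main

import (
	"fmt"
	"sort"
)

// Package is an illustrative stand-in for the real rpmmd package type.
type Package struct {
	Name, Version string
}

// sortPackages orders a dnf-json dump by name, as FetchPackageList now
// does before returning.
func sortPackages(pkgs []Package) {
	sort.Slice(pkgs, func(i, j int) bool {
		return pkgs[i].Name < pkgs[j].Name
	})
}

func main() {
	pkgs := []Package{{"zsh", "5.7"}, {"bash", "5.0"}, {"coreutils", "8.31"}}
	sortPackages(pkgs)
	fmt.Println(pkgs)
}
```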
According to the Fedora packaging guidelines, the rpm scriptlets must
call into systemd to make sure the services are enabled/disabled
correctly on install and uninstall.
Signed-off-by: Tom Gundersen <teg@jklm.no>
This makes no difference, so let's just put them where the Fedora
guidelines say they should be.
Also, make sure to own the containing directory.
Signed-off-by: Tom Gundersen <teg@jklm.no>
Addressing https://bugzilla.redhat.com/show_bug.cgi?id=1768774#c1
The review also points out that the Source0 URL is wrong; I fixed
this by pushing a new tag, `v1`, rather than just `1`.
Signed-off-by: Tom Gundersen <teg@jklm.no>
The test works by executing osbuild with a predefined pipeline. Then
the image boots and the testing script opens an SSH connection to the
running VM. If everything goes fine, `systemctl is-system-running` is
executed with the result `running` and the test case passes.
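The core assertion could look roughly like this; the address, port, and ssh options are assumptions:

```go
package tests

import (
	"os/exec"
	"strings"
	"testing"
)

// TestSystemRunning asserts over SSH that systemd in the booted VM
// reached the "running" state.
func TestSystemRunning(t *testing.T) {
	out, err := exec.Command(
		"ssh", "-o", "StrictHostKeyChecking=no", "-p", "2222",
		"root@localhost", "systemctl", "is-system-running",
	).Output()
	state := strings.TrimSpace(string(out))
	if state != "running" {
		t.Fatalf("system state is %q (ssh error: %v), want \"running\"", state, err)
	}
}
```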
The JSON definition of the test case also contains a blueprint that
should generate the desired pipeline, but it didn't work for me, so I'm
including it for future use from the golang unit tests.
The delete source handler now removes the leading "/" from the
parameters passed to it. Not removing the "/" caused sources not to be
deleted from the store, since they could not be found when their name
contained a "/" as the first character.
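The fix amounts to something like this inside the handler; parameter and function names are illustrative:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/julienschmidt/httprouter"
)

// deleteSourceName extracts the source name from a "*source" catch-all
// parameter, whose value starts with "/"; without the trim, lookups in
// the store fail and the source is never deleted.
func deleteSourceName(ps httprouter.Params) string {
	return strings.TrimPrefix(ps.ByName("source"), "/")
}

func main() {
	ps := httprouter.Params{{Key: "source", Value: "/my-source"}}
	fmt.Println(deleteSourceName(ps))
}
```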