NOTE that the tests against the rpmmd fixture for running and waiting composes
are a bit of a kludge. It is impossible to mock up a running or waiting compose
that will allow an actual cancel, so the tests check for the internal server
error generated by using the fixture data.
Getting this error means that the API code did its job and tried to
cancel the compose, so that should be a good compromise.
Fix the cancel API to allow a waiting compose to be canceled.
This also fixes the cancel return code to be 400; the lorax-composer
behavior was a bug, and using 400 allows composer-cli to properly
display the error.
Move the `ostreePullStageInputs()` function duplicated in all
distro definitions to the `osbuild2` package as
`NewOstreePullStageInputs()`.
Delete `stage_inputs.go` from all distro definitions.
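A call site in a distro definition now looks roughly like this; the parameter
list of the helper is an assumption for illustration, not the verbatim
signature in the `osbuild2` package.
```go
package fedora

import "github.com/osbuild/osbuild-composer/internal/osbuild2"

// Sketch only: the shared constructor replaces the per-distro
// ostreePullStageInputs() copies. The parameters shown here (source origin,
// commit checksum, ref) are assumed, not the exact signature.
func commitPullInputs(commitChecksum, ostreeRef string) {
	inputs := osbuild2.NewOstreePullStageInputs("org.osbuild.source", commitChecksum, ostreeRef)
	_ = inputs // passed to the ostree pull stage by the real pipeline code
}
```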
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `bootISOMonoStageInputs()` function duplicated in all
distro definitions to the `osbuild2` package as
`NewBootISOMonoStagePipelineTreeInputs()`.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `rpmStageInputs()` function duplicated in all
distro definitions to the `osbuild2` package as
`NewRpmStageSourceFilesInputs()`.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move `OSBuildMetadataToRPMs()` and `PackageMetadataToSignature()`
functions from the `rpmmd` package to `osbuild2` package to prevent
import cycles while de-duplicating `rpmStageInputs()` function from
`stage_inputs.go` of distro definitions.
Rename `PackageMetadataToSignature()` to
`RPMPackageMetadataToSignature()`, since it takes specifically
`RPMPackageMetadata` type as an argument.
Adjust affected parts of code (unit tests, cloudapi, worker).
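At call sites the adjustment is mechanical; a sketch, with the return type
assumed for illustration:
```go
package worker

import "github.com/osbuild/osbuild-composer/internal/osbuild2"

// Sketch of an adjusted call site. Only the package move and the RPM-specific
// name matter here; the *string return type is an assumption.
func signatureOf(pkgMD osbuild2.RPMPackageMetadata) *string {
	// previously: rpmmd.PackageMetadataToSignature(pkgMD)
	return osbuild2.RPMPackageMetadataToSignature(pkgMD)
}
```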
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `qemuStageInputs()` function duplicated in most
distro definitions to the `osbuild2` package as
`NewQemuStagePipelineFilesInputs()`.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `xorrisofsStageInputs()` function duplicated in all
distro definitions to the `osbuild2` package as
`NewXorrisofsStagePipelineTreeInputs()`.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `copyPipelineTreeInputs()` function duplicated in many
distro definitions to the `osbuild2` package as
`NewCopyStagePipelineTreeInputs()`.
This will prevent creating another copy of the code in rhel-84 for
the `gce` image.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `kernelVerStr()` function duplicated in many
distro definitions to the `rpmmd` package as
`GetVerStrFromPackageSpecListPanic()`.
I could not come up with a better name, sorry.
This will prevent creating another copy of the code in rhel-84 for
the `gce` image.
This change initially exposed a bug in the original implementation of
`kernelVerStr()`: since the first line allocated an empty structure into the
`kernelPkg` variable, it could never be `nil`, so the function never
panicked even if there was no `kernel` package in the PackageSpec list.
Fix all unit tests to provide valid arguments when calling the `Manifest()`
method of image types.
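For reference, the shape of the fixed logic (a simplified reconstruction, not
the verbatim code of `GetVerStrFromPackageSpecListPanic()`; the real helper may
format the full version-release string):
```go
package rpmmd

// Simplified reconstruction of the fixed helper. The important part is that
// kernelPkg starts out as a nil pointer; the original code allocated an empty
// struct up front, so the nil check below could never trigger.
func getKernelVerStr(packages []PackageSpec) string {
	var kernelPkg *PackageSpec
	for i := range packages {
		if packages[i].Name == "kernel" {
			kernelPkg = &packages[i]
			break
		}
	}
	if kernelPkg == nil {
		panic("kernel package not found in the PackageSpec list")
	}
	return kernelPkg.Version
}
```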
Signed-off-by: Tomas Hozza <thozza@redhat.com>
kernelVerStr fixup
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Move the `kernelCmdlineStageOptions()` function duplicated in many
distro definitions to the `osbuild2` package as
`NewKernelCmdlineStageOptions()`.
This will prevent creating another copy of the code in rhel-84 for the
`gce` image.
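With the shared helper, a distro definition builds the stage roughly like this
(the parameters and the stage constructor are assumptions for the sketch):
```go
package rhel84

import "github.com/osbuild/osbuild-composer/internal/osbuild2"

// Sketch only: one shared constructor instead of per-distro
// kernelCmdlineStageOptions() copies. The parameters (root filesystem UUID and
// the image type's kernel options) and the stage constructor are assumed.
func kernelCmdlineStage(rootFsUUID, kernelOptions string) *osbuild2.Stage {
	return osbuild2.NewKernelCmdlineStage(
		osbuild2.NewKernelCmdlineStageOptions(rootFsUUID, kernelOptions),
	)
}
```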
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The previous implementation of fsjobqueue is amazing but it has its drawbacks:
- dequeueing can only be done based on a job type
- it's limited to 100 jobs per job type
As we soon want to be able to dequeue by other criteria as well (job channel),
we need to refactor the queue.
The new implementation is more naive but also more flexible. It basically
works like the dbjobqueue - dequeueing goroutines listen for newly added
jobs. When that happens, a signal is sent to all of them and they all inspect
all pending jobs and dequeue the ones that match their needs. Goroutines that
don't find a suitable job wait for the next signal.
This is certainly a slower implementation, because every time a new job is added
to the queue, all dequeueing goroutines have to iterate over all pending jobs.
I think that's fine because fsjobqueue is not recommended for composer
instances under heavy load.
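In broad strokes, the new dequeue path looks like this (a simplified sketch of
the idea, not the actual fsjobqueue code):
```go
package fsjobqueue

import "sync"

// Simplified sketch: every dequeueing goroutine waits on a condition variable,
// each Enqueue wakes all of them up, and each waiter scans the pending jobs
// for one matching its criteria (job type, later also channel).
type queue struct {
	mu      sync.Mutex
	cond    *sync.Cond
	pending []job
}

type job struct {
	id      string
	jobType string
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *queue) Enqueue(j job) {
	q.mu.Lock()
	q.pending = append(q.pending, j)
	q.mu.Unlock()
	q.cond.Broadcast() // wake up every dequeueing goroutine
}

func (q *queue) Dequeue(jobTypes []string) job {
	q.mu.Lock()
	defer q.mu.Unlock()
	for {
		for i, j := range q.pending {
			if containsType(jobTypes, j.jobType) {
				q.pending = append(q.pending[:i], q.pending[i+1:]...)
				return j
			}
		}
		// No suitable pending job: block until the next Enqueue signal.
		q.cond.Wait()
	}
}

func containsType(types []string, t string) bool {
	for _, v := range types {
		if v == t {
			return true
		}
	}
	return false
}
```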
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
In the greenboot 0.13 release we updated the packaging because of the increase
in new tests and because it no longer made sense to have the packaging so granular.
Signed-off-by: Peter Robinson <pbrobinson@gmail.com>
When the server is restarted, the blueprint changes, which are only held
in memory, are lost. This checks for missing changes and returns an
error.
The test is also adjusted for the new error.
Related: rhbz#1922845
The blueprint name should never be empty, as it can cause other problems,
such as with the blueprints list results. Return an error if a blueprint with
an empty name is pushed to the store, either as a blueprint commit or as a
blueprint workspace.
Also adjusts the new test for the new error.
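The check itself is simple; a minimal sketch of what is enforced (the real
validation lives in the store's commit and workspace push paths and its error
text may differ):
```go
package store

import (
	"errors"

	"github.com/osbuild/osbuild-composer/internal/blueprint"
)

// Sketch only: reject blueprints without a name before they reach the store.
func checkBlueprintName(bp blueprint.Blueprint) error {
	if bp.Name == "" {
		return errors.New("empty blueprint name not allowed")
	}
	return nil
}
```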
Related: rhbz#1922845
There is a problem with blueprint changes: once the server is restarted,
the previous changes are all lost because they are not serialized to
disk.
This adds test fixture support so that new tests can be added before
fixing the problem. It adds store.FixtureOldChanges with blueprint
changes and empty blueprints.
Related: rhbz#1922845
- Any repository without package_sets is added to the general `Repos`
field of the DepsolveJob, just like before.
- Repositories with package_sets are added to the `PackageSetsRepos`
map, indexed by the package set names.
- Repositories defined in the customizations as `PayloadRepositories`
are considered to be associated only with the `PayloadPackageSets`
names from the image type and are added to the `PackageSetsRepos`
under the payload sets.
The repository collection and conversion of repository structs (from
Repository to RepoConfig) has been moved to a separate function.
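Roughly the shape of the job arguments after this change; field names follow
the description above, while json tags and any omitted fields are illustrative:
```go
package worker

import "github.com/osbuild/osbuild-composer/internal/rpmmd"

// Sketch of the repository split described above, not the full DepsolveJob.
type DepsolveJob struct {
	// Repositories without package_sets: used for every package set.
	Repos []rpmmd.RepoConfig `json:"repos"`
	// Repositories keyed by package set name, including payload repositories
	// under the image type's payload package set names.
	PackageSetsRepos map[string][]rpmmd.RepoConfig `json:"package_sets_repositories,omitempty"`
}
```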
Tags is an array of strings that associates repositories with package
sets. A repository tagged with a package set name will be used only for
the named package sets.
We have two fields, `Repos` and `PackageSets`, so rename
`PackageSetsRepositories` to `PackageSetsRepos` for consistency.
The struct is for internal use only, so the rename has no impact as long
as the serialised name (the json tag) stays the same.
Also, it's shorter.
Added a docstring to the struct that explains the arguments in the same
way as they are described for the `depsolve()` function.
Changing the name of the argument in the internal `depsolve()` function
for the same reasons.
When deploying an ostree commit, specify a remote, currently hard-coded
to `rhel-edge`, so that updates work automatically if they are served
from the same location the initial commit is pulled from.
NB: now that the remote is specified in the raw image, remove the
corresponding bits from the tests.
Signed-off-by: Antonio Murdaca <runcom@linux.com>
For the brew use-case, we also need to build AWS images containing RHUI. This
commit thus adds them.
Signed-off-by: Ondřej Budai <ondrej@budai.cz>
Currently the job metrics are namespaced with the composer
subsystem, i.e. `composer_worker`. Since we plan to split
the components to their own namespaces in app interface,
the worker subsystem should be split too.
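With client_golang this amounts to giving the worker metrics their own
subsystem; the metric below is illustrative, only the Namespace/Subsystem split
matters:
```go
package worker

import "github.com/prometheus/client_golang/prometheus"

// Illustrative metric definition: the worker gets its own subsystem instead of
// sharing the composer one. Names, labels and help text are placeholders.
var jobDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "osbuild",
	Subsystem: "worker", // split out of the shared composer subsystem
	Name:      "job_duration_seconds",
	Help:      "Duration of jobs processed by workers.",
}, []string{"type"})
```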
Fedora 33 is already EOL, therefore there is no point in supporting
image builds for it. Drop F33 from the distroregistry list and remove
F33 repositories definition.
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Setting the `crashkernel` option to the appropriate value is now done
by the `kexec-tools` package when it is installed and whenever a new kernel
is installed.
Regenerate relevant image test cases.
Fix #1819
Fix rhbz#2006692
Signed-off-by: Tomas Hozza <thozza@redhat.com>
This commit introduces the collection of error
metrics since it is now possible to differentiate
between internal errors and user input errors.
Additionally, the error status is reported for
job duration metrics.
For simplicity, the collection of the job metrics
was carried out in the job queue. This was only
being done in the dbqueue and not in the fsqueue.
This PR refactors the metric collection and moves
the job metrics to the worker server by adding a
wrapper function for enqueueing jobs, so that the
metrics only have to be recorded in one place when
queueing a job.
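A sketch of the wrapper idea; the jobqueue interface and the metrics helper
shown here are assumptions, not the exact worker code:
```go
package worker

import (
	"github.com/google/uuid"

	"github.com/osbuild/osbuild-composer/internal/jobqueue"
)

// Sketch only: every job submission goes through one method on the worker
// server, so the "job enqueued" metric is recorded in exactly one place
// instead of in every job queue implementation.
type Server struct {
	jobs jobqueue.JobQueue
}

// recordEnqueuedJobMetric is a hypothetical stand-in for the real metric call.
func recordEnqueuedJobMetric(jobType string) {}

func (s *Server) enqueue(jobType string, args interface{}, dependencies []uuid.UUID) (uuid.UUID, error) {
	id, err := s.jobs.Enqueue(jobType, args, dependencies)
	if err != nil {
		return uuid.Nil, err
	}
	recordEnqueuedJobMetric(jobType)
	return id, nil
}
```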
Refactor the current metric collection to make use
of re-usable functions, since some of the same queries
are repeated. This will also make it easier to move
the collection of metrics from the job queue.
The internal GCP package used `pkg.go.dev/google.golang.org/api` [1] to
interact with the Compute Engine API. Modify the package to use the new and
idiomatic `pkg.go.dev/cloud.google.com/go` [2] library for interacting
with the Compute Engine API. The new library has already been used to
interact with the Cloudbuild and Storage APIs. It was not used for Compute
Engine from the beginning because, at that time, it did not support Compute
Engine.
Update go.mod and vendored packages.
[1] https://github.com/googleapis/google-api-go-client
[2] https://github.com/googleapis/google-cloud-go
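For illustration, fetching an image with the new library's per-resource REST
clients looks roughly like this (project and image names are placeholders; the
import path of the generated request types depends on the library version):
```go
package main

import (
	"context"
	"fmt"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
)

func main() {
	ctx := context.Background()

	// The new library exposes per-resource REST clients instead of one big
	// service object. Credentials are picked up from the environment here.
	images, err := compute.NewImagesRESTClient(ctx)
	if err != nil {
		panic(err)
	}
	defer images.Close()

	// Fetch a single image; project and image names are placeholders.
	img, err := images.Get(ctx, &computepb.GetImageRequest{
		Project: "my-project",
		Image:   "my-image",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(img.GetSelfLink())
}
```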
Signed-off-by: Tomas Hozza <thozza@redhat.com>
Disable logging in via password authentication, since this is an
official Amazon marketplace requirement:
"Linux-based AMIs must not allow SSH password authentication.
Disable password authentication via your sshd_config file by
setting PasswordAuthentication to NO."
Section "Security policies" from
https://docs.aws.amazon.com/marketplace/latest/userguide/product-and-ami-policies.html
Clean up some implementation aspects of the Fedora distro definition:
- Do not have a default Fedora distro version and use `fedora` as the
package name in all places that use it, instead of `fedora33`.
- Fix bugs where wrong (Fedora 33) values were returned by `OSTreeRef()`
and `Releasever()` for newer Fedora releases.
- Test Fedora 35 in package unit tests.
- Add unit test for `OSTreeRef()` method.
- Use architecture name constants from `distro` package, instead of
string literals.
Fix #1802
Signed-off-by: Tomas Hozza <thozza@redhat.com>
The QEMU assembler in the Fedora distro definition for UEFI systems used
a longer-than-allowed label for the VFAT filesystem of the EFI System
Partition. The maximum allowed label length is 11 characters.
This worked before with older dosfstools, but in 2018 label validation was
added [1]. This change got into the v4.2 release of dosfstools, released in
January 2021, and since F34 this new version of dosfstools is present in the
Fedora repositories.
[1] ca54953476
Signed-off-by: Tomas Hozza <thozza@redhat.com>