split: replace internal packages with images library

Remove all the internal packages that are now in the
github.com/osbuild/images package and vendor it.

A new function in internal/blueprint/ converts an osbuild-composer
blueprint to an images blueprint.  This is necessary while the
blueprint implementation is kept in both packages.  In the future, the
images package will change its blueprint (and most likely rename it),
and the current blueprint will only be part of the osbuild-composer
internals and interface.  The Convert() function will be responsible
for converting that blueprint into the new configuration object.
Achilleas Koutsou 2023-06-14 16:34:41 +02:00 committed by Sanne Raymaekers
parent d59199670f
commit 0e4a9e586f
446 changed files with 5690 additions and 13312 deletions
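As an illustration only (this sketch is not part of the commit), the Convert() helper described in the commit message could look roughly like the following; the field set and the layout of the images blueprint type are assumptions:

package blueprint

import images "github.com/osbuild/images/pkg/blueprint"

// Blueprint is a trimmed-down stand-in for osbuild-composer's blueprint
// type; the real type also carries packages, customizations, etc.
type Blueprint struct {
    Name        string `json:"name" toml:"name"`
    Description string `json:"description" toml:"description"`
    Version     string `json:"version,omitempty" toml:"version,omitempty"`
}

// Convert maps a composer blueprint onto the images blueprint type.
func Convert(bp Blueprint) images.Blueprint {
    return images.Blueprint{
        Name:        bp.Name,
        Description: bp.Description,
        Version:     bp.Version,
    }
}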

201
vendor/github.com/osbuild/images/LICENSE generated vendored Normal file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,38 @@
package common
import "fmt"
const (
KiloByte = 1000 // kB
KibiByte = 1024 // KiB
MegaByte = 1000 * 1000 // MB
MebiByte = 1024 * 1024 // MiB
GigaByte = 1000 * 1000 * 1000 // GB
GibiByte = 1024 * 1024 * 1024 // GiB
TeraByte = 1000 * 1000 * 1000 * 1000 // TB
TebiByte = 1024 * 1024 * 1024 * 1024 // TiB
)
// These constants are set during buildtime using additional
// compiler flags. Not all of them are necessarily defined
// because RPMs can be built from a tarball and spec file without
// being in a git repository. On the other hand when building
// composer inside of a container, there is no RPM layer so in
// that case the RPM version doesn't exist at all.
var (
// Git revision from which this code was built
GitRev = "undefined"
// RPM Version
RpmVersion = "undefined"
)
func BuildVersion() string {
if GitRev != "undefined" {
return fmt.Sprintf("git-rev:%s", GitRev)
} else if RpmVersion != "undefined" {
return fmt.Sprintf("NEVRA:%s", RpmVersion)
} else {
return "devel"
}
}
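
A hedged usage note, not part of the diff: these variables are meant to be overridden at build time with -ldflags. The import path below is assumed from the vendor directory layout, and since this is an internal package the example only compiles from inside the same module.

package main

import (
    "fmt"

    "github.com/osbuild/images/internal/common" // assumed path
)

// Built with something like:
//   go build -ldflags "-X github.com/osbuild/images/internal/common.GitRev=$(git rev-parse HEAD)"
func main() {
    // Prints "git-rev:<sha>" when GitRev was injected, otherwise the RPM
    // NEVRA or "devel".
    fmt.Println(common.BuildVersion())
}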


@ -0,0 +1,83 @@
package common
import (
"bufio"
"errors"
"io"
"os"
"strings"
)
func GetHostDistroName() (string, bool, bool, error) {
f, err := os.Open("/etc/os-release")
if err != nil {
return "", false, false, err
}
defer f.Close()
osrelease, err := readOSRelease(f)
if err != nil {
return "", false, false, err
}
isStream := osrelease["NAME"] == "CentOS Stream"
version := strings.Split(osrelease["VERSION_ID"], ".")
name := osrelease["ID"] + "-" + strings.Join(version, "")
// TODO: We should probably index these things by the full CPE
beta := strings.Contains(osrelease["CPE_NAME"], "beta")
return name, beta, isStream, nil
}
func readOSRelease(r io.Reader) (map[string]string, error) {
osrelease := make(map[string]string)
scanner := bufio.NewScanner(r)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if len(line) == 0 {
continue
}
parts := strings.SplitN(line, "=", 2)
if len(parts) != 2 {
return nil, errors.New("readOSRelease: invalid input")
}
key := strings.TrimSpace(parts[0])
// drop all surrounding whitespace and double-quotes
value := strings.Trim(strings.TrimSpace(parts[1]), "\"")
osrelease[key] = value
}
return osrelease, nil
}
// Returns true if the version represented by the first argument is
// semantically older than the second.
// Meant to be used for comparing distro versions for differences between minor
// releases.
// Evaluates to false if a and b are equal.
// Assumes any missing components are 0, so 8 < 8.1.
func VersionLessThan(a, b string) bool {
aParts := strings.Split(a, ".")
bParts := strings.Split(b, ".")
// pad shortest argument with zeroes
for len(aParts) < len(bParts) {
aParts = append(aParts, "0")
}
for len(bParts) < len(aParts) {
bParts = append(bParts, "0")
}
for idx := 0; idx < len(aParts); idx++ {
if aParts[idx] < bParts[idx] {
return true
} else if aParts[idx] > bParts[idx] {
return false
}
}
// equal
return false
}
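
For orientation (same assumption about the internal/common import path): GetHostDistroName derives a name such as "fedora-38" from /etc/os-release, and VersionLessThan pads missing components with zeroes before comparing.

package main

import (
    "fmt"
    "log"

    "github.com/osbuild/images/internal/common" // assumed path
)

func main() {
    name, beta, isStream, err := common.GetHostDistroName()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(name, beta, isStream) // e.g. "fedora-38 false false"

    fmt.Println(common.VersionLessThan("8", "8.1"))   // true: "8" is treated as "8.0"
    fmt.Println(common.VersionLessThan("8.4", "8.4")) // false: equal versions
}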


@ -0,0 +1,175 @@
package common
import (
"encoding/json"
"io"
"github.com/labstack/gommon/log"
"github.com/sirupsen/logrus"
)
// EchoLogrusLogger extend logrus.Logger
type EchoLogrusLogger struct {
*logrus.Logger
}
var commonLogger = &EchoLogrusLogger{
Logger: logrus.StandardLogger(),
}
func Logger() *EchoLogrusLogger {
return commonLogger
}
func toEchoLevel(level logrus.Level) log.Lvl {
switch level {
case logrus.DebugLevel:
return log.DEBUG
case logrus.InfoLevel:
return log.INFO
case logrus.WarnLevel:
return log.WARN
case logrus.ErrorLevel:
return log.ERROR
}
return log.OFF
}
func (l *EchoLogrusLogger) Output() io.Writer {
return l.Out
}
func (l *EchoLogrusLogger) SetOutput(w io.Writer) {
// disable operations that would change behavior of global logrus logger.
}
func (l *EchoLogrusLogger) Level() log.Lvl {
return toEchoLevel(l.Logger.Level)
}
func (l *EchoLogrusLogger) SetLevel(v log.Lvl) {
// disable operations that would change behavior of global logrus logger.
}
func (l *EchoLogrusLogger) SetHeader(h string) {
}
func (l *EchoLogrusLogger) Prefix() string {
return ""
}
func (l *EchoLogrusLogger) SetPrefix(p string) {
}
func (l *EchoLogrusLogger) Print(i ...interface{}) {
l.Logger.Print(i...)
}
func (l *EchoLogrusLogger) Printf(format string, args ...interface{}) {
l.Logger.Printf(format, args...)
}
func (l *EchoLogrusLogger) Printj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Println(string(b))
}
func (l *EchoLogrusLogger) Debug(i ...interface{}) {
l.Logger.Debug(i...)
}
func (l *EchoLogrusLogger) Debugf(format string, args ...interface{}) {
l.Logger.Debugf(format, args...)
}
func (l *EchoLogrusLogger) Debugj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Debugln(string(b))
}
func (l *EchoLogrusLogger) Info(i ...interface{}) {
l.Logger.Info(i...)
}
func (l *EchoLogrusLogger) Infof(format string, args ...interface{}) {
l.Logger.Infof(format, args...)
}
func (l *EchoLogrusLogger) Infoj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Infoln(string(b))
}
func (l *EchoLogrusLogger) Warn(i ...interface{}) {
l.Logger.Warn(i...)
}
func (l *EchoLogrusLogger) Warnf(format string, args ...interface{}) {
l.Logger.Warnf(format, args...)
}
func (l *EchoLogrusLogger) Warnj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Warnln(string(b))
}
func (l *EchoLogrusLogger) Error(i ...interface{}) {
l.Logger.Error(i...)
}
func (l *EchoLogrusLogger) Errorf(format string, args ...interface{}) {
l.Logger.Errorf(format, args...)
}
func (l *EchoLogrusLogger) Errorj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Errorln(string(b))
}
func (l *EchoLogrusLogger) Fatal(i ...interface{}) {
l.Logger.Fatal(i...)
}
func (l *EchoLogrusLogger) Fatalf(format string, args ...interface{}) {
l.Logger.Fatalf(format, args...)
}
func (l *EchoLogrusLogger) Fatalj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Fatalln(string(b))
}
func (l *EchoLogrusLogger) Panic(i ...interface{}) {
l.Logger.Panic(i...)
}
func (l *EchoLogrusLogger) Panicf(format string, args ...interface{}) {
l.Logger.Panicf(format, args...)
}
func (l *EchoLogrusLogger) Panicj(j log.JSON) {
b, err := json.Marshal(j)
if err != nil {
panic(err)
}
l.Logger.Panicln(string(b))
}
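
For context, a sketch (not from this diff) of how the adapter is wired into an echo server so that echo's log output goes through the process-wide logrus logger; the import path is an assumption:

package main

import (
    "github.com/labstack/echo/v4"

    "github.com/osbuild/images/internal/common" // assumed path
)

func main() {
    e := echo.New()
    // Replace echo's default logger with the logrus-backed adapter.
    e.Logger = common.Logger()
    e.Logger.Info("echo now logs through logrus")
}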


@ -0,0 +1,103 @@
package common
import (
"fmt"
"io"
"regexp"
"runtime"
"sort"
"strconv"
"strings"
)
var RuntimeGOARCH = runtime.GOARCH
func CurrentArch() string {
if RuntimeGOARCH == "amd64" {
return "x86_64"
} else if RuntimeGOARCH == "arm64" {
return "aarch64"
} else if RuntimeGOARCH == "ppc64le" {
return "ppc64le"
} else if RuntimeGOARCH == "s390x" {
return "s390x"
} else {
panic("unsupported architecture")
}
}
func PanicOnError(err error) {
if err != nil {
panic(err)
}
}
// IsStringInSortedSlice returns true if the string is present, false if not
// slice must be sorted
func IsStringInSortedSlice(slice []string, s string) bool {
i := sort.SearchStrings(slice, s)
if i < len(slice) && slice[i] == s {
return true
}
return false
}
// DataSizeToUint64 converts a size specified as a string in KB/KiB/MB/etc. to
// a number of bytes represented by uint64.
func DataSizeToUint64(size string) (uint64, error) {
// Pre-process the input
size = strings.TrimSpace(size)
// Get the number from the string
plain_number := regexp.MustCompile(`[[:digit:]]+`)
number_as_str := plain_number.FindString(size)
if number_as_str == "" {
return 0, fmt.Errorf("the size string doesn't contain any number: %s", size)
}
// Parse the number into integer
return_size, err := strconv.ParseUint(number_as_str, 10, 64)
if err != nil {
return 0, fmt.Errorf("failed to parse size as integer: %s", number_as_str)
}
// List of all supported units (from kB to TB and KiB to TiB)
supported_units := []struct {
re *regexp.Regexp
multiple uint64
}{
{regexp.MustCompile(`^\s*[[:digit:]]+\s*kB$`), KiloByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*KiB$`), KibiByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*MB$`), MegaByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*MiB$`), MebiByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*GB$`), GigaByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*GiB$`), GibiByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*TB$`), TeraByte},
{regexp.MustCompile(`^\s*[[:digit:]]+\s*TiB$`), TebiByte},
{regexp.MustCompile(`^\s*[[:digit:]]+$`), 1},
}
for _, unit := range supported_units {
if unit.re.MatchString(size) {
return_size *= unit.multiple
return return_size, nil
}
}
// In case the string didn't match any of the above regexes, return an error
// even if a number was found. This is to prevent users from submitting
// unknown units.
return 0, fmt.Errorf("unknown data size units in string: %s", size)
}
// NopSeekCloser returns an io.ReadSeekCloser with a no-op Close method
// wrapping the provided io.ReadSeeker r.
func NopSeekCloser(r io.ReadSeeker) io.ReadSeekCloser {
return nopSeekCloser{r}
}
type nopSeekCloser struct {
io.ReadSeeker
}
func (nopSeekCloser) Close() error { return nil }
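
A quick illustration of the size parser and architecture helper (import path assumed as before):

package main

import (
    "fmt"

    "github.com/osbuild/images/internal/common" // assumed path
)

func main() {
    size, err := common.DataSizeToUint64("10 GiB")
    if err != nil {
        panic(err)
    }
    fmt.Println(size) // 10737418240, i.e. 10 * GibiByte

    fmt.Println(common.CurrentArch()) // "x86_64" when running on amd64
}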


@ -0,0 +1,22 @@
package common
import (
"github.com/labstack/echo/v4"
"github.com/segmentio/ksuid"
)
const OperationIDKey string = "operationID"
// Adds a time-sortable globally unique identifier to an echo.Context if not already set
func OperationIDMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
if c.Get(OperationIDKey) == nil {
c.Set(OperationIDKey, GenerateOperationID())
}
return next(c)
}
}
func GenerateOperationID() string {
return ksuid.New().String()
}
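
A sketch of registering the middleware on an echo instance (illustrative only; import path assumed):

package main

import (
    "net/http"

    "github.com/labstack/echo/v4"

    "github.com/osbuild/images/internal/common" // assumed path
)

func main() {
    e := echo.New()
    e.Use(common.OperationIDMiddleware)
    e.GET("/status", func(c echo.Context) error {
        // Every request now carries a time-sortable unique ID.
        return c.String(http.StatusOK, c.Get(common.OperationIDKey).(string))
    })
    e.Logger.Fatal(e.Start(":8080"))
}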


@ -0,0 +1,5 @@
package common
func ToPtr[T any](x T) *T {
return &x
}


@ -0,0 +1,71 @@
package common
import (
"encoding/json"
)
func getStateMapping() []string {
return []string{"WAITING", "RUNNING", "FINISHED", "FAILED"}
}
type ImageBuildState int
const (
IBWaiting ImageBuildState = iota
IBRunning
IBFinished
IBFailed
)
// CustomJsonConversionError is thrown when parsing strings into enumerations
type CustomJsonConversionError struct {
reason string
}
// Error returns the error as a string
func (err *CustomJsonConversionError) Error() string {
return err.reason
}
// CustomTypeError is thrown when parsing strings into enumerations
type CustomTypeError struct {
reason string
}
// Error returns the error as a string
func (err *CustomTypeError) Error() string {
return err.reason
}
// ToString converts ImageBuildState into a human readable string
func (ibs ImageBuildState) ToString() string {
return getStateMapping()[int(ibs)]
}
func unmarshalStateHelper(data []byte, mapping []string) (int, error) {
var stringInput string
err := json.Unmarshal(data, &stringInput)
if err != nil {
return 0, err
}
for n, str := range mapping {
if str == stringInput {
return n, nil
}
}
return 0, &CustomJsonConversionError{"invalid image build status:" + stringInput}
}
// UnmarshalJSON converts a JSON string into an ImageBuildState
func (ibs *ImageBuildState) UnmarshalJSON(data []byte) error {
val, err := unmarshalStateHelper(data, getStateMapping())
if err != nil {
return err
}
*ibs = ImageBuildState(val)
return nil
}
func (ibs ImageBuildState) MarshalJSON() ([]byte, error) {
return json.Marshal(getStateMapping()[ibs])
}
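
The JSON methods round-trip through the state names, for example (import path assumed):

package main

import (
    "encoding/json"
    "fmt"

    "github.com/osbuild/images/internal/common" // assumed path
)

func main() {
    b, err := json.Marshal(common.IBRunning)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // "RUNNING"

    var state common.ImageBuildState
    if err := json.Unmarshal([]byte(`"FAILED"`), &state); err != nil {
        panic(err)
    }
    fmt.Println(state == common.IBFailed) // true
}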


@ -0,0 +1,325 @@
package dnfjson
import (
"fmt"
"io/fs"
"os"
"path/filepath"
"sort"
"sync"
"time"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/gobwas/glob"
)
// CleanupOldCacheDirs will remove cache directories for unsupported distros.
// E.g. once support for a Fedora release stops and it is removed, this will
// delete its directory under root.
//
// A happy side effect of this is that it will delete old cache directories
// and files from before the switch to per-distro cache directories.
//
// NOTE: This does not return any errors. This is because the most common one
// will be a nonexistent directory, which will be created later during initial
// cache creation. Any other errors, like permission issues, will be caught by
// later use of the cache, e.g. touchRepo.
func CleanupOldCacheDirs(root string, distros []string) {
dirs, _ := os.ReadDir(root)
for _, e := range dirs {
if strSliceContains(distros, e.Name()) {
// known distro
continue
}
if e.IsDir() {
// Remove the directory and everything under it
_ = os.RemoveAll(filepath.Join(root, e.Name()))
} else {
_ = os.Remove(filepath.Join(root, e.Name()))
}
}
}
// strSliceContains returns true if the elem string is in the slc array
func strSliceContains(slc []string, elem string) bool {
for _, s := range slc {
if elem == s {
return true
}
}
return false
}
// global cache locker
var cacheLocks sync.Map
// A collection of directory paths, their total size, and their most recent
// modification time.
type pathInfo struct {
paths []string
size uint64
mtime time.Time
}
type rpmCache struct {
// root path for the cache
root string
// individual repository cache data
repoElements map[string]pathInfo
// list of known repository IDs, sorted by mtime
repoRecency []string
// total cache size
size uint64
// max cache size
maxSize uint64
// locker for this cache directory
locker *sync.RWMutex
}
func newRPMCache(path string, maxSize uint64) *rpmCache {
absPath, err := filepath.Abs(path) // convert to abs if it's not already
if err != nil {
panic(err) // can only happen if the CWD does not exist and the path isn't already absolute
}
path = absPath
locker := new(sync.RWMutex)
if l, loaded := cacheLocks.LoadOrStore(path, locker); loaded {
// value existed and was loaded
locker = l.(*sync.RWMutex)
}
r := &rpmCache{
root: path,
repoElements: make(map[string]pathInfo),
size: 0,
maxSize: maxSize,
locker: locker,
}
// collect existing cache paths and timestamps
r.updateInfo()
return r
}
// updateInfo updates the repoPaths and repoRecency fields of the rpmCache.
//
// NOTE: This does not return any errors. This is because the most common one
// will be a nonexistent directory, which will be created later during initial
// cache creation. Any other errors, like permission issues, will be caught by
// later use of the cache, e.g. touchRepo.
func (r *rpmCache) updateInfo() {
dirs, _ := os.ReadDir(r.root)
for _, d := range dirs {
r.updateCacheDirInfo(filepath.Join(r.root, d.Name()))
}
}
func (r *rpmCache) updateCacheDirInfo(path string) {
// See updateInfo NOTE on error handling
cacheEntries, _ := os.ReadDir(path)
// each repository has multiple cache entries (3 on average), so using the
// number of cacheEntries to allocate the map and ID slice is a high upper
// bound, but guarantees we won't need to grow and reallocate either.
repos := make(map[string]pathInfo, len(cacheEntries))
repoIDs := make([]string, 0, len(cacheEntries))
var totalSize uint64
// Collect the paths grouped by their repo ID
// We assume the first 64 characters of a file or directory name are the
// repository ID because we use a sha256 sum of the repository config to
// create the ID (64 hex chars)
for _, entry := range cacheEntries {
eInfo, err := entry.Info()
if err != nil {
// skip it
continue
}
fname := entry.Name()
if len(fname) < 64 {
// unknown file in cache; ignore
continue
}
repoID := fname[:64]
repo, ok := repos[repoID]
if !ok {
// new repo ID
repoIDs = append(repoIDs, repoID)
}
mtime := eInfo.ModTime()
ePath := filepath.Join(path, entry.Name())
// calculate and add entry size
size, err := dirSize(ePath)
if err != nil {
// skip it
continue
}
repo.size += size
totalSize += size
// add path
repo.paths = append(repo.paths, ePath)
// if for some reason the mtimes of the various entries of a single
// repository are out of sync, use the most recent one
if repo.mtime.Before(mtime) {
repo.mtime = mtime
}
// update the collection
repos[repoID] = repo
}
sortFunc := func(idx, jdx int) bool {
ir := repos[repoIDs[idx]]
jr := repos[repoIDs[jdx]]
return ir.mtime.Before(jr.mtime)
}
// sort IDs by mtime (oldest first)
sort.Slice(repoIDs, sortFunc)
r.size = totalSize
r.repoElements = repos
r.repoRecency = repoIDs
}
func (r *rpmCache) shrink() error {
r.locker.Lock()
defer r.locker.Unlock()
// start deleting until we drop below r.maxSize
nDeleted := 0
for idx := 0; idx < len(r.repoRecency) && r.size >= r.maxSize; idx++ {
repoID := r.repoRecency[idx]
nDeleted++
repo, ok := r.repoElements[repoID]
if !ok {
// cache inconsistency?
// ignore and let the ID be removed from the recency list
continue
}
for _, gPath := range repo.paths {
if err := os.RemoveAll(gPath); err != nil {
return err
}
}
r.size -= repo.size
delete(r.repoElements, repoID)
}
// update recency list
r.repoRecency = r.repoRecency[nDeleted:]
return nil
}
// Update file atime and mtime on the filesystem to time t for all files in the
// root of the cache that match the repo ID. This should be called whenever a
// repository is used.
// This function does not update the internal cache info. A call to
// updateInfo() should be made after touching one or more repositories.
func (r *rpmCache) touchRepo(repoID string, t time.Time) error {
repoGlob, err := glob.Compile(fmt.Sprintf("%s*", repoID))
if err != nil {
return err
}
distroDirs, err := os.ReadDir(r.root)
if err != nil {
return err
}
for _, d := range distroDirs {
// we only touch the top-level directories and files of the cache
cacheEntries, err := os.ReadDir(filepath.Join(r.root, d.Name()))
if err != nil {
return err
}
for _, cacheEntry := range cacheEntries {
if repoGlob.Match(cacheEntry.Name()) {
path := filepath.Join(r.root, d.Name(), cacheEntry.Name())
if err := os.Chtimes(path, t, t); err != nil {
return err
}
}
}
}
return nil
}
func dirSize(path string) (uint64, error) {
var size uint64
sizer := func(path string, info fs.FileInfo, err error) error {
if err != nil {
return err
}
size += uint64(info.Size())
return nil
}
err := filepath.Walk(path, sizer)
return size, err
}
// dnfResults holds the results of a dnfjson request
// expire is the time the request was made, used to expire the entry
type dnfResults struct {
expire time.Time
pkgs rpmmd.PackageList
}
// dnfCache is a cache of results from dnf-json requests
type dnfCache struct {
results map[string]dnfResults
timeout time.Duration
*sync.RWMutex
}
// NewDNFCache returns a pointer to an initialized dnfCache struct
func NewDNFCache(timeout time.Duration) *dnfCache {
return &dnfCache{
results: make(map[string]dnfResults),
timeout: timeout,
RWMutex: new(sync.RWMutex),
}
}
// CleanCache deletes expired cache entries.
// This prevents the cache from retaining results longer than the timeout interval.
func (d *dnfCache) CleanCache() {
d.Lock()
defer d.Unlock()
// Delete expired resultCache entries
for k := range d.results {
if time.Since(d.results[k].expire) > d.timeout {
delete(d.results, k)
}
}
}
// Get returns the package list and true if cached
// or an empty list and false if not cached or if cache is timed out
func (d *dnfCache) Get(hash string) (rpmmd.PackageList, bool) {
d.RLock()
defer d.RUnlock()
result, ok := d.results[hash]
if !ok || time.Since(result.expire) >= d.timeout {
return rpmmd.PackageList{}, false
}
return result.pkgs, true
}
// Store saves the package list in the cache
func (d *dnfCache) Store(hash string, pkgs rpmmd.PackageList) {
d.Lock()
defer d.Unlock()
d.results[hash] = dnfResults{expire: time.Now(), pkgs: pkgs}
}
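
Both cache types are mostly internal plumbing for the Solver, but CleanupOldCacheDirs is exported and is called with the cache root plus the currently supported distro names, roughly as below; the cache-root path here is a made-up example and the import path is assumed:

package main

import "github.com/osbuild/images/internal/dnfjson" // assumed path

func main() {
    // Remove per-distro cache directories that no longer correspond to a
    // supported distribution; errors are deliberately ignored inside.
    supported := []string{"fedora-38", "fedora-39", "centos-9"}
    dnfjson.CleanupOldCacheDirs("/var/cache/osbuild-composer/rpmmd", supported)
}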


@ -0,0 +1,651 @@
// Package dnfjson is an interface to the dnf-json Python script that is
// packaged with the osbuild-composer project. The core component of this
// package is the Solver type. The Solver can be configured with
// distribution-specific values (platform ID, architecture, and version
// information) and provides methods for dependency resolution (Depsolve) and
// retrieving a full list of repository package metadata (FetchMetadata).
//
// Alternatively, a BaseSolver can be created which represents an un-configured
// Solver. This type can't be used for depsolving, but can be used to create
// configured Solver instances sharing the same cache directory.
//
// This package relies on the types defined in rpmmd to describe RPM package
// metadata.
package dnfjson
import (
"bytes"
"crypto/sha256"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
"strings"
"time"
"github.com/osbuild/images/pkg/rhsm"
"github.com/osbuild/images/pkg/rpmmd"
)
// BaseSolver defines the basic solver configuration without platform
// information. It can be used to create configured Solver instances with the
// NewWithConfig() method. A BaseSolver maintains the global repository cache
// directory.
type BaseSolver struct {
// Cache information
cache *rpmCache
// Path to the dnf-json binary and optional args (default: "/usr/libexec/osbuild-composer/dnf-json")
dnfJsonCmd []string
resultCache *dnfCache
}
// Create a new unconfigured BaseSolver (without platform information). It can
// be used to create configured Solver instances with the NewWithConfig()
// method.
func NewBaseSolver(cacheDir string) *BaseSolver {
return &BaseSolver{
cache: newRPMCache(cacheDir, 1024*1024*1024), // 1 GiB
dnfJsonCmd: []string{"/usr/libexec/osbuild-composer/dnf-json"},
resultCache: NewDNFCache(60 * time.Second),
}
}
// SetMaxCacheSize sets the maximum size for the global repository metadata
// cache. This is the maximum size of the cache after a CleanCache()
// call. Cache cleanup is never performed automatically.
func (s *BaseSolver) SetMaxCacheSize(size uint64) {
s.cache.maxSize = size
}
// SetDNFJSONPath sets the path to the dnf-json binary and optionally any command line arguments.
func (s *BaseSolver) SetDNFJSONPath(cmd string, args ...string) {
s.dnfJsonCmd = append([]string{cmd}, args...)
}
// NewWithConfig initialises a Solver with the platform information and the
// BaseSolver's cache directory and dnf-json path.
// It also loads system subscription information.
func (bs *BaseSolver) NewWithConfig(modulePlatformID, releaseVer, arch, distro string) *Solver {
s := new(Solver)
s.BaseSolver = *bs
s.modulePlatformID = modulePlatformID
s.arch = arch
s.releaseVer = releaseVer
s.distro = distro
subs, _ := rhsm.LoadSystemSubscriptions()
s.subscriptions = subs
return s
}
// CleanCache deletes the least recently used repository metadata caches until
// the total size of the cache falls below the configured maximum size (see
// SetMaxCacheSize()).
func (bs *BaseSolver) CleanCache() error {
bs.resultCache.CleanCache()
return bs.cache.shrink()
}
// Solver is configured with system information in order to resolve
// dependencies for RPM packages using DNF.
type Solver struct {
BaseSolver
// Platform ID, e.g., "platform:el8"
modulePlatformID string
// System architecture
arch string
// Release version of the distro. This is used in repo files on the host
// system and required for subscription support.
releaseVer string
// Full distribution string, e.g. fedora-38, used to create separate dnf cache directories
// for each distribution.
distro string
subscriptions *rhsm.Subscriptions
}
// Create a new Solver with the given configuration. Initialising a Solver also loads system subscription information.
func NewSolver(modulePlatformID, releaseVer, arch, distro, cacheDir string) *Solver {
s := NewBaseSolver(cacheDir)
return s.NewWithConfig(modulePlatformID, releaseVer, arch, distro)
}
// GetCacheDir returns a distro specific rpm cache directory
// It ensures that the distro name is below the root cache directory, and if there is
// a problem it returns the root cache instead of an error.
func (s *Solver) GetCacheDir() string {
b := filepath.Base(s.distro)
if b == "." || b == "/" {
return s.cache.root
}
return filepath.Join(s.cache.root, s.distro)
}
// Depsolve the list of required package sets with explicit excludes using
// their associated repositories. Each package set is depsolved as a separate
// transactions in a chain. It returns a list of all packages (with solved
// dependencies) that will be installed into the system.
func (s *Solver) Depsolve(pkgSets []rpmmd.PackageSet) ([]rpmmd.PackageSpec, error) {
req, repoMap, err := s.makeDepsolveRequest(pkgSets)
if err != nil {
return nil, err
}
// get non-exclusive read lock
s.cache.locker.RLock()
defer s.cache.locker.RUnlock()
output, err := run(s.dnfJsonCmd, req)
if err != nil {
return nil, err
}
// touch repos to now
now := time.Now().Local()
for _, r := range repoMap {
// ignore errors
_ = s.cache.touchRepo(r.Hash(), now)
}
s.cache.updateInfo()
var result packageSpecs
if err := json.Unmarshal(output, &result); err != nil {
return nil, err
}
return result.toRPMMD(repoMap), nil
}
// FetchMetadata returns the list of all the available packages in repos and
// their info.
func (s *Solver) FetchMetadata(repos []rpmmd.RepoConfig) (rpmmd.PackageList, error) {
req, err := s.makeDumpRequest(repos)
if err != nil {
return nil, err
}
// get non-exclusive read lock
s.cache.locker.RLock()
defer s.cache.locker.RUnlock()
// Is this cached?
if pkgs, ok := s.resultCache.Get(req.Hash()); ok {
return pkgs, nil
}
result, err := run(s.dnfJsonCmd, req)
if err != nil {
return nil, err
}
// touch repos to now
now := time.Now().Local()
for _, r := range repos {
// ignore errors
_ = s.cache.touchRepo(r.Hash(), now)
}
s.cache.updateInfo()
var pkgs rpmmd.PackageList
if err := json.Unmarshal(result, &pkgs); err != nil {
return nil, err
}
sortID := func(pkg rpmmd.Package) string {
return fmt.Sprintf("%s-%s-%s", pkg.Name, pkg.Version, pkg.Release)
}
sort.Slice(pkgs, func(i, j int) bool {
return sortID(pkgs[i]) < sortID(pkgs[j])
})
// Cache the results
s.resultCache.Store(req.Hash(), pkgs)
return pkgs, nil
}
// SearchMetadata searches for packages and returns a list of the info for matches.
func (s *Solver) SearchMetadata(repos []rpmmd.RepoConfig, packages []string) (rpmmd.PackageList, error) {
req, err := s.makeSearchRequest(repos, packages)
if err != nil {
return nil, err
}
// get non-exclusive read lock
s.cache.locker.RLock()
defer s.cache.locker.RUnlock()
// Is this cached?
if pkgs, ok := s.resultCache.Get(req.Hash()); ok {
return pkgs, nil
}
result, err := run(s.dnfJsonCmd, req)
if err != nil {
return nil, err
}
// touch repos to now
now := time.Now().Local()
for _, r := range repos {
// ignore errors
_ = s.cache.touchRepo(r.Hash(), now)
}
s.cache.updateInfo()
var pkgs rpmmd.PackageList
if err := json.Unmarshal(result, &pkgs); err != nil {
return nil, err
}
sortID := func(pkg rpmmd.Package) string {
return fmt.Sprintf("%s-%s-%s", pkg.Name, pkg.Version, pkg.Release)
}
sort.Slice(pkgs, func(i, j int) bool {
return sortID(pkgs[i]) < sortID(pkgs[j])
})
// Cache the results
s.resultCache.Store(req.Hash(), pkgs)
return pkgs, nil
}
func (s *Solver) reposFromRPMMD(rpmRepos []rpmmd.RepoConfig) ([]repoConfig, error) {
dnfRepos := make([]repoConfig, len(rpmRepos))
for idx, rr := range rpmRepos {
dr := repoConfig{
ID: rr.Hash(),
Name: rr.Name,
BaseURLs: rr.BaseURLs,
Metalink: rr.Metalink,
MirrorList: rr.MirrorList,
GPGKeys: rr.GPGKeys,
MetadataExpire: rr.MetadataExpire,
repoHash: rr.Hash(),
}
if rr.CheckGPG != nil {
dr.CheckGPG = *rr.CheckGPG
}
if rr.CheckRepoGPG != nil {
dr.CheckRepoGPG = *rr.CheckRepoGPG
}
if rr.IgnoreSSL != nil {
dr.IgnoreSSL = *rr.IgnoreSSL
}
if rr.RHSM {
if s.subscriptions == nil {
return nil, fmt.Errorf("This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources.")
}
secrets, err := s.subscriptions.GetSecretsForBaseurl(rr.BaseURLs, s.arch, s.releaseVer)
if err != nil {
return nil, fmt.Errorf("RHSM secrets not found on the host for this baseurl: %s", rr.BaseURLs)
}
dr.SSLCACert = secrets.SSLCACert
dr.SSLClientKey = secrets.SSLClientKey
dr.SSLClientCert = secrets.SSLClientCert
}
dnfRepos[idx] = dr
}
return dnfRepos, nil
}
// Repository configuration for resolving dependencies for a set of packages. A
// Solver needs at least one RPM repository configured to be able to depsolve.
type repoConfig struct {
ID string `json:"id"`
Name string `json:"name,omitempty"`
BaseURLs []string `json:"baseurl,omitempty"`
Metalink string `json:"metalink,omitempty"`
MirrorList string `json:"mirrorlist,omitempty"`
GPGKeys []string `json:"gpgkeys,omitempty"`
CheckGPG bool `json:"gpgcheck"`
CheckRepoGPG bool `json:"check_repogpg"`
IgnoreSSL bool `json:"ignoressl"`
SSLCACert string `json:"sslcacert,omitempty"`
SSLClientKey string `json:"sslclientkey,omitempty"`
SSLClientCert string `json:"sslclientcert,omitempty"`
MetadataExpire string `json:"metadata_expire,omitempty"`
// set the repo hash from the `rpmmd.RepoConfig.Hash()` function
// rather than re-calculating it
repoHash string
}
// use the hash calculated by the `rpmmd.RepoConfig.Hash()`
// function rather than re-implementing the same code
func (r *repoConfig) Hash() string {
return r.repoHash
}
// Helper function for creating a depsolve request payload.
// The request defines a sequence of transactions, each depsolving one of the
// elements of `pkgSets` in the order they appear. The `repoConfigs` are used
// as the base repositories for all transactions. The extra repository configs
// in `pkgsetsRepos` are used for each of the `pkgSets` with matching index.
// The length of `pkgsetsRepos` must match the length of `pkgSets` or be empty
// (nil or empty slice).
//
// NOTE: Due to implementation limitations of DNF and dnf-json, each package set
// in the chain must use all of the repositories used by its predecessor.
// An error is returned if this requirement is not met.
func (s *Solver) makeDepsolveRequest(pkgSets []rpmmd.PackageSet) (*Request, map[string]rpmmd.RepoConfig, error) {
// dedupe repository configurations but maintain order
// the order in which repositories are added to the request affects the
// order of the dependencies in the result
repos := make([]rpmmd.RepoConfig, 0)
rpmRepoMap := make(map[string]rpmmd.RepoConfig)
for _, ps := range pkgSets {
for _, repo := range ps.Repositories {
id := repo.Hash()
if _, ok := rpmRepoMap[id]; !ok {
rpmRepoMap[id] = repo
repos = append(repos, repo)
}
}
}
transactions := make([]transactionArgs, len(pkgSets))
for dsIdx, pkgSet := range pkgSets {
transactions[dsIdx] = transactionArgs{
PackageSpecs: pkgSet.Include,
ExcludeSpecs: pkgSet.Exclude,
}
for _, jobRepo := range pkgSet.Repositories {
transactions[dsIdx].RepoIDs = append(transactions[dsIdx].RepoIDs, jobRepo.Hash())
}
// If more than one transaction, ensure that the transaction uses
// all of the repos from its predecessor
if dsIdx > 0 {
prevRepoIDs := transactions[dsIdx-1].RepoIDs
if len(transactions[dsIdx].RepoIDs) < len(prevRepoIDs) {
return nil, nil, fmt.Errorf("chained packageSet %d does not use all of the repos used by its predecessor", dsIdx)
}
for idx, repoID := range prevRepoIDs {
if repoID != transactions[dsIdx].RepoIDs[idx] {
return nil, nil, fmt.Errorf("chained packageSet %d does not use all of the repos used by its predecessor", dsIdx)
}
}
}
}
dnfRepoMap, err := s.reposFromRPMMD(repos)
if err != nil {
return nil, nil, err
}
args := arguments{
Repos: dnfRepoMap,
Transactions: transactions,
}
req := Request{
Command: "depsolve",
ModulePlatformID: s.modulePlatformID,
Arch: s.arch,
CacheDir: s.GetCacheDir(),
Arguments: args,
}
return &req, rpmRepoMap, nil
}
// Helper function for creating a dump request payload
func (s *Solver) makeDumpRequest(repos []rpmmd.RepoConfig) (*Request, error) {
dnfRepos, err := s.reposFromRPMMD(repos)
if err != nil {
return nil, err
}
req := Request{
Command: "dump",
ModulePlatformID: s.modulePlatformID,
Arch: s.arch,
CacheDir: s.GetCacheDir(),
Arguments: arguments{
Repos: dnfRepos,
},
}
return &req, nil
}
// Helper function for creating a search request payload
func (s *Solver) makeSearchRequest(repos []rpmmd.RepoConfig, packages []string) (*Request, error) {
dnfRepos, err := s.reposFromRPMMD(repos)
if err != nil {
return nil, err
}
req := Request{
Command: "search",
ModulePlatformID: s.modulePlatformID,
Arch: s.arch,
CacheDir: s.GetCacheDir(),
Arguments: arguments{
Repos: dnfRepos,
Search: searchArgs{
Packages: packages,
},
},
}
return &req, nil
}
// convert an internal list of PackageSpecs to the rpmmd equivalent and attach
// key and subscription information based on the repository configs.
func (pkgs packageSpecs) toRPMMD(repos map[string]rpmmd.RepoConfig) []rpmmd.PackageSpec {
rpmDependencies := make([]rpmmd.PackageSpec, len(pkgs))
for i, dep := range pkgs {
repo, ok := repos[dep.RepoID]
if !ok {
panic("dependency repo ID not found in repositories")
}
dep := pkgs[i]
rpmDependencies[i].Name = dep.Name
rpmDependencies[i].Epoch = dep.Epoch
rpmDependencies[i].Version = dep.Version
rpmDependencies[i].Release = dep.Release
rpmDependencies[i].Arch = dep.Arch
rpmDependencies[i].RemoteLocation = dep.RemoteLocation
rpmDependencies[i].Checksum = dep.Checksum
if repo.CheckGPG != nil {
rpmDependencies[i].CheckGPG = *repo.CheckGPG
}
if repo.IgnoreSSL != nil {
rpmDependencies[i].IgnoreSSL = *repo.IgnoreSSL
}
if repo.RHSM {
rpmDependencies[i].Secrets = "org.osbuild.rhsm"
}
}
return rpmDependencies
}
// Request command and arguments for dnf-json
type Request struct {
// Command should be either "depsolve" or "dump"
Command string `json:"command"`
// Platform ID, e.g., "platform:el8"
ModulePlatformID string `json:"module_platform_id"`
// System architecture
Arch string `json:"arch"`
// Cache directory for the DNF metadata
CacheDir string `json:"cachedir"`
// Arguments for the action defined by Command
Arguments arguments `json:"arguments"`
}
// Hash returns a hash of the unique aspects of the Request
//
//nolint:errcheck
func (r *Request) Hash() string {
h := sha256.New()
h.Write([]byte(r.Command))
h.Write([]byte(r.ModulePlatformID))
h.Write([]byte(r.Arch))
for _, repo := range r.Arguments.Repos {
h.Write([]byte(repo.Hash()))
}
h.Write([]byte(fmt.Sprintf("%T", r.Arguments.Search.Latest)))
h.Write([]byte(strings.Join(r.Arguments.Search.Packages, "")))
return fmt.Sprintf("%x", h.Sum(nil))
}
// arguments for a dnf-json request
type arguments struct {
// Repositories to use for depsolving
Repos []repoConfig `json:"repos"`
// Search terms to use with search command
Search searchArgs `json:"search"`
// Depsolve package sets and repository mappings for this request
Transactions []transactionArgs `json:"transactions"`
}
type searchArgs struct {
// Only include latest NEVRA when true
Latest bool `json:"latest"`
// List of package name globs to search for
// If it has '*' it is passed to dnf glob search, if it has *name* it is passed
// to substring matching, and if it has neither, an exact match is expected.
Packages []string `json:"packages"`
}
type transactionArgs struct {
// Packages to depsolve
PackageSpecs []string `json:"package-specs"`
// Packages to exclude from results
ExcludeSpecs []string `json:"exclude-specs"`
// IDs of repositories to use for this depsolve
RepoIDs []string `json:"repo-ids"`
}
type packageSpecs []PackageSpec
// Package specification
type PackageSpec struct {
Name string `json:"name"`
Epoch uint `json:"epoch"`
Version string `json:"version,omitempty"`
Release string `json:"release,omitempty"`
Arch string `json:"arch,omitempty"`
RepoID string `json:"repo_id,omitempty"`
Path string `json:"path,omitempty"`
RemoteLocation string `json:"remote_location,omitempty"`
Checksum string `json:"checksum,omitempty"`
Secrets string `json:"secrets,omitempty"`
}
// dnf-json error structure
type Error struct {
Kind string `json:"kind"`
Reason string `json:"reason"`
}
func (err Error) Error() string {
return fmt.Sprintf("DNF error occurred: %s: %s", err.Kind, err.Reason)
}
// parseError parses the response from dnf-json into the Error type and appends
// the name and URL of a repository to all detected repository IDs in the
// message.
func parseError(data []byte, repos []repoConfig) Error {
var e Error
if err := json.Unmarshal(data, &e); err != nil {
// dumping the error into the Reason can get noisy, but it's good for troubleshooting
return Error{
Kind: "InternalError",
Reason: fmt.Sprintf("Failed to unmarshal dnf-json error output %q: %s", string(data), err.Error()),
}
}
// append the URL (or metalink, mirrorlist, etc.) to any instance of a repository ID
for _, repo := range repos {
idstr := fmt.Sprintf("'%s'", repo.ID)
var nameURL string
if len(repo.BaseURLs) > 0 {
nameURL = strings.Join(repo.BaseURLs, ",")
} else if len(repo.Metalink) > 0 {
nameURL = repo.Metalink
} else if len(repo.MirrorList) > 0 {
nameURL = repo.MirrorList
}
if len(repo.Name) > 0 {
nameURL = fmt.Sprintf("%s: %s", repo.Name, nameURL)
}
e.Reason = strings.Replace(e.Reason, idstr, fmt.Sprintf("%s [%s]", idstr, nameURL), -1)
}
return e
}
func ParseError(data []byte) Error {
var e Error
if err := json.Unmarshal(data, &e); err != nil {
// dumping the error into the Reason can get noisy, but it's good for troubleshooting
return Error{
Kind: "InternalError",
Reason: fmt.Sprintf("Failed to unmarshal dnf-json error output %q: %s", string(data), err.Error()),
}
}
return e
}
func run(dnfJsonCmd []string, req *Request) ([]byte, error) {
if len(dnfJsonCmd) == 0 {
return nil, fmt.Errorf("dnf-json command undefined")
}
ex := dnfJsonCmd[0]
args := make([]string, len(dnfJsonCmd)-1)
if len(dnfJsonCmd) > 1 {
args = dnfJsonCmd[1:]
}
cmd := exec.Command(ex, args...)
stdin, err := cmd.StdinPipe()
if err != nil {
return nil, err
}
cmd.Stderr = os.Stderr
stdout := new(bytes.Buffer)
cmd.Stdout = stdout
err = cmd.Start()
if err != nil {
return nil, err
}
err = json.NewEncoder(stdin).Encode(req)
if err != nil {
return nil, err
}
stdin.Close()
err = cmd.Wait()
output := stdout.Bytes()
if runError, ok := err.(*exec.ExitError); ok && runError.ExitCode() != 0 {
return nil, parseError(output, req.Arguments.Repos)
}
return output, nil
}
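
To illustrate the workflow from the package comment, a caller configures a Solver and depsolves one or more package sets. The repository URL, distro values, and cache path below are placeholders, and the import paths are assumed from the vendor layout:

package main

import (
    "fmt"

    "github.com/osbuild/images/internal/dnfjson" // assumed path
    "github.com/osbuild/images/pkg/rpmmd"
)

func main() {
    solver := dnfjson.NewSolver("platform:f38", "38", "x86_64", "fedora-38", "/var/tmp/rpmmd-cache")

    repos := []rpmmd.RepoConfig{
        {Name: "fedora", Metalink: "https://example.com/metalink?repo=fedora-38&arch=x86_64"},
    }
    pkgSets := []rpmmd.PackageSet{
        {Include: []string{"@core", "vim-minimal"}, Exclude: []string{"nano"}, Repositories: repos},
    }

    specs, err := solver.Depsolve(pkgSets)
    if err != nil {
        panic(err)
    }
    for _, p := range specs {
        fmt.Printf("%s-%s-%s.%s\n", p.Name, p.Version, p.Release, p.Arch)
    }
}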


@ -0,0 +1,13 @@
package environment
type Azure struct {
BaseEnvironment
}
func (p *Azure) GetPackages() []string {
return []string{"WALinuxAgent"}
}
func (p *Azure) GetServices() []string {
return []string{"waagent"}
}


@ -0,0 +1,13 @@
package environment
type EC2 struct {
BaseEnvironment
}
func (p *EC2) GetPackages() []string {
return []string{"cloud-init"}
}
func (p *EC2) GetServices() []string {
return []string{"cloud-init.service"}
}


@ -0,0 +1,25 @@
package environment
import "github.com/osbuild/images/pkg/rpmmd"
type Environment interface {
GetPackages() []string
GetRepos() []rpmmd.RepoConfig
GetServices() []string
}
type BaseEnvironment struct {
Repos []rpmmd.RepoConfig
}
func (p BaseEnvironment) GetPackages() []string {
return []string{}
}
func (p BaseEnvironment) GetRepos() []rpmmd.RepoConfig {
return p.Repos
}
func (p BaseEnvironment) GetServices() []string {
return []string{}
}


@ -0,0 +1,6 @@
package environment
// TODO
type GCP struct {
BaseEnvironment
}


@ -0,0 +1,6 @@
package environment
// TODO
type OpenStack struct {
BaseEnvironment
}


@ -0,0 +1,6 @@
package environment
// TODO
type VSphere struct {
BaseEnvironment
}

15
vendor/github.com/osbuild/images/internal/fdo/fdo.go generated vendored Normal file

@ -0,0 +1,15 @@
package fdo
import "github.com/osbuild/images/pkg/blueprint"
type Options struct {
ManufacturingServerURL string
DiunPubKeyInsecure string
DiunPubKeyHash string
DiunPubKeyRootCerts string
}
func FromBP(bpFDO blueprint.FDOCustomization) *Options {
fdo := Options(bpFDO)
return &fdo
}


@ -0,0 +1,34 @@
package fsnode
import "os"
type Directory struct {
baseFsNode
ensureParentDirs bool
}
func (d *Directory) IsDir() bool {
return true
}
func (d *Directory) EnsureParentDirs() bool {
if d == nil {
return false
}
return d.ensureParentDirs
}
// NewDirectory creates a new directory with the given path, mode, user and group.
// user and group can be either a string (user name/group name), an int64 (UID/GID) or nil.
func NewDirectory(path string, mode *os.FileMode, user interface{}, group interface{}, ensureParentDirs bool) (*Directory, error) {
baseNode, err := newBaseFsNode(path, mode, user, group)
if err != nil {
return nil, err
}
return &Directory{
baseFsNode: *baseNode,
ensureParentDirs: ensureParentDirs,
}, nil
}


@ -0,0 +1,36 @@
package fsnode
import (
"os"
)
type File struct {
baseFsNode
data []byte
}
func (f *File) IsDir() bool {
return false
}
func (f *File) Data() []byte {
if f == nil {
return nil
}
return f.data
}
// NewFile creates a new file with the given path, data, mode, user and group.
// user and group can be either a string (user name/group name), an int64 (UID/GID) or nil.
func NewFile(path string, mode *os.FileMode, user interface{}, group interface{}, data []byte) (*File, error) {
baseNode, err := newBaseFsNode(path, mode, user, group)
if err != nil {
return nil, err
}
return &File{
baseFsNode: *baseNode,
data: data,
}, nil
}


@ -0,0 +1,133 @@
package fsnode
import (
"fmt"
"os"
"path"
"regexp"
)
const usernameRegex = `^[A-Za-z0-9_.][A-Za-z0-9_.-]{0,31}$`
const groupnameRegex = `^[A-Za-z0-9_][A-Za-z0-9_-]{0,31}$`
type FsNode interface {
Path() string
Mode() *os.FileMode
// User can return either a string (user name/group name), an int64 (UID/GID) or nil
User() interface{}
// Group can return either a string (user name/group name), an int64 (UID/GID) or nil
Group() interface{}
IsDir() bool
}
type baseFsNode struct {
path string
mode *os.FileMode
user interface{}
group interface{}
}
func (f *baseFsNode) Path() string {
if f == nil {
return ""
}
return f.path
}
func (f *baseFsNode) Mode() *os.FileMode {
if f == nil {
return nil
}
return f.mode
}
// User can return either a string (user name) or an int64 (UID)
func (f *baseFsNode) User() interface{} {
if f == nil {
return nil
}
return f.user
}
// Group can return either a string (group name) or an int64 (GID)
func (f *baseFsNode) Group() interface{} {
if f == nil {
return nil
}
return f.group
}
func newBaseFsNode(path string, mode *os.FileMode, user interface{}, group interface{}) (*baseFsNode, error) {
node := &baseFsNode{
path: path,
mode: mode,
user: user,
group: group,
}
err := node.validate()
if err != nil {
return nil, err
}
return node, nil
}
func (f *baseFsNode) validate() error {
// Check that the path is valid
if f.path == "" {
return fmt.Errorf("path must not be empty")
}
if f.path[0] != '/' {
return fmt.Errorf("path must be absolute")
}
if f.path[len(f.path)-1] == '/' {
return fmt.Errorf("path must not end with a slash")
}
if f.path != path.Clean(f.path) {
return fmt.Errorf("path must be canonical")
}
// Check that the mode is valid
if f.mode != nil && *f.mode&os.ModeType != 0 {
return fmt.Errorf("mode must not contain file type bits")
}
// Check that the user and group are valid
switch user := f.user.(type) {
case string:
nameRegex := regexp.MustCompile(usernameRegex)
if !nameRegex.MatchString(user) {
return fmt.Errorf("user name %q doesn't conform to validating regex (%s)", user, nameRegex.String())
}
case int64:
if user < 0 {
return fmt.Errorf("user ID must be non-negative")
}
case nil:
// user is not set
default:
return fmt.Errorf("user must be either a string or an int64, got %T", user)
}
switch group := f.group.(type) {
case string:
nameRegex := regexp.MustCompile(groupnameRegex)
if !nameRegex.MatchString(group) {
return fmt.Errorf("group name %q doesn't conform to validating regex (%s)", group, nameRegex.String())
}
case int64:
if group < 0 {
return fmt.Errorf("group ID must be non-negative")
}
case nil:
// group is not set
default:
return fmt.Errorf("group must be either a string or an int64, got %T", group)
}
return nil
}
func (f *baseFsNode) IsDir() bool {
panic("IsDir() called on baseFsNode")
}
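
A short illustration of the two constructors and the validation rules above (absolute canonical path, no file-type bits in the mode, owner/group given as a name or a numeric ID); the import path is assumed:

package main

import (
    "fmt"
    "os"

    "github.com/osbuild/images/internal/fsnode" // assumed path
)

func main() {
    fileMode := os.FileMode(0o644)
    motd, err := fsnode.NewFile("/etc/motd", &fileMode, "root", int64(0), []byte("welcome\n"))
    if err != nil {
        panic(err)
    }
    fmt.Println(motd.Path(), motd.IsDir()) // "/etc/motd false"

    dirMode := os.FileMode(0o755)
    dropin, err := fsnode.NewDirectory("/etc/systemd/system/foo.service.d", &dirMode, nil, nil, true)
    if err != nil {
        panic(err)
    }
    fmt.Println(dropin.Path(), dropin.EnsureParentDirs()) // "... true"
}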


@ -0,0 +1,31 @@
package ignition
import (
"encoding/base64"
"errors"
"github.com/osbuild/images/pkg/blueprint"
)
type FirstBootOptions struct {
ProvisioningURL string
}
func FirstbootOptionsFromBP(bpIgnitionFirstboot blueprint.FirstBootIgnitionCustomization) *FirstBootOptions {
ignition := FirstBootOptions(bpIgnitionFirstboot)
return &ignition
}
type EmbeddedOptions struct {
Config string
}
func EmbeddedOptionsFromBP(bpIgnitionEmbedded blueprint.EmbeddedIgnitionCustomization) (*EmbeddedOptions, error) {
decodedConfig, err := base64.StdEncoding.DecodeString(bpIgnitionEmbedded.Config)
if err != nil {
return nil, errors.New("can't decode Ignition config")
}
return &EmbeddedOptions{
Config: string(decodedConfig),
}, nil
}
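
For example, the embedded variant expects the blueprint to carry a base64-encoded Ignition config and returns the decoded string (illustrative; import paths assumed):

package main

import (
    "encoding/base64"
    "fmt"

    "github.com/osbuild/images/internal/ignition" // assumed path
    "github.com/osbuild/images/pkg/blueprint"
)

func main() {
    raw := `{"ignition": {"version": "3.3.0"}}`
    bp := blueprint.EmbeddedIgnitionCustomization{
        Config: base64.StdEncoding.EncodeToString([]byte(raw)),
    }
    opts, err := ignition.EmbeddedOptionsFromBP(bp)
    if err != nil {
        panic(err)
    }
    fmt.Println(opts.Config) // the decoded JSON configuration
}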


@ -0,0 +1,225 @@
// dnfjson_mock provides data and methods for testing the dnfjson package.
package dnfjson_mock
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"time"
"github.com/osbuild/images/internal/dnfjson"
"github.com/osbuild/images/pkg/rpmmd"
)
func generatePackageList() rpmmd.PackageList {
baseTime, err := time.Parse(time.RFC3339, "2006-01-02T15:04:05Z")
if err != nil {
panic(err)
}
var packageList rpmmd.PackageList
for i := 0; i < 22; i++ {
basePackage := rpmmd.Package{
Name: fmt.Sprintf("package%d", i),
Summary: fmt.Sprintf("pkg%d sum", i),
Description: fmt.Sprintf("pkg%d desc", i),
URL: fmt.Sprintf("https://pkg%d.example.com", i),
Epoch: 0,
Version: fmt.Sprintf("%d.0", i),
Release: fmt.Sprintf("%d.fc30", i),
Arch: "x86_64",
BuildTime: baseTime.AddDate(0, i, 0),
License: "MIT",
}
secondBuild := basePackage
secondBuild.Version = fmt.Sprintf("%d.1", i)
secondBuild.BuildTime = basePackage.BuildTime.AddDate(0, 0, 1)
packageList = append(packageList, basePackage, secondBuild)
}
return packageList
}
// generateSearchResults creates results for use with the dnfjson search command
// which is used for listing a subset of modules and projects.
//
// The map key is a comma-separated list of the packages requested
// If no packages are included it returns all 22 packages, same as the mock dump
//
// nonexistingpkg returns an empty list
// badpackage1 returns a fetch error, same as when the package name is unknown
// baddepsolve returns package1; the test then tries to depsolve package1 using BadDepsolve(),
// which will return a depsolve error.
func generateSearchResults() map[string]interface{} {
allPackages := generatePackageList()
// This includes package16, package2, package20, and package21
var wildcardResults rpmmd.PackageList
wildcardResults = append(wildcardResults, allPackages[32], allPackages[33])
wildcardResults = append(wildcardResults, allPackages[4], allPackages[5])
for i := 40; i < 44; i++ {
wildcardResults = append(wildcardResults, allPackages[i])
}
fetchError := dnfjson.Error{
Kind: "FetchError",
Reason: "There was a problem when fetching packages.",
}
return map[string]interface{}{
"": allPackages,
"*": allPackages,
"nonexistingpkg": rpmmd.PackageList{},
"package1": rpmmd.PackageList{allPackages[2], allPackages[3]},
"package1,package2": rpmmd.PackageList{allPackages[2], allPackages[3], allPackages[4], allPackages[5]},
"package2*,package16": wildcardResults,
"package16": rpmmd.PackageList{allPackages[32], allPackages[33]},
"badpackage1": fetchError,
"baddepsolve": rpmmd.PackageList{allPackages[2], allPackages[3]},
}
}
func createBaseDepsolveFixture() []dnfjson.PackageSpec {
return []dnfjson.PackageSpec{
{
Name: "dep-package3",
Epoch: 7,
Version: "3.0.3",
Release: "1.fc30",
Arch: "x86_64",
RepoID: "REPOID", // added by mock-dnf-json
},
{
Name: "dep-package1",
Epoch: 0,
Version: "1.33",
Release: "2.fc30",
Arch: "x86_64",
RepoID: "REPOID", // added by mock-dnf-json
},
{
Name: "dep-package2",
Epoch: 0,
Version: "2.9",
Release: "1.fc30",
Arch: "x86_64",
RepoID: "REPOID", // added by mock-dnf-json
},
}
}
// BaseDeps is the expected list of dependencies (as rpmmd.PackageSpec) from
// the Base ResponseGenerator
func BaseDeps() []rpmmd.PackageSpec {
return []rpmmd.PackageSpec{
{
Name: "dep-package3",
Epoch: 7,
Version: "3.0.3",
Release: "1.fc30",
Arch: "x86_64",
CheckGPG: true,
},
{
Name: "dep-package1",
Epoch: 0,
Version: "1.33",
Release: "2.fc30",
Arch: "x86_64",
CheckGPG: true,
},
{
Name: "dep-package2",
Epoch: 0,
Version: "2.9",
Release: "1.fc30",
Arch: "x86_64",
CheckGPG: true,
},
}
}
type ResponseGenerator func(string) string
func Base(tmpdir string) string {
data := map[string]interface{}{
"depsolve": createBaseDepsolveFixture(),
"dump": generatePackageList(),
"search": generateSearchResults(),
}
path := filepath.Join(tmpdir, "base.json")
write(data, path)
return path
}
func NonExistingPackage(tmpdir string) string {
deps := dnfjson.Error{
Kind: "MarkingErrors",
Reason: "Error occurred when marking packages for installation: Problems in request:\nmissing packages: fash",
}
data := map[string]interface{}{
"depsolve": deps,
}
path := filepath.Join(tmpdir, "notexist.json")
write(data, path)
return path
}
func BadDepsolve(tmpdir string) string {
deps := dnfjson.Error{
Kind: "DepsolveError",
Reason: "There was a problem depsolving ['go2rpm']: \n Problem: conflicting requests\n - nothing provides askalono-cli needed by go2rpm-1-4.fc31.noarch",
}
data := map[string]interface{}{
"depsolve": deps,
"dump": generatePackageList(),
"search": generateSearchResults(),
}
path := filepath.Join(tmpdir, "baddepsolve.json")
write(data, path)
return path
}
func BadFetch(tmpdir string) string {
deps := dnfjson.Error{
Kind: "DepsolveError",
Reason: "There was a problem depsolving ['go2rpm']: \n Problem: conflicting requests\n - nothing provides askalono-cli needed by go2rpm-1-4.fc31.noarch",
}
pkgs := dnfjson.Error{
Kind: "FetchError",
Reason: "There was a problem when fetching packages.",
}
data := map[string]interface{}{
"depsolve": deps,
"dump": pkgs,
"search": generateSearchResults(),
}
path := filepath.Join(tmpdir, "badfetch.json")
write(data, path)
return path
}
func marshal(data interface{}) []byte {
jdata, err := json.Marshal(data)
if err != nil {
panic(err)
}
return jdata
}
func write(data interface{}, path string) {
fp, err := os.Create(path)
if err != nil {
panic(err)
}
if _, err := fp.Write(marshal(data)); err != nil {
panic(err)
}
}
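
// A short sketch (illustrative, not part of this change), written as if it were a
// separate file in the same package, of how a test could consume these fixtures:
// Base writes a JSON file with "depsolve", "dump" and "search" sections that the
// dnf-json mock is expected to replay.
package dnfjson_mock

import (
	"fmt"
	"os"
)

func sketchBaseFixture() {
	tmpdir, err := os.MkdirTemp("", "dnfjson-mock")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(tmpdir)

	path := Base(tmpdir) // writes <tmpdir>/base.json
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes of fixture data to %s\n", len(data), path)
}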

View file

@ -0,0 +1,72 @@
package oscap
import (
"strings"
)
type Profile string
func (p Profile) String() string {
return string(p)
}
const (
AnssiBp28Enhanced Profile = "xccdf_org.ssgproject.content_profile_anssi_bp28_enhanced"
AnssiBp28High Profile = "xccdf_org.ssgproject.content_profile_anssi_bp28_high"
AnssiBp28Intermediary Profile = "xccdf_org.ssgproject.content_profile_anssi_bp28_intermediary"
AnssiBp28Minimal Profile = "xccdf_org.ssgproject.content_profile_anssi_bp28_minimal"
Cis Profile = "xccdf_org.ssgproject.content_profile_cis"
CisServerL1 Profile = "xccdf_org.ssgproject.content_profile_cis_server_l1"
CisWorkstationL1 Profile = "xccdf_org.ssgproject.content_profile_cis_workstation_l1"
CisWorkstationL2 Profile = "xccdf_org.ssgproject.content_profile_cis_workstation_l2"
Cui Profile = "xccdf_org.ssgproject.content_profile_cui"
E8 Profile = "xccdf_org.ssgproject.content_profile_e8"
Hippa Profile = "xccdf_org.ssgproject.content_profile_hipaa"
IsmO Profile = "xccdf_org.ssgproject.content_profile_ism_o"
Ospp Profile = "xccdf_org.ssgproject.content_profile_ospp"
PciDss Profile = "xccdf_org.ssgproject.content_profile_pci-dss"
Standard Profile = "xccdf_org.ssgproject.content_profile_standard"
Stig Profile = "xccdf_org.ssgproject.content_profile_stig"
StigGui Profile = "xccdf_org.ssgproject.content_profile_stig_gui"
// datastream fallbacks
defaultFedoraDatastream string = "/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml"
defaultCentos8Datastream string = "/usr/share/xml/scap/ssg/content/ssg-centos8-ds.xml"
defaultCentos9Datastream string = "/usr/share/xml/scap/ssg/content/ssg-cs9-ds.xml"
defaultRHEL8Datastream string = "/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml"
defaultRHEL9Datastream string = "/usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml"
)
func DefaultFedoraDatastream() string {
return defaultFedoraDatastream
}
func DefaultRHEL8Datastream(isRHEL bool) string {
if isRHEL {
return defaultRHEL8Datastream
}
return defaultCentos8Datastream
}
func DefaultRHEL9Datastream(isRHEL bool) string {
if isRHEL {
return defaultRHEL9Datastream
}
return defaultCentos9Datastream
}
func IsProfileAllowed(profile string, allowlist []Profile) bool {
for _, a := range allowlist {
if a.String() == profile {
return true
}
// this enables a user to specify
// the full profile or the short
// profile id
if strings.HasSuffix(a.String(), profile) {
return true
}
}
return false
}
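
// A small sketch (illustrative, not part of this change), written as if it were a
// separate file in the same package, of the allowlist matching above: both the
// full XCCDF profile ID and its short suffix are accepted.
package oscap

import "fmt"

func sketchProfileAllowlist() {
	allow := []Profile{Cis, Ospp}

	fmt.Println(IsProfileAllowed(Cis.String(), allow)) // true: exact match
	fmt.Println(IsProfileAllowed("cis", allow))        // true: suffix match on the short id
	fmt.Println(IsProfileAllowed("stig", allow))       // false: not in the allowlist
}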

View file

@ -0,0 +1,54 @@
package pathpolicy
import (
"fmt"
"path"
)
type PathPolicy struct {
Deny bool // explicitly do not allow this entry
Exact bool // require an exact match, no subdirs
}
type PathPolicies = PathTrie
// Create a new PathPolicies trie from a map of path to PathPolicy
func NewPathPolicies(entries map[string]PathPolicy) *PathPolicies {
noType := make(map[string]interface{}, len(entries))
for k, v := range entries {
noType[k] = v
}
return NewPathTrieFromMap(noType)
}
// Check a given path against the PathPolicies
func (pol *PathPolicies) Check(fsPath string) error {
// Quickly check we have a path and it is absolute
if fsPath == "" || fsPath[0] != '/' {
return fmt.Errorf("path must be absolute")
}
// ensure that only clean paths are valid
if fsPath != path.Clean(fsPath) {
return fmt.Errorf("path must be canonical")
}
node, left := pol.Lookup(fsPath)
policy, ok := node.Payload.(PathPolicy)
if !ok {
panic("programming error: invalid path trie payload")
}
// 1) path is explicitly not allowed or
// 2) a subpath was matched but an exact match is required
if policy.Deny || (policy.Exact && len(left) > 0) {
return fmt.Errorf("path '%s ' is not allowed", fsPath)
}
// exact match or recursive path allowed
return nil
}
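
// A minimal sketch (illustrative, not part of this change), written as if it were
// a separate file in the same package, of how Check interprets the policy flags:
// "/" is allowed only exactly, "/var" recursively, and "/boot/efi" is denied.
package pathpolicy

import "fmt"

func sketchPolicyCheck() {
	policies := NewPathPolicies(map[string]PathPolicy{
		"/":         {Exact: true},
		"/var":      {},
		"/boot/efi": {Deny: true},
	})

	for _, p := range []string{"/", "/var/log", "/boot/efi", "/home"} {
		// "/" and "/var/log" pass; "/boot/efi" is denied; "/home" fails because
		// the root entry requires an exact match.
		fmt.Println(p, "->", policies.Check(p))
	}
}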

View file

@ -0,0 +1,123 @@
package pathpolicy
import (
"sort"
"strings"
)
// pathTrieSplitPath splits the path into its individual components. Returns the
// empty list if the path is just the absolute root, i.e. "/".
func pathTrieSplitPath(path string) []string {
path = strings.Trim(path, "/")
if path == "" {
return []string{}
}
return strings.Split(path, "/")
}
type PathTrie struct {
Name []string
Paths []*PathTrie
Payload interface{}
}
// match checks if the given trie is a prefix of path
func (trie *PathTrie) match(path []string) bool {
if len(trie.Name) > len(path) {
return false
}
for i := range trie.Name {
if path[i] != trie.Name[i] {
return false
}
}
return true
}
func (trie *PathTrie) get(path []string) (*PathTrie, []string) {
if len(path) < 1 {
panic("programming error: expected root node")
}
var node *PathTrie
for i := range trie.Paths {
if trie.Paths[i].match(path) {
node = trie.Paths[i]
break
}
}
// no subpath match, we are the best match
if node == nil {
return trie, path
}
// node, or one of its sub-nodes, is a match
prefix := len(node.Name)
// the node is a perfect match, return it
if len(path) == prefix {
return node, nil
}
// check if any sub-path's of node match
return node.get(path[prefix:])
}
func (trie *PathTrie) add(path []string) *PathTrie {
node := &PathTrie{Name: path}
if trie.Paths == nil {
trie.Paths = make([]*PathTrie, 0, 1)
}
trie.Paths = append(trie.Paths, node)
return node
}
// Construct a new trie from a map of paths to their payloads.
// Returns the root node of the trie.
func NewPathTrieFromMap(entries map[string]interface{}) *PathTrie {
root := &PathTrie{Name: []string{}}
keys := make([]string, 0, len(entries))
for k := range entries {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
node, left := root.Lookup(k)
if len(left) > 0 {
node = node.add(left)
}
node.Payload = entries[k]
}
return root
}
// Lookup returns the node that is the prefix of path and
// the unmatched path segment. Must be called on the root
// trie node.
func (root *PathTrie) Lookup(path string) (*PathTrie, []string) {
if len(root.Name) != 0 {
panic("programming error: lookup on non-root trie node")
}
elements := pathTrieSplitPath(path)
if len(elements) == 0 {
return root, elements
}
return root.get(elements)
}
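
// A short sketch (illustrative, not part of this change), written as if it were a
// separate file in the same package, of the Lookup semantics above: it returns
// the deepest matching node together with the path elements left unmatched.
package pathpolicy

import "fmt"

func sketchTrieLookup() {
	trie := NewPathTrieFromMap(map[string]interface{}{
		"/":        "root",
		"/var":     "var",
		"/var/log": "var-log",
	})

	node, left := trie.Lookup("/var/log/journal")
	fmt.Println(node.Payload, left) // var-log [journal]

	node, left = trie.Lookup("/srv/data")
	fmt.Println(node.Payload, left) // root [srv data]
}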

View file

@ -0,0 +1,31 @@
package pathpolicy
// MountpointPolicies is a set of default mountpoint policies used for filesystem customizations
var MountpointPolicies = NewPathPolicies(map[string]PathPolicy{
"/": {Exact: true},
"/boot": {Exact: true},
"/var": {},
"/opt": {},
"/srv": {},
"/usr": {},
"/app": {},
"/data": {},
"/home": {},
"/tmp": {},
})
// CustomDirectoriesPolicies is a set of default policies for custom directories
var CustomDirectoriesPolicies = NewPathPolicies(map[string]PathPolicy{
"/": {Deny: true},
"/etc": {},
})
// CustomFilesPolicies is a set of default policies for custom files
var CustomFilesPolicies = NewPathPolicies(map[string]PathPolicy{
"/": {Deny: true},
"/etc": {},
"/etc/fstab": {Deny: true},
"/etc/shadow": {Deny: true},
"/etc/passwd": {Deny: true},
"/etc/group": {Deny: true},
})

View file

@ -0,0 +1,11 @@
package shell
type EnvironmentVariable struct {
Key string
Value string
}
type InitFile struct {
Filename string
Variables []EnvironmentVariable
}

View file

@ -0,0 +1,40 @@
package users
import "github.com/osbuild/images/pkg/blueprint"
type User struct {
Name string
Description *string
Password *string
Key *string
Home *string
Shell *string
Groups []string
UID *int
GID *int
}
type Group struct {
Name string
GID *int
}
func UsersFromBP(userCustomizations []blueprint.UserCustomization) []User {
users := make([]User, len(userCustomizations))
for idx := range userCustomizations {
// currently, they have the same structure, so we convert directly
// this will fail to compile as soon as one of the two changes
users[idx] = User(userCustomizations[idx])
}
return users
}
func GroupsFromBP(groupCustomizations []blueprint.GroupCustomization) []Group {
groups := make([]Group, len(groupCustomizations))
for idx := range groupCustomizations {
// currently, they have the same structure, so we convert directly
// this will fail to compile as soon as one of the two changes
groups[idx] = Group(groupCustomizations[idx])
}
return groups
}

View file

@ -0,0 +1,7 @@
package workload
// TODO: replace the Anaconda pipeline by the OS pipeline with the
// anaconda workload.
type Anaconda struct {
BaseWorkload
}

View file

@ -0,0 +1,22 @@
package workload
type Custom struct {
BaseWorkload
Packages []string
Services []string
DisabledServices []string
}
func (p *Custom) GetPackages() []string {
return p.Packages
}
func (p *Custom) GetServices() []string {
return p.Services
}
// TODO: Does this belong here? What kind of workload requires
// services to be disabled?
func (p *Custom) GetDisabledServices() []string {
return p.DisabledServices
}

View file

@ -0,0 +1,6 @@
package workload
// TODO!
type SAP struct {
BaseWorkload
}

View file

@ -0,0 +1,7 @@
package workload
// TODO: replace the CommitServerTree pipeline by the OS pipeline with the
// StaticWebserver workload.
type StaticWebserver struct {
BaseWorkload
}

View file

@ -0,0 +1,30 @@
package workload
import "github.com/osbuild/images/pkg/rpmmd"
type Workload interface {
GetPackages() []string
GetRepos() []rpmmd.RepoConfig
GetServices() []string
GetDisabledServices() []string
}
type BaseWorkload struct {
Repos []rpmmd.RepoConfig
}
func (p BaseWorkload) GetPackages() []string {
return []string{}
}
func (p BaseWorkload) GetRepos() []rpmmd.RepoConfig {
return p.Repos
}
func (p BaseWorkload) GetServices() []string {
return []string{}
}
func (p BaseWorkload) GetDisabledServices() []string {
return []string{}
}

View file

@ -0,0 +1,31 @@
package artifact
type Artifact struct {
export string
filename string
mimeType string
}
func New(export, filename string, mimeType *string) *Artifact {
artifact := &Artifact{
export: export,
filename: filename,
mimeType: "application/octet-stream",
}
if mimeType != nil {
artifact.mimeType = *mimeType
}
return artifact
}
func (a *Artifact) Export() string {
return a.export
}
func (a *Artifact) Filename() string {
return a.filename
}
func (a *Artifact) MIMEType() string {
return a.mimeType
}

View file

@ -0,0 +1,184 @@
// Package blueprint contains primitives for representing weldr blueprints
package blueprint
import (
"encoding/json"
"fmt"
"github.com/osbuild/images/pkg/crypt"
"github.com/coreos/go-semver/semver"
)
// A Blueprint is a high-level description of an image.
type Blueprint struct {
Name string `json:"name" toml:"name"`
Description string `json:"description" toml:"description"`
Version string `json:"version,omitempty" toml:"version,omitempty"`
Packages []Package `json:"packages" toml:"packages"`
Modules []Package `json:"modules" toml:"modules"`
Groups []Group `json:"groups" toml:"groups"`
Containers []Container `json:"containers,omitempty" toml:"containers,omitempty"`
Customizations *Customizations `json:"customizations,omitempty" toml:"customizations"`
Distro string `json:"distro" toml:"distro"`
}
type Change struct {
Commit string `json:"commit" toml:"commit"`
Message string `json:"message" toml:"message"`
Revision *int `json:"revision" toml:"revision"`
Timestamp string `json:"timestamp" toml:"timestamp"`
Blueprint Blueprint `json:"-" toml:"-"`
}
// A Package specifies an RPM package.
type Package struct {
Name string `json:"name" toml:"name"`
Version string `json:"version,omitempty" toml:"version,omitempty"`
}
// A Group specifies a package group.
type Group struct {
Name string `json:"name" toml:"name"`
}
type Container struct {
Source string `json:"source" toml:"source"`
Name string `json:"name,omitempty" toml:"name,omitempty"`
TLSVerify *bool `json:"tls-verify,omitempty" toml:"tls-verify,omitempty"`
}
// DeepCopy returns a deep copy of the blueprint
// This uses json.Marshal and Unmarshal which are not very efficient
func (b *Blueprint) DeepCopy() Blueprint {
bpJSON, err := json.Marshal(b)
if err != nil {
panic(err)
}
var bp Blueprint
err = json.Unmarshal(bpJSON, &bp)
if err != nil {
panic(err)
}
return bp
}
// Initialize ensures that the blueprint has sane defaults for any missing fields
func (b *Blueprint) Initialize() error {
if len(b.Name) == 0 {
return fmt.Errorf("empty blueprint name not allowed")
}
if b.Packages == nil {
b.Packages = []Package{}
}
if b.Modules == nil {
b.Modules = []Package{}
}
if b.Groups == nil {
b.Groups = []Group{}
}
if b.Containers == nil {
b.Containers = []Container{}
}
if b.Version == "" {
b.Version = "0.0.0"
}
// Return an error if the version is not valid
_, err := semver.NewVersion(b.Version)
if err != nil {
return fmt.Errorf("Invalid 'version', must use Semantic Versioning: %s", err.Error())
}
err = b.CryptPasswords()
if err != nil {
return fmt.Errorf("Error hashing passwords: %s", err.Error())
}
return nil
}
// BumpVersion increments the previous blueprint's version
// If the old version string is not valid semver it will use the new version as-is
// This assumes that the new blueprint's version has already been validated via Initialize
func (b *Blueprint) BumpVersion(old string) {
var ver *semver.Version
ver, err := semver.NewVersion(old)
if err != nil {
return
}
ver.BumpPatch()
b.Version = ver.String()
}
// packages, modules, and groups all resolve to rpm packages right now. This
// function returns a combined list of "name-version" strings.
func (b *Blueprint) GetPackages() []string {
return b.GetPackagesEx(true)
}
func (b *Blueprint) GetPackagesEx(bootable bool) []string {
packages := []string{}
for _, pkg := range b.Packages {
packages = append(packages, pkg.ToNameVersion())
}
for _, pkg := range b.Modules {
packages = append(packages, pkg.ToNameVersion())
}
for _, group := range b.Groups {
packages = append(packages, "@"+group.Name)
}
if bootable {
kc := b.Customizations.GetKernel()
kpkg := Package{Name: kc.Name}
packages = append(packages, kpkg.ToNameVersion())
}
return packages
}
func (p Package) ToNameVersion() string {
// Omit the version when it is empty or a wildcard to prevent every package with the name as a prefix from being installed
if p.Version == "*" || p.Version == "" {
return p.Name
}
return p.Name + "-" + p.Version
}
// CryptPasswords ensures that all blueprint passwords are hashed
func (b *Blueprint) CryptPasswords() error {
if b.Customizations == nil {
return nil
}
// Any passwords for users?
for i := range b.Customizations.User {
// Missing or empty password
if b.Customizations.User[i].Password == nil {
continue
}
// Prevent empty password from being hashed
if len(*b.Customizations.User[i].Password) == 0 {
b.Customizations.User[i].Password = nil
continue
}
if !crypt.PasswordIsCrypted(*b.Customizations.User[i].Password) {
pw, err := crypt.CryptSHA512(*b.Customizations.User[i].Password)
if err != nil {
return err
}
// Replace the password with the hashed version
b.Customizations.User[i].Password = &pw
}
}
return nil
}
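
// A minimal usage sketch (illustrative, not part of this change) of the defaults
// above: Initialize fills in the version and empty slices, and GetPackages folds
// packages, modules, groups and the kernel into one "name-version" list.
package main

import (
	"fmt"

	"github.com/osbuild/images/pkg/blueprint"
)

func main() {
	bp := blueprint.Blueprint{
		Name:     "example",
		Packages: []blueprint.Package{{Name: "vim-enhanced"}, {Name: "tmux", Version: "3.3a"}},
		Groups:   []blueprint.Group{{Name: "core"}},
	}
	if err := bp.Initialize(); err != nil {
		panic(err)
	}
	fmt.Println(bp.Version)       // 0.0.0
	fmt.Println(bp.GetPackages()) // [vim-enhanced tmux-3.3a @core kernel]
}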

View file

@ -0,0 +1,351 @@
package blueprint
import (
"fmt"
"reflect"
"strings"
)
type Customizations struct {
Hostname *string `json:"hostname,omitempty" toml:"hostname,omitempty"`
Kernel *KernelCustomization `json:"kernel,omitempty" toml:"kernel,omitempty"`
SSHKey []SSHKeyCustomization `json:"sshkey,omitempty" toml:"sshkey,omitempty"`
User []UserCustomization `json:"user,omitempty" toml:"user,omitempty"`
Group []GroupCustomization `json:"group,omitempty" toml:"group,omitempty"`
Timezone *TimezoneCustomization `json:"timezone,omitempty" toml:"timezone,omitempty"`
Locale *LocaleCustomization `json:"locale,omitempty" toml:"locale,omitempty"`
Firewall *FirewallCustomization `json:"firewall,omitempty" toml:"firewall,omitempty"`
Services *ServicesCustomization `json:"services,omitempty" toml:"services,omitempty"`
Filesystem []FilesystemCustomization `json:"filesystem,omitempty" toml:"filesystem,omitempty"`
InstallationDevice string `json:"installation_device,omitempty" toml:"installation_device,omitempty"`
FDO *FDOCustomization `json:"fdo,omitempty" toml:"fdo,omitempty"`
OpenSCAP *OpenSCAPCustomization `json:"openscap,omitempty" toml:"openscap,omitempty"`
Ignition *IgnitionCustomization `json:"ignition,omitempty" toml:"ignition,omitempty"`
Directories []DirectoryCustomization `json:"directories,omitempty" toml:"directories,omitempty"`
Files []FileCustomization `json:"files,omitempty" toml:"files,omitempty"`
Repositories []RepositoryCustomization `json:"repositories,omitempty" toml:"repositories,omitempty"`
}
type IgnitionCustomization struct {
Embedded *EmbeddedIgnitionCustomization `json:"embedded,omitempty" toml:"embedded,omitempty"`
FirstBoot *FirstBootIgnitionCustomization `json:"firstboot,omitempty" toml:"firstboot,omitempty"`
}
type EmbeddedIgnitionCustomization struct {
Config string `json:"config,omitempty" toml:"config,omitempty"`
}
type FirstBootIgnitionCustomization struct {
ProvisioningURL string `json:"url,omitempty" toml:"url,omitempty"`
}
type FDOCustomization struct {
ManufacturingServerURL string `json:"manufacturing_server_url,omitempty" toml:"manufacturing_server_url,omitempty"`
DiunPubKeyInsecure string `json:"diun_pub_key_insecure,omitempty" toml:"diun_pub_key_insecure,omitempty"`
// This is the output of:
// echo "sha256:$(openssl x509 -fingerprint -sha256 -noout -in diun_cert.pem | cut -d"=" -f2 | sed 's/://g')"
DiunPubKeyHash string `json:"diun_pub_key_hash,omitempty" toml:"diun_pub_key_hash,omitempty"`
DiunPubKeyRootCerts string `json:"diun_pub_key_root_certs,omitempty" toml:"diun_pub_key_root_certs,omitempty"`
}
type KernelCustomization struct {
Name string `json:"name,omitempty" toml:"name,omitempty"`
Append string `json:"append" toml:"append"`
}
type SSHKeyCustomization struct {
User string `json:"user" toml:"user"`
Key string `json:"key" toml:"key"`
}
type UserCustomization struct {
Name string `json:"name" toml:"name"`
Description *string `json:"description,omitempty" toml:"description,omitempty"`
Password *string `json:"password,omitempty" toml:"password,omitempty"`
Key *string `json:"key,omitempty" toml:"key,omitempty"`
Home *string `json:"home,omitempty" toml:"home,omitempty"`
Shell *string `json:"shell,omitempty" toml:"shell,omitempty"`
Groups []string `json:"groups,omitempty" toml:"groups,omitempty"`
UID *int `json:"uid,omitempty" toml:"uid,omitempty"`
GID *int `json:"gid,omitempty" toml:"gid,omitempty"`
}
type GroupCustomization struct {
Name string `json:"name" toml:"name"`
GID *int `json:"gid,omitempty" toml:"gid,omitempty"`
}
type TimezoneCustomization struct {
Timezone *string `json:"timezone,omitempty" toml:"timezone,omitempty"`
NTPServers []string `json:"ntpservers,omitempty" toml:"ntpservers,omitempty"`
}
type LocaleCustomization struct {
Languages []string `json:"languages,omitempty" toml:"languages,omitempty"`
Keyboard *string `json:"keyboard,omitempty" toml:"keyboard,omitempty"`
}
type FirewallCustomization struct {
Ports []string `json:"ports,omitempty" toml:"ports,omitempty"`
Services *FirewallServicesCustomization `json:"services,omitempty" toml:"services,omitempty"`
Zones []FirewallZoneCustomization `json:"zones,omitempty" toml:"zones,omitempty"`
}
type FirewallZoneCustomization struct {
Name *string `json:"name,omitempty" toml:"name,omitempty"`
Sources []string `json:"sources,omitempty" toml:"sources,omitempty"`
}
type FirewallServicesCustomization struct {
Enabled []string `json:"enabled,omitempty" toml:"enabled,omitempty"`
Disabled []string `json:"disabled,omitempty" toml:"disabled,omitempty"`
}
type ServicesCustomization struct {
Enabled []string `json:"enabled,omitempty" toml:"enabled,omitempty"`
Disabled []string `json:"disabled,omitempty" toml:"disabled,omitempty"`
}
type OpenSCAPCustomization struct {
DataStream string `json:"datastream,omitempty" toml:"datastream,omitempty"`
ProfileID string `json:"profile_id,omitempty" toml:"profile_id,omitempty"`
}
type CustomizationError struct {
Message string
}
func (e *CustomizationError) Error() string {
return e.Message
}
// CheckAllowed returns an error of type `CustomizationError`
// if `c` has any customizations not specified in `allowed`
func (c *Customizations) CheckAllowed(allowed ...string) error {
if c == nil {
return nil
}
allowMap := make(map[string]bool)
for _, a := range allowed {
allowMap[a] = true
}
t := reflect.TypeOf(*c)
v := reflect.ValueOf(*c)
for i := 0; i < t.NumField(); i++ {
empty := false
field := v.Field(i)
switch field.Kind() {
case reflect.String:
if field.String() == "" {
empty = true
}
case reflect.Array, reflect.Slice:
if field.Len() == 0 {
empty = true
}
case reflect.Ptr:
if field.IsNil() {
empty = true
}
default:
panic(fmt.Sprintf("unhandled customization field type %s, %s", v.Kind(), t.Field(i).Name))
}
if !empty && !allowMap[t.Field(i).Name] {
return &CustomizationError{fmt.Sprintf("'%s' is not allowed", t.Field(i).Name)}
}
}
return nil
}
func (c *Customizations) GetHostname() *string {
if c == nil {
return nil
}
return c.Hostname
}
func (c *Customizations) GetPrimaryLocale() (*string, *string) {
if c == nil {
return nil, nil
}
if c.Locale == nil {
return nil, nil
}
if len(c.Locale.Languages) == 0 {
return nil, c.Locale.Keyboard
}
return &c.Locale.Languages[0], c.Locale.Keyboard
}
func (c *Customizations) GetTimezoneSettings() (*string, []string) {
if c == nil {
return nil, nil
}
if c.Timezone == nil {
return nil, nil
}
return c.Timezone.Timezone, c.Timezone.NTPServers
}
func (c *Customizations) GetUsers() []UserCustomization {
if c == nil {
return nil
}
users := []UserCustomization{}
// prepend sshkey for backwards compat (overridden by users)
if len(c.SSHKey) > 0 {
for _, c := range c.SSHKey {
users = append(users, UserCustomization{
Name: c.User,
Key: &c.Key,
})
}
}
users = append(users, c.User...)
// sanitize user home directory in blueprint: if it has a trailing slash,
// it might lead to the directory not getting the correct selinux labels
for idx := range users {
u := users[idx]
if u.Home != nil {
homedir := strings.TrimRight(*u.Home, "/")
u.Home = &homedir
users[idx] = u
}
}
return users
}
func (c *Customizations) GetGroups() []GroupCustomization {
if c == nil {
return nil
}
return c.Group
}
func (c *Customizations) GetKernel() *KernelCustomization {
var name string
var append string
if c != nil && c.Kernel != nil {
name = c.Kernel.Name
append = c.Kernel.Append
}
if name == "" {
name = "kernel"
}
return &KernelCustomization{
Name: name,
Append: append,
}
}
func (c *Customizations) GetFirewall() *FirewallCustomization {
if c == nil {
return nil
}
return c.Firewall
}
func (c *Customizations) GetServices() *ServicesCustomization {
if c == nil {
return nil
}
return c.Services
}
func (c *Customizations) GetFilesystems() []FilesystemCustomization {
if c == nil {
return nil
}
return c.Filesystem
}
func (c *Customizations) GetFilesystemsMinSize() uint64 {
if c == nil {
return 0
}
var agg uint64
for _, m := range c.Filesystem {
agg += m.MinSize
}
// This ensures that file system customization `size` is a multiple of
// sector size (512)
if agg%512 != 0 {
agg = (agg/512 + 1) * 512
}
return agg
}
func (c *Customizations) GetInstallationDevice() string {
if c == nil || c.InstallationDevice == "" {
return ""
}
return c.InstallationDevice
}
func (c *Customizations) GetFDO() *FDOCustomization {
if c == nil {
return nil
}
return c.FDO
}
func (c *Customizations) GetOpenSCAP() *OpenSCAPCustomization {
if c == nil {
return nil
}
return c.OpenSCAP
}
func (c *Customizations) GetIgnition() *IgnitionCustomization {
if c == nil {
return nil
}
return c.Ignition
}
func (c *Customizations) GetDirectories() []DirectoryCustomization {
if c == nil {
return nil
}
return c.Directories
}
func (c *Customizations) GetFiles() []FileCustomization {
if c == nil {
return nil
}
return c.Files
}
func (c *Customizations) GetRepositories() ([]RepositoryCustomization, error) {
if c == nil {
return nil, nil
}
for idx := range c.Repositories {
err := validateCustomRepository(&c.Repositories[idx])
if err != nil {
return nil, err
}
}
return c.Repositories, nil
}
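
// A small sketch (illustrative, not part of this change) of CheckAllowed above:
// the allowlist is matched against the Customizations struct field names, so a
// blueprint that sets Kernel but only allows User and Group is rejected.
package main

import (
	"fmt"

	"github.com/osbuild/images/pkg/blueprint"
)

func main() {
	c := &blueprint.Customizations{
		Kernel: &blueprint.KernelCustomization{Append: "console=ttyS0"},
	}

	fmt.Println(c.CheckAllowed("Kernel", "User", "Group")) // <nil>
	fmt.Println(c.CheckAllowed("User", "Group"))           // 'Kernel' is not allowed
}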

View file

@ -0,0 +1,89 @@
package blueprint
import (
"encoding/json"
"fmt"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/pathpolicy"
)
type FilesystemCustomization struct {
Mountpoint string `json:"mountpoint,omitempty" toml:"mountpoint,omitempty"`
MinSize uint64 `json:"minsize,omitempty" toml:"size,omitempty"`
}
func (fsc *FilesystemCustomization) UnmarshalTOML(data interface{}) error {
d, _ := data.(map[string]interface{})
switch d["mountpoint"].(type) {
case string:
fsc.Mountpoint = d["mountpoint"].(string)
default:
return fmt.Errorf("TOML unmarshal: mountpoint must be string, got %v of type %T", d["mountpoint"], d["mountpoint"])
}
switch d["size"].(type) {
case int64:
fsc.MinSize = uint64(d["size"].(int64))
case string:
size, err := common.DataSizeToUint64(d["size"].(string))
if err != nil {
return fmt.Errorf("TOML unmarshal: size is not valid filesystem size (%w)", err)
}
fsc.MinSize = size
default:
return fmt.Errorf("TOML unmarshal: size must be integer or string, got %v of type %T", d["size"], d["size"])
}
return nil
}
func (fsc *FilesystemCustomization) UnmarshalJSON(data []byte) error {
var v interface{}
if err := json.Unmarshal(data, &v); err != nil {
return err
}
d, _ := v.(map[string]interface{})
switch d["mountpoint"].(type) {
case string:
fsc.Mountpoint = d["mountpoint"].(string)
default:
return fmt.Errorf("JSON unmarshal: mountpoint must be string, got %v of type %T", d["mountpoint"], d["mountpoint"])
}
// The JSON specification only mentions float64 and Go defaults to it: https://go.dev/blog/json
switch d["minsize"].(type) {
case float64:
// Note that it uses different key than the TOML version
fsc.MinSize = uint64(d["minsize"].(float64))
case string:
size, err := common.DataSizeToUint64(d["minsize"].(string))
if err != nil {
return fmt.Errorf("JSON unmarshal: size is not valid filesystem size (%w)", err)
}
fsc.MinSize = size
default:
return fmt.Errorf("JSON unmarshal: minsize must be float64 number or string, got %v of type %T", d["minsize"], d["minsize"])
}
return nil
}
// CheckMountpointsPolicy checks if the mountpoints are allowed by the policy
func CheckMountpointsPolicy(mountpoints []FilesystemCustomization, mountpointAllowList *pathpolicy.PathPolicies) error {
invalidMountpoints := []string{}
for _, m := range mountpoints {
err := mountpointAllowList.Check(m.Mountpoint)
if err != nil {
invalidMountpoints = append(invalidMountpoints, m.Mountpoint)
}
}
if len(invalidMountpoints) > 0 {
return fmt.Errorf("The following custom mountpoints are not supported %+q", invalidMountpoints)
}
return nil
}
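
// A sketch (illustrative, not part of this change), written as if it were a
// separate file in the same package, of the two entry points above: the JSON
// "minsize" field accepts a number of bytes (or a data-size string parsed by
// common.DataSizeToUint64), and CheckMountpointsPolicy validates mountpoints
// against a policy such as pathpolicy.MountpointPolicies.
package blueprint

import (
	"encoding/json"
	"fmt"

	"github.com/osbuild/images/internal/pathpolicy"
)

func sketchFilesystemCustomization() {
	var fsc FilesystemCustomization
	if err := json.Unmarshal([]byte(`{"mountpoint": "/var/log", "minsize": 2147483648}`), &fsc); err != nil {
		panic(err)
	}
	fmt.Println(fsc.Mountpoint, fsc.MinSize) // /var/log 2147483648

	// /var is allowed recursively by the default mountpoint policies
	err := CheckMountpointsPolicy([]FilesystemCustomization{fsc}, pathpolicy.MountpointPolicies)
	fmt.Println(err) // <nil>
}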

View file

@ -0,0 +1,471 @@
package blueprint
import (
"encoding/json"
"fmt"
"os"
"path"
"regexp"
"sort"
"strconv"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/fsnode"
"github.com/osbuild/images/internal/pathpolicy"
)
// validateModeString checks that the given string is a valid mode octal number
func validateModeString(mode string) error {
// Check that the mode string matches the octal format regular expression.
// The leading zero is optional.
if regexp.MustCompile(`^[0]{0,1}[0-7]{3}$`).MatchString(mode) {
return nil
}
return fmt.Errorf("invalid mode %s: must be an octal number", mode)
}
// DirectoryCustomization represents a directory to be created in the image
type DirectoryCustomization struct {
// Absolute path to the directory
Path string `json:"path" toml:"path"`
// Owner of the directory specified as a string (user name), int64 (UID) or nil
User interface{} `json:"user,omitempty" toml:"user,omitempty"`
// Owning group of the directory specified as a string (group name), int64 (GID) or nil
Group interface{} `json:"group,omitempty" toml:"group,omitempty"`
// Permissions of the directory specified as an octal number
Mode string `json:"mode,omitempty" toml:"mode,omitempty"`
// EnsureParents ensures that all parent directories of the directory exist
EnsureParents bool `json:"ensure_parents,omitempty" toml:"ensure_parents,omitempty"`
}
// Custom TOML unmarshalling for DirectoryCustomization with validation
func (d *DirectoryCustomization) UnmarshalTOML(data interface{}) error {
var dir DirectoryCustomization
dataMap, _ := data.(map[string]interface{})
switch path := dataMap["path"].(type) {
case string:
dir.Path = path
default:
return fmt.Errorf("UnmarshalTOML: path must be a string")
}
switch user := dataMap["user"].(type) {
case string:
dir.User = user
case int64:
dir.User = user
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: user must be a string or an integer, got %T", user)
}
switch group := dataMap["group"].(type) {
case string:
dir.Group = group
case int64:
dir.Group = group
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: group must be a string or an integer")
}
switch mode := dataMap["mode"].(type) {
case string:
dir.Mode = mode
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: mode must be a string")
}
switch ensureParents := dataMap["ensure_parents"].(type) {
case bool:
dir.EnsureParents = ensureParents
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: ensure_parents must be a bool")
}
// try converting to fsnode.Directory to validate all values
_, err := dir.ToFsNodeDirectory()
if err != nil {
return err
}
*d = dir
return nil
}
// Custom JSON unmarshalling for DirectoryCustomization with validation
func (d *DirectoryCustomization) UnmarshalJSON(data []byte) error {
type directoryCustomization DirectoryCustomization
var dirPrivate directoryCustomization
if err := json.Unmarshal(data, &dirPrivate); err != nil {
return err
}
dir := DirectoryCustomization(dirPrivate)
if uid, ok := dir.User.(float64); ok {
// check if uid can be converted to int64
if uid != float64(int64(uid)) {
return fmt.Errorf("invalid user %f: must be an integer", uid)
}
dir.User = int64(uid)
}
if gid, ok := dir.Group.(float64); ok {
// check if gid can be converted to int64
if gid != float64(int64(gid)) {
return fmt.Errorf("invalid group %f: must be an integer", gid)
}
dir.Group = int64(gid)
}
// try converting to fsnode.Directory to validate all values
_, err := dir.ToFsNodeDirectory()
if err != nil {
return err
}
*d = dir
return nil
}
// ToFsNodeDirectory converts the DirectoryCustomization to an fsnode.Directory
func (d DirectoryCustomization) ToFsNodeDirectory() (*fsnode.Directory, error) {
var mode *os.FileMode
if d.Mode != "" {
err := validateModeString(d.Mode)
if err != nil {
return nil, err
}
modeNum, err := strconv.ParseUint(d.Mode, 8, 32)
if err != nil {
return nil, fmt.Errorf("invalid mode %s: %v", d.Mode, err)
}
mode = common.ToPtr(os.FileMode(modeNum))
}
return fsnode.NewDirectory(d.Path, mode, d.User, d.Group, d.EnsureParents)
}
// DirectoryCustomizationsToFsNodeDirectories converts a slice of DirectoryCustomizations
// to a slice of fsnode.Directories
func DirectoryCustomizationsToFsNodeDirectories(dirs []DirectoryCustomization) ([]*fsnode.Directory, error) {
if len(dirs) == 0 {
return nil, nil
}
var fsDirs []*fsnode.Directory
var errors []error
for _, dir := range dirs {
fsDir, err := dir.ToFsNodeDirectory()
if err != nil {
errors = append(errors, err)
}
fsDirs = append(fsDirs, fsDir)
}
if len(errors) > 0 {
return nil, fmt.Errorf("invalid directory customizations: %v", errors)
}
return fsDirs, nil
}
// FileCustomization represents a file to be created in the image
type FileCustomization struct {
// Absolute path to the file
Path string `json:"path" toml:"path"`
// Owner of the file specified as a string (user name), int64 (UID) or nil
User interface{} `json:"user,omitempty" toml:"user,omitempty"`
// Owning group of the file specified as a string (group name), int64 (GID) or nil
Group interface{} `json:"group,omitempty" toml:"group,omitempty"`
// Permissions of the file specified as an octal number
Mode string `json:"mode,omitempty" toml:"mode,omitempty"`
// Data is the file content in plain text
Data string `json:"data,omitempty" toml:"data,omitempty"`
}
// Custom TOML unmarshalling for FileCustomization with validation
func (f *FileCustomization) UnmarshalTOML(data interface{}) error {
var file FileCustomization
dataMap, _ := data.(map[string]interface{})
switch path := dataMap["path"].(type) {
case string:
file.Path = path
default:
return fmt.Errorf("UnmarshalTOML: path must be a string")
}
switch user := dataMap["user"].(type) {
case string:
file.User = user
case int64:
file.User = user
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: user must be a string or an integer")
}
switch group := dataMap["group"].(type) {
case string:
file.Group = group
case int64:
file.Group = group
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: group must be a string or an integer")
}
switch mode := dataMap["mode"].(type) {
case string:
file.Mode = mode
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: mode must be a string")
}
switch data := dataMap["data"].(type) {
case string:
file.Data = data
case nil:
break
default:
return fmt.Errorf("UnmarshalTOML: data must be a string")
}
// try converting to fsnode.File to validate all values
_, err := file.ToFsNodeFile()
if err != nil {
return err
}
*f = file
return nil
}
// Custom JSON unmarshalling for FileCustomization with validation
func (f *FileCustomization) UnmarshalJSON(data []byte) error {
type fileCustomization FileCustomization
var filePrivate fileCustomization
if err := json.Unmarshal(data, &filePrivate); err != nil {
return err
}
file := FileCustomization(filePrivate)
if uid, ok := file.User.(float64); ok {
// check if uid can be converted to int64
if uid != float64(int64(uid)) {
return fmt.Errorf("invalid user %f: must be an integer", uid)
}
file.User = int64(uid)
}
if gid, ok := file.Group.(float64); ok {
// check if gid can be converted to int64
if gid != float64(int64(gid)) {
return fmt.Errorf("invalid group %f: must be an integer", gid)
}
file.Group = int64(gid)
}
// try converting to fsnode.File to validate all values
_, err := file.ToFsNodeFile()
if err != nil {
return err
}
*f = file
return nil
}
// ToFsNodeFile converts the FileCustomization to an fsnode.File
func (f FileCustomization) ToFsNodeFile() (*fsnode.File, error) {
var data []byte
if f.Data != "" {
data = []byte(f.Data)
}
var mode *os.FileMode
if f.Mode != "" {
err := validateModeString(f.Mode)
if err != nil {
return nil, err
}
modeNum, err := strconv.ParseUint(f.Mode, 8, 32)
if err != nil {
return nil, fmt.Errorf("invalid mode %s: %v", f.Mode, err)
}
mode = common.ToPtr(os.FileMode(modeNum))
}
return fsnode.NewFile(f.Path, mode, f.User, f.Group, data)
}
// FileCustomizationsToFsNodeFiles converts a slice of FileCustomization to a slice of *fsnode.File
func FileCustomizationsToFsNodeFiles(files []FileCustomization) ([]*fsnode.File, error) {
if len(files) == 0 {
return nil, nil
}
var fsFiles []*fsnode.File
var errors []error
for _, file := range files {
fsFile, err := file.ToFsNodeFile()
if err != nil {
errors = append(errors, err)
}
fsFiles = append(fsFiles, fsFile)
}
if len(errors) > 0 {
return nil, fmt.Errorf("invalid file customizations: %v", errors)
}
return fsFiles, nil
}
// ValidateDirFileCustomizations validates the given Directory and File customizations.
// If the customizations are invalid, an error is returned. Otherwise, nil is returned.
//
// It currently ensures that:
// - No file path is a prefix of another file or directory path
// - There are no duplicate file or directory paths in the customizations
func ValidateDirFileCustomizations(dirs []DirectoryCustomization, files []FileCustomization) error {
fsNodesMap := make(map[string]interface{}, len(dirs)+len(files))
nodesPaths := make([]string, 0, len(dirs)+len(files))
// First check for duplicate paths
duplicatePaths := make([]string, 0)
for _, dir := range dirs {
if _, ok := fsNodesMap[dir.Path]; ok {
duplicatePaths = append(duplicatePaths, dir.Path)
}
fsNodesMap[dir.Path] = dir
nodesPaths = append(nodesPaths, dir.Path)
}
for _, file := range files {
if _, ok := fsNodesMap[file.Path]; ok {
duplicatePaths = append(duplicatePaths, file.Path)
}
fsNodesMap[file.Path] = file
nodesPaths = append(nodesPaths, file.Path)
}
// There is no point in continuing if there are duplicate paths,
// since the fsNodesMap will not be valid.
if len(duplicatePaths) > 0 {
return fmt.Errorf("duplicate files / directory customization paths: %v", duplicatePaths)
}
invalidFSNodes := make([]string, 0)
checkedPaths := make(map[string]bool)
// Sort the paths so that we always check the longest paths first. This
// ensures that we don't check a parent path before we check the child
// path. Reverse sort the slice based on directory depth.
sort.Slice(nodesPaths, func(i, j int) bool {
return strings.Count(nodesPaths[i], "/") > strings.Count(nodesPaths[j], "/")
})
for _, nodePath := range nodesPaths {
// Skip paths that we have already checked
if checkedPaths[nodePath] {
continue
}
// Check all parent paths of the current path. If any of them have
// already been checked, then we do not need to check them again.
// This is because we always check the longest paths first. If a parent
// path exists in the filesystem nodes map and it is a File,
// then it is an error because it is a parent of a Directory or File.
// Parent paths can be only Directories.
parentPath := nodePath
for {
parentPath = path.Dir(parentPath)
// "." is returned only when the path is relative and we reached
// the root directory. This should never happen because File
// and Directory customization paths are validated as part of
// the unmarshalling process from JSON and TOML.
if parentPath == "." {
panic("filesystem node has relative path set.")
}
if parentPath == "/" {
break
}
if checkedPaths[parentPath] {
break
}
// If the node is not a Directory, then it is an error because
// it is a parent of a Directory or File.
if node, ok := fsNodesMap[parentPath]; ok {
switch node.(type) {
case DirectoryCustomization:
break
case FileCustomization:
invalidFSNodes = append(invalidFSNodes, nodePath)
default:
panic(fmt.Sprintf("unexpected filesystem node customization type: %T", node))
}
}
checkedPaths[parentPath] = true
}
checkedPaths[nodePath] = true
}
if len(invalidFSNodes) > 0 {
return fmt.Errorf("the following filesystem nodes are parents of another node and are not directories: %s", invalidFSNodes)
}
return nil
}
// CheckFileCustomizationsPolicy checks if the given File customizations are allowed by the path policy.
// If any of the customizations are not allowed by the path policy, an error is returned. Otherwise, nil is returned.
func CheckFileCustomizationsPolicy(files []FileCustomization, pathPolicy *pathpolicy.PathPolicies) error {
var invalidPaths []string
for _, file := range files {
if err := pathPolicy.Check(file.Path); err != nil {
invalidPaths = append(invalidPaths, file.Path)
}
}
if len(invalidPaths) > 0 {
return fmt.Errorf("the following custom files are not allowed: %+q", invalidPaths)
}
return nil
}
// CheckDirectoryCustomizationsPolicy checks if the given Directory customizations are allowed by the path policy.
// If any of the customizations are not allowed by the path policy, an error is returned. Otherwise, nil is returned.
func CheckDirectoryCustomizationsPolicy(dirs []DirectoryCustomization, pathPolicy *pathpolicy.PathPolicies) error {
var invalidPaths []string
for _, dir := range dirs {
if err := pathPolicy.Check(dir.Path); err != nil {
invalidPaths = append(invalidPaths, dir.Path)
}
}
if len(invalidPaths) > 0 {
return fmt.Errorf("the following custom directories are not allowed: %+q", invalidPaths)
}
return nil
}
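
// A sketch (illustrative, not part of this change), written as if it were a
// separate file in the same package, tying the helpers above to the default
// policies from the pathpolicy package: /etc/motd is an allowed custom file,
// /etc/shadow is denied, and the combined dir/file set passes validation.
package blueprint

import (
	"fmt"

	"github.com/osbuild/images/internal/pathpolicy"
)

func sketchDirFileCustomizations() {
	files := []FileCustomization{
		{Path: "/etc/motd", Data: "welcome\n", Mode: "0644"},
	}
	dirs := []DirectoryCustomization{
		{Path: "/etc/systemd/system/my.service.d", EnsureParents: true},
	}

	fmt.Println(CheckFileCustomizationsPolicy(files, pathpolicy.CustomFilesPolicies))           // <nil>
	fmt.Println(CheckDirectoryCustomizationsPolicy(dirs, pathpolicy.CustomDirectoriesPolicies)) // <nil>
	fmt.Println(ValidateDirFileCustomizations(dirs, files))                                     // <nil>

	denied := []FileCustomization{{Path: "/etc/shadow", Data: "x"}}
	fmt.Println(CheckFileCustomizationsPolicy(denied, pathpolicy.CustomFilesPolicies)) // not allowed
}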

View file

@ -0,0 +1,137 @@
package blueprint
import (
"fmt"
"net/url"
"regexp"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/fsnode"
"github.com/osbuild/images/pkg/rpmmd"
)
type RepositoryCustomization struct {
Id string `json:"id" toml:"id"`
BaseURLs []string `json:"baseurls,omitempty" toml:"baseurls,omitempty"`
GPGKeys []string `json:"gpgkeys,omitempty" toml:"gpgkeys,omitempty"`
Metalink string `json:"metalink,omitempty" toml:"metalink,omitempty"`
Mirrorlist string `json:"mirrorlist,omitempty" toml:"mirrorlist,omitempty"`
Name string `json:"name,omitempty" toml:"name,omitempty"`
Priority *int `json:"priority,omitempty" toml:"priority,omitempty"`
Enabled *bool `json:"enabled,omitempty" toml:"enabled,omitempty"`
GPGCheck *bool `json:"gpgcheck,omitempty" toml:"gpgcheck,omitempty"`
RepoGPGCheck *bool `json:"repo_gpgcheck,omitempty" toml:"repo_gpgcheck,omitempty"`
SSLVerify *bool `json:"sslverify,omitempty" toml:"sslverify,omitempty"`
Filename string `json:"filename,omitempty" toml:"filename,omitempty"`
}
const repoFilenameRegex = "^[\\w.-]{1,250}\\.repo$"
func validateCustomRepository(repo *RepositoryCustomization) error {
if repo.Id == "" {
return fmt.Errorf("Repository ID is required")
}
filenameRegex := regexp.MustCompile(repoFilenameRegex)
if !filenameRegex.MatchString(repo.getFilename()) {
return fmt.Errorf("Repository filename %q is invalid", repo.getFilename())
}
if len(repo.BaseURLs) == 0 && repo.Mirrorlist == "" && repo.Metalink == "" {
return fmt.Errorf("Repository base URL, mirrorlist or metalink is required")
}
if repo.GPGCheck != nil && *repo.GPGCheck && len(repo.GPGKeys) == 0 {
return fmt.Errorf("Repository gpg check is set to true but no gpg keys are provided")
}
for _, key := range repo.GPGKeys {
// check for a valid GPG key prefix & contains GPG suffix
keyIsGPGKey := strings.HasPrefix(key, "-----BEGIN PGP PUBLIC KEY BLOCK-----") && strings.Contains(key, "-----END PGP PUBLIC KEY BLOCK-----")
// check for a valid URL
keyIsURL := false
_, err := url.ParseRequestURI(key)
if err == nil {
keyIsURL = true
}
if !keyIsGPGKey && !keyIsURL {
return fmt.Errorf("Repository gpg key is not a valid URL or a valid gpg key")
}
}
return nil
}
func (rc *RepositoryCustomization) getFilename() string {
if rc.Filename == "" {
return fmt.Sprintf("%s.repo", rc.Id)
}
if !strings.HasSuffix(rc.Filename, ".repo") {
return fmt.Sprintf("%s.repo", rc.Filename)
}
return rc.Filename
}
func RepoCustomizationsToRepoConfigAndGPGKeyFiles(repos []RepositoryCustomization) (map[string][]rpmmd.RepoConfig, []*fsnode.File, error) {
if len(repos) == 0 {
return nil, nil, nil
}
repoMap := make(map[string][]rpmmd.RepoConfig, len(repos))
var gpgKeyFiles []*fsnode.File
for _, repo := range repos {
filename := repo.getFilename()
convertedRepo := repo.customRepoToRepoConfig()
// convert any inline gpgkeys to fsnode.File and
// replace the gpgkey with the file path
for idx, gpgkey := range repo.GPGKeys {
if _, ok := url.ParseRequestURI(gpgkey); ok != nil {
// create the file path
path := fmt.Sprintf("/etc/pki/rpm-gpg/RPM-GPG-KEY-%s-%d", repo.Id, idx)
// replace the gpgkey with the file path
convertedRepo.GPGKeys[idx] = fmt.Sprintf("file://%s", path)
// create the fsnode for the gpgkey keyFile
keyFile, err := fsnode.NewFile(path, nil, nil, nil, []byte(gpgkey))
if err != nil {
return nil, nil, err
}
gpgKeyFiles = append(gpgKeyFiles, keyFile)
}
}
repoMap[filename] = append(repoMap[filename], convertedRepo)
}
return repoMap, gpgKeyFiles, nil
}
func (repo RepositoryCustomization) customRepoToRepoConfig() rpmmd.RepoConfig {
urls := make([]string, len(repo.BaseURLs))
copy(urls, repo.BaseURLs)
keys := make([]string, len(repo.GPGKeys))
copy(keys, repo.GPGKeys)
repoConfig := rpmmd.RepoConfig{
Id: repo.Id,
BaseURLs: urls,
GPGKeys: keys,
Name: repo.Name,
Metalink: repo.Metalink,
MirrorList: repo.Mirrorlist,
CheckGPG: repo.GPGCheck,
CheckRepoGPG: repo.RepoGPGCheck,
Priority: repo.Priority,
Enabled: repo.Enabled,
}
if repo.SSLVerify != nil {
repoConfig.IgnoreSSL = common.ToPtr(!*repo.SSLVerify)
}
return repoConfig
}
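
// A minimal sketch (illustrative, not part of this change), written as if it were
// a separate file in the same package, of the conversion above: each repo is
// grouped under its .repo filename, and inline GPG keys (unlike the URL key used
// here) would additionally be written out as files under /etc/pki/rpm-gpg.
package blueprint

import (
	"fmt"

	"github.com/osbuild/images/internal/common"
)

func sketchRepositoryCustomization() {
	repos := []RepositoryCustomization{
		{
			Id:       "example-repo", // hypothetical repository
			BaseURLs: []string{"https://repo.example.com/9/x86_64/"},
			GPGKeys:  []string{"https://repo.example.com/RPM-GPG-KEY-example"},
			GPGCheck: common.ToPtr(true),
			Enabled:  common.ToPtr(true),
		},
	}

	repoMap, keyFiles, err := RepoCustomizationsToRepoConfigAndGPGKeyFiles(repos)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(repoMap["example-repo.repo"]), len(keyFiles)) // 1 0
}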

View file

@ -0,0 +1,506 @@
// Package container implements a client for a container
// registry. It can be used to upload container images.
package container
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"strings"
_ "github.com/containers/image/v5/docker/archive"
_ "github.com/containers/image/v5/oci/archive"
_ "github.com/containers/image/v5/oci/layout"
"golang.org/x/sys/unix"
"github.com/osbuild/images/internal/common"
"github.com/containers/common/pkg/retry"
"github.com/containers/image/v5/copy"
"github.com/containers/image/v5/docker"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/signature"
"github.com/containers/image/v5/transports"
"github.com/containers/image/v5/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
const (
DefaultUserAgent = "osbuild-composer/1.0"
DefaultPolicyPath = "/etc/containers/policy.json"
)
// GetDefaultAuthFile returns the authentication file to use for the
// current environment.
//
// This is basically a re-implementation of `getPathToAuthWithOS` from
// containers/image/pkg/docker/config/config.go[1], but we ensure that
// the returned path is accessible or does not exist at all. This is needed since any
// other error than os.ErrNotExist will lead to an overall failure and
// thus prevent any operation even with public resources.
//
// [1] https://github.com/containers/image/blob/55ea76c7db702ed1af60924a0b57c8da533d9e5a/pkg/docker/config/config.go#L506
func GetDefaultAuthFile() string {
checkAccess := func(path string) bool {
err := unix.Access(path, unix.R_OK)
return err == nil
}
if authFile := os.Getenv("REGISTRY_AUTH_FILE"); authFile != "" {
if checkAccess(authFile) {
return authFile
}
}
if runtimeDir := os.Getenv("XDG_RUNTIME_DIR"); runtimeDir != "" {
if checkAccess(runtimeDir) {
return filepath.Join(runtimeDir, "containers", "auth.json")
}
}
if rundir := filepath.FromSlash("/run/containers"); checkAccess(rundir) {
return filepath.Join(rundir, strconv.Itoa(os.Getuid()), "auth.json")
}
return filepath.FromSlash("/var/empty/containers-auth.json")
}
// ApplyDefaultDomainPath checks if the target includes a domain and, if it doesn't, adds the default
// domain and path to the returned string. It also returns a bool indicating whether the defaults were applied.
func ApplyDefaultDomainPath(target, defaultDomain, defaultPath string) (string, bool) {
appliedDefaults := false
i := strings.IndexRune(target, '/')
if i == -1 || (!strings.ContainsAny(target[:i], ".:") && target[:i] != "localhost") {
if defaultDomain != "" {
base := defaultDomain
if defaultPath != "" {
base = fmt.Sprintf("%s/%s", base, defaultPath)
}
target = fmt.Sprintf("%s/%s", base, target)
appliedDefaults = true
}
}
return target, appliedDefaults
}
// A Client to interact with the given Target object at a
// container registry, e.g. for uploading an image to it.
// All mentioned defaults are only set when using the
// NewClient constructor.
type Client struct {
Target reference.Named // the target object to interact with
ReportWriter io.Writer // used for writing status reports, defaults to os.Stdout
PrecomputeDigests bool // precompute digest in order to avoid uploads
MaxRetries int // how often to retry http requests
UserAgent string // user agent string to use for requests, defaults to DefaultUserAgent
// internal state
policy *signature.Policy
sysCtx *types.SystemContext
}
// NewClient constructs a new Client for target with default options.
// It will add the "latest" tag if target does not contain it.
func NewClient(target string) (*Client, error) {
ref, err := reference.ParseNormalizedNamed(target)
if err != nil {
return nil, fmt.Errorf("failed to parse '%s': %w", target, err)
}
var policy *signature.Policy
if _, err := os.Stat(DefaultPolicyPath); err == nil {
policy, err = signature.NewPolicyFromFile(DefaultPolicyPath)
if err != nil {
return nil, err
}
} else {
policy = &signature.Policy{
Default: []signature.PolicyRequirement{
signature.NewPRInsecureAcceptAnything(),
},
}
}
client := Client{
Target: reference.TagNameOnly(ref),
ReportWriter: os.Stdout,
PrecomputeDigests: true,
UserAgent: DefaultUserAgent,
sysCtx: &types.SystemContext{
RegistriesDirPath: "",
SystemRegistriesConfPath: "",
BigFilesTemporaryDir: "/var/tmp",
OSChoice: "linux",
AuthFilePath: GetDefaultAuthFile(),
},
policy: policy,
}
return &client, nil
}
// SetAuthFilePath sets the location of the `containers-auth.json(5)` file.
func (cl *Client) SetAuthFilePath(path string) {
cl.sysCtx.AuthFilePath = path
}
// GetAuthFilePath gets the location of the `containers-auth.json(5)` file.
func (cl *Client) GetAuthFilePath() string {
return cl.sysCtx.AuthFilePath
}
func (cl *Client) SetArchitectureChoice(arch string) {
// Translate some well-known Composer architecture strings
// into the corresponding container ones
variant := ""
switch arch {
case "x86_64":
arch = "amd64"
case "aarch64":
arch = "arm64"
if variant == "" {
variant = "v8"
}
case "armhfp":
arch = "arm"
if variant == "" {
variant = "v7"
}
// ppc64le and s390x are the same
}
cl.sysCtx.ArchitectureChoice = arch
cl.sysCtx.VariantChoice = variant
}
func (cl *Client) SetVariantChoice(variant string) {
cl.sysCtx.VariantChoice = variant
}
// SetCredentials will set username and password for Client
func (cl *Client) SetCredentials(username, password string) {
if cl.sysCtx.DockerAuthConfig == nil {
cl.sysCtx.DockerAuthConfig = &types.DockerAuthConfig{}
}
cl.sysCtx.DockerAuthConfig.Username = username
cl.sysCtx.DockerAuthConfig.Password = password
}
func (cl *Client) SetDockerCertPath(path string) {
cl.sysCtx.DockerCertPath = path
}
// SetTLSVerify controls whether TLS verification happens when
// making requests. If nil is passed it falls back to the default.
func (cl *Client) SetTLSVerify(verify *bool) {
if verify == nil {
cl.sysCtx.DockerInsecureSkipTLSVerify = types.OptionalBoolUndefined
} else {
cl.sysCtx.DockerInsecureSkipTLSVerify = types.NewOptionalBool(!*verify)
}
}
// GetTLSVerify returns the current TLS verification state.
func (cl *Client) GetTLSVerify() *bool {
skip := cl.sysCtx.DockerInsecureSkipTLSVerify
if skip == types.OptionalBoolUndefined {
return nil
}
// NB: we invert the state, i.e. verify == (skip == false)
return common.ToPtr(skip == types.OptionalBoolFalse)
}
// SkipTLSVerify is a convenience helper that internally calls
// SetTLSVerify with false
func (cl *Client) SkipTLSVerify() {
cl.SetTLSVerify(common.ToPtr(false))
}
func parseImageName(name string) (types.ImageReference, error) {
parts := strings.SplitN(name, ":", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("invalid image name '%s'", name)
}
transport := transports.Get(parts[0])
if transport == nil {
return nil, fmt.Errorf("unknown transport '%s'", parts[0])
}
return transport.ParseReference(parts[1])
}
// UploadImage takes a container image located at from and uploads it
// to the Target of Client. If tag is set, i.e. not the empty string,
// it will replace any previously set tag or digest of the target.
// Returns the digest of the manifest that was written to the server.
func (cl *Client) UploadImage(ctx context.Context, from, tag string) (digest.Digest, error) {
targetCtx := *cl.sysCtx
targetCtx.DockerRegistryPushPrecomputeDigests = cl.PrecomputeDigests
policyContext, err := signature.NewPolicyContext(cl.policy)
if err != nil {
return "", err
}
srcRef, err := parseImageName(from)
if err != nil {
return "", fmt.Errorf("invalid source name '%s': %w", from, err)
}
target := cl.Target
if tag != "" {
target = reference.TrimNamed(target)
target, err = reference.WithTag(target, tag)
if err != nil {
return "", fmt.Errorf("error creating reference with tag '%s': %w", tag, err)
}
}
destRef, err := docker.NewReference(target)
if err != nil {
return "", err
}
retryOpts := retry.RetryOptions{
MaxRetry: cl.MaxRetries,
}
var manifestDigest digest.Digest
err = retry.RetryIfNecessary(ctx, func() error {
manifestBytes, err := copy.Image(ctx, policyContext, destRef, srcRef, &copy.Options{
RemoveSignatures: false,
SignBy: "",
SignPassphrase: "",
ReportWriter: cl.ReportWriter,
SourceCtx: cl.sysCtx,
DestinationCtx: &targetCtx,
ForceManifestMIMEType: "",
ImageListSelection: copy.CopyAllImages,
PreserveDigests: false,
})
if err != nil {
return err
}
manifestDigest, err = manifest.Digest(manifestBytes)
return err
}, &retryOpts)
if err != nil {
return "", err
}
return manifestDigest, nil
}
// A RawManifest contains the raw manifest Data and its MimeType
type RawManifest struct {
Data []byte
MimeType string
}
// Digest computes the digest from the raw manifest data
func (m RawManifest) Digest() (digest.Digest, error) {
return manifest.Digest(m.Data)
}
// GetManifest fetches the raw manifest data from the server. If digest is not empty
// it will override any given tag for the Client's Target.
func (cl *Client) GetManifest(ctx context.Context, digest digest.Digest) (r RawManifest, err error) {
target := cl.Target
if digest != "" {
t := reference.TrimNamed(cl.Target)
t, err = reference.WithDigest(t, digest)
if err != nil {
return
}
target = t
}
ref, err := docker.NewReference(target)
if err != nil {
return
}
src, err := ref.NewImageSource(ctx, cl.sysCtx)
if err != nil {
return
}
defer func() {
if e := src.Close(); e != nil {
err = fmt.Errorf("could not close image: %w", e)
}
}()
retryOpts := retry.RetryOptions{
MaxRetry: cl.MaxRetries,
}
if err = retry.RetryIfNecessary(ctx, func() error {
r.Data, r.MimeType, err = src.GetManifest(ctx, nil)
return err
}, &retryOpts); err != nil {
return
}
return
}
type manifestList interface {
ChooseInstance(ctx *types.SystemContext) (digest.Digest, error)
}
type resolvedIds struct {
Manifest digest.Digest
Config digest.Digest
ListManifest digest.Digest
}
func (cl *Client) resolveManifestList(ctx context.Context, list manifestList) (resolvedIds, error) {
digest, err := list.ChooseInstance(cl.sysCtx)
if err != nil {
return resolvedIds{}, err
}
raw, err := cl.GetManifest(ctx, digest)
if err != nil {
return resolvedIds{}, fmt.Errorf("error getting manifest: %w", err)
}
ids, err := cl.resolveRawManifest(ctx, raw)
if err != nil {
return resolvedIds{}, err
}
return ids, err
}
func (cl *Client) resolveRawManifest(ctx context.Context, rm RawManifest) (resolvedIds, error) {
var imageID digest.Digest
switch rm.MimeType {
case manifest.DockerV2ListMediaType:
list, err := manifest.Schema2ListFromManifest(rm.Data)
if err != nil {
return resolvedIds{}, err
}
// Save digest of the manifest list as well.
ids, err := cl.resolveManifestList(ctx, list)
if err != nil {
return resolvedIds{}, err
}
// NOTE: Comment in Digest() source says this should never fail. Ignore the error.
ids.ListManifest, _ = rm.Digest()
return ids, nil
case imgspecv1.MediaTypeImageIndex:
index, err := manifest.OCI1IndexFromManifest(rm.Data)
if err != nil {
return resolvedIds{}, err
}
// Save digest of the manifest list as well.
ids, err := cl.resolveManifestList(ctx, index)
if err != nil {
return resolvedIds{}, err
}
// NOTE: Comment in Digest() source says this should never fail. Ignore the error.
ids.ListManifest, _ = rm.Digest()
return ids, nil
case imgspecv1.MediaTypeImageManifest:
m, err := manifest.OCI1FromManifest(rm.Data)
if err != nil {
return resolvedIds{}, err
}
imageID = m.ConfigInfo().Digest
case manifest.DockerV2Schema2MediaType:
m, err := manifest.Schema2FromManifest(rm.Data)
if err != nil {
return resolvedIds{}, err
}
imageID = m.ConfigInfo().Digest
default:
return resolvedIds{}, fmt.Errorf("unsupported manifest format '%s'", rm.MimeType)
}
dg, err := rm.Digest()
if err != nil {
return resolvedIds{}, err
}
return resolvedIds{
Manifest: dg,
Config: imageID,
}, nil
}
// Resolve the Client's Target to the manifest digest and the corresponding image id
// which is the digest of the configuration object. It uses the architecture and
// variant specified via SetArchitectureChoice or the corresponding defaults for
// the host.
func (cl *Client) Resolve(ctx context.Context, name string) (Spec, error) {
raw, err := cl.GetManifest(ctx, "")
if err != nil {
return Spec{}, fmt.Errorf("error getting manifest: %w", err)
}
ids, err := cl.resolveRawManifest(ctx, raw)
if err != nil {
return Spec{}, err
}
spec := NewSpec(cl.Target, ids.Manifest, ids.Config, cl.GetTLSVerify(), ids.ListManifest.String(), name)
return spec, nil
}
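For orientation, a minimal usage sketch of the Client above; the registry reference is hypothetical, and NewClient is the exported constructor used by the Resolver further below:

package main

import (
    "context"
    "fmt"

    "github.com/osbuild/images/pkg/container"
)

func main() {
    // NewClient parses the target reference (hypothetical registry used here).
    client, err := container.NewClient("registry.example.com/library/fedora:39")
    if err != nil {
        panic(err)
    }
    client.SetArchitectureChoice("x86_64")
    // Resolve fetches the manifest (list) for the target and returns a Spec
    // carrying the manifest digest and the image ID.
    spec, err := client.Resolve(context.Background(), "localhost/fedora")
    if err != nil {
        panic(err)
    }
    fmt.Println(spec.Digest, spec.ImageID)
}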

View file

@ -0,0 +1,83 @@
package container
import (
"context"
"fmt"
"strings"
)
type resolveResult struct {
spec Spec
err error
}
type Resolver struct {
jobs int
queue chan resolveResult
ctx context.Context
Arch string
AuthFilePath string
}
type SourceSpec struct {
Source string
Name string
TLSVerify *bool
}
func NewResolver(arch string) Resolver {
return Resolver{
ctx: context.Background(),
queue: make(chan resolveResult, 2),
Arch: arch,
}
}
func (r *Resolver) Add(spec SourceSpec) {
client, err := NewClient(spec.Source)
r.jobs += 1
if err != nil {
r.queue <- resolveResult{err: err}
return
}
client.SetTLSVerify(spec.TLSVerify)
client.SetArchitectureChoice(r.Arch)
if r.AuthFilePath != "" {
client.SetAuthFilePath(r.AuthFilePath)
}
go func() {
spec, err := client.Resolve(r.ctx, spec.Name)
if err != nil {
err = fmt.Errorf("'%s': %w", spec.Source, err)
}
r.queue <- resolveResult{spec: spec, err: err}
}()
}
func (r *Resolver) Finish() ([]Spec, error) {
specs := make([]Spec, 0, r.jobs)
errs := make([]string, 0, r.jobs)
for r.jobs > 0 {
result := <-r.queue
r.jobs -= 1
if result.err == nil {
specs = append(specs, result.spec)
} else {
errs = append(errs, result.err.Error())
}
}
if len(errs) > 0 {
detail := strings.Join(errs, "; ")
return specs, fmt.Errorf("failed to resolve container: %s", detail)
}
return specs, nil
}
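A minimal sketch of the intended Add/Finish flow, with hypothetical image references:

package main

import (
    "fmt"

    "github.com/osbuild/images/pkg/container"
)

func main() {
    resolver := container.NewResolver("x86_64")
    // Each Add spawns a goroutine that resolves the manifest digest for one source.
    resolver.Add(container.SourceSpec{Source: "registry.example.com/app", Name: "localhost/app"})
    resolver.Add(container.SourceSpec{Source: "registry.example.com/base", Name: ""})
    // Finish blocks until all queued jobs are done and aggregates any errors.
    specs, err := resolver.Finish()
    if err != nil {
        panic(err)
    }
    for _, spec := range specs {
        fmt.Println(spec.LocalName, spec.Digest)
    }
}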

37
vendor/github.com/osbuild/images/pkg/container/spec.go generated vendored Normal file
View file

@ -0,0 +1,37 @@
package container
import (
"github.com/containers/image/v5/docker/reference"
"github.com/opencontainers/go-digest"
)
// A Spec is the specification of how to get a specific
// container from a Source and under what LocalName to
// store it in an image. The container is identified by
// at the Source via Digest and ImageID. The latter one
// should remain the same in the target image as well.
type Spec struct {
Source string // does not include the manifest digest
Digest string // digest of the manifest at the Source
TLSVerify *bool // controls TLS verification
ImageID string // container image identifier
LocalName string // name to use inside the image
ListDigest string // digest of the list manifest at the Source (optional)
}
// NewSpec creates a new Spec from the essential information.
// It is also the transition point from container-specific
// types (digest.Digest) to generic types (string).
func NewSpec(source reference.Named, digest, imageID digest.Digest, tlsVerify *bool, listDigest string, localName string) Spec {
if localName == "" {
localName = source.String()
}
return Spec{
Source: source.Name(),
Digest: digest.String(),
TLSVerify: tlsVerify,
ImageID: imageID.String(),
LocalName: localName,
ListDigest: listDigest,
}
}
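A minimal sketch of building a Spec directly, assuming reference.ParseNormalizedNamed from containers/image and using made-up digest values:

package main

import (
    "fmt"

    "github.com/containers/image/v5/docker/reference"
    "github.com/opencontainers/go-digest"

    "github.com/osbuild/images/pkg/container"
)

func main() {
    named, err := reference.ParseNormalizedNamed("registry.example.com/library/fedora:39")
    if err != nil {
        panic(err)
    }
    // Made-up digests for illustration only.
    manifestDigest := digest.Digest("sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")
    imageID := digest.Digest("sha256:bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb")
    spec := container.NewSpec(named, manifestDigest, imageID, nil, "", "")
    // LocalName falls back to the full source reference when left empty.
    fmt.Println(spec.Source, spec.LocalName)
}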

59
vendor/github.com/osbuild/images/pkg/crypt/crypt.go generated vendored Normal file
View file

@ -0,0 +1,59 @@
package crypt
import (
"crypto/rand"
"math/big"
"strings"
)
// CryptSHA512 encrypts the given password with SHA512 and a random salt.
//
// Note that this function is not deterministic.
func CryptSHA512(phrase string) (string, error) {
const SHA512SaltLength = 16
salt, err := genSalt(SHA512SaltLength)
if err != nil {
return "", nil
}
hashSettings := "$6$" + salt
return crypt(phrase, hashSettings)
}
func genSalt(length int) (string, error) {
saltChars := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./"
b := make([]byte, length)
for i := range b {
runeIndex, err := rand.Int(rand.Reader, big.NewInt(int64(len(saltChars))))
if err != nil {
return "", err
}
b[i] = saltChars[runeIndex.Int64()]
}
return string(b), nil
}
// PasswordIsCrypted returns true if the password appears to be an encrypted
// one, according to a very simple heuristic.
//
// Any string starting with one of the prefixes $2b$, $6$ or $5$ is considered
// to be encrypted. Any other string is considered to be unencrypted.
//
// This functionality is taken from pylorax.
func PasswordIsCrypted(s string) bool {
// taken from lorax src: src/pylorax/api/compose.py:533
prefixes := [...]string{"$2b$", "$6$", "$5$"}
for _, prefix := range prefixes {
if strings.HasPrefix(s, prefix) {
return true
}
}
return false
}
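A minimal sketch of how these helpers are typically combined when handling blueprint passwords (Linux only, since crypt() is backed by libcrypt via cgo); ensureCrypted is a hypothetical helper, not part of the package:

package main

import (
    "fmt"

    "github.com/osbuild/images/pkg/crypt"
)

// ensureCrypted leaves already-hashed passwords untouched and hashes
// plain-text ones with SHA512 and a random salt.
func ensureCrypted(password string) (string, error) {
    if crypt.PasswordIsCrypted(password) {
        return password, nil
    }
    return crypt.CryptSHA512(password)
}

func main() {
    hashed, err := ensureCrypted("correct horse battery staple")
    if err != nil {
        panic(err)
    }
    fmt.Println(hashed) // prints something like $6$<salt>$<hash>
}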

View file

@ -0,0 +1,87 @@
//go:build !darwin
// Copied from https://github.com/amoghe/go-crypt/blob/b3e291286513a0c993f7c4dd7060d327d2d56143/crypt_r.go
// Original sources are under MIT license:
// The MIT License (MIT)
//
// # Copyright (c) 2015 Akshay Moghe
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
//
// # Package crypt provides wrappers around functions available in crypt.h
//
// It wraps around the GNU specific extension (crypt_r) when it is available
// (i.e. where GOOS=linux). This makes the go function reentrant (and thus
// callable from concurrent goroutines).
package crypt
import (
"unsafe"
)
/*
#cgo LDFLAGS: -lcrypt
// this is needed for Ubuntu
#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>
#include <crypt.h>
char *gnu_ext_crypt(char *pass, char *salt) {
char *enc = NULL;
char *ret = NULL;
struct crypt_data data;
data.initialized = 0;
enc = crypt_r(pass, salt, &data);
if(enc == NULL) {
return NULL;
}
ret = (char *)malloc((strlen(enc)+1) * sizeof(char)); // for trailing null
strcpy(ret, enc);
ret[strlen(enc)]= '\0';
return ret;
}
*/
import "C"
// Crypt provides a wrapper around the glibc crypt_r() function.
// For the meaning of the arguments, refer to the package README.
func crypt(pass, salt string) (string, error) {
c_pass := C.CString(pass)
defer C.free(unsafe.Pointer(c_pass))
c_salt := C.CString(salt)
defer C.free(unsafe.Pointer(c_salt))
c_enc, err := C.gnu_ext_crypt(c_pass, c_salt)
if c_enc == nil {
return "", err
}
defer C.free(unsafe.Pointer(c_enc))
// Return nil error if the string is non-nil.
// As per the errno.h manpage, functions are allowed to set errno
// on success. Caller should ignore errno on success.
return C.GoString(c_enc), nil
}

View file

@ -0,0 +1,7 @@
//go:build darwin
package crypt
func crypt(pass, salt string) (string, error) {
panic("You must not run osbuild-composer on macOS!")
}

158
vendor/github.com/osbuild/images/pkg/disk/btrfs.go generated vendored Normal file
View file

@ -0,0 +1,158 @@
package disk
import (
"fmt"
"math/rand"
"strings"
"github.com/google/uuid"
)
type Btrfs struct {
UUID string
Label string
Mountpoint string
Subvolumes []BtrfsSubvolume
}
func (b *Btrfs) IsContainer() bool {
return true
}
func (b *Btrfs) Clone() Entity {
if b == nil {
return nil
}
clone := &Btrfs{
UUID: b.UUID,
Label: b.Label,
Mountpoint: b.Mountpoint,
Subvolumes: make([]BtrfsSubvolume, len(b.Subvolumes)),
}
for idx, subvol := range b.Subvolumes {
entClone := subvol.Clone()
svClone, cloneOk := entClone.(*BtrfsSubvolume)
if !cloneOk {
panic("BtrfsSubvolume.Clone() returned an Entity that cannot be converted to *BtrfsSubvolume; this is a programming error")
}
clone.Subvolumes[idx] = *svClone
}
return clone
}
func (b *Btrfs) GetItemCount() uint {
return uint(len(b.Subvolumes))
}
func (b *Btrfs) GetChild(n uint) Entity {
return &b.Subvolumes[n]
}
func (b *Btrfs) CreateMountpoint(mountpoint string, size uint64) (Entity, error) {
name := mountpoint
if name == "/" {
name = "root"
}
subvolume := BtrfsSubvolume{
Size: size,
Mountpoint: mountpoint,
GroupID: 0,
UUID: b.UUID, // subvolumes inherit UUID of main volume
Name: name,
}
b.Subvolumes = append(b.Subvolumes, subvolume)
return &b.Subvolumes[len(b.Subvolumes)-1], nil
}
func (b *Btrfs) AlignUp(size uint64) uint64 {
return size // No extra alignment necessary for subvolumes
}
func (b *Btrfs) GenUUID(rng *rand.Rand) {
if b.UUID == "" {
b.UUID = uuid.Must(newRandomUUIDFromReader(rng)).String()
}
}
type BtrfsSubvolume struct {
Name string
Size uint64
Mountpoint string
GroupID uint64
MntOps string
// UUID of the parent volume
UUID string
}
func (subvol *BtrfsSubvolume) IsContainer() bool {
return false
}
func (bs *BtrfsSubvolume) Clone() Entity {
if bs == nil {
return nil
}
return &BtrfsSubvolume{
Name: bs.Name,
Size: bs.Size,
Mountpoint: bs.Mountpoint,
GroupID: bs.GroupID,
MntOps: bs.MntOps,
UUID: bs.UUID,
}
}
func (bs *BtrfsSubvolume) GetSize() uint64 {
if bs == nil {
return 0
}
return bs.Size
}
func (bs *BtrfsSubvolume) EnsureSize(s uint64) bool {
if s > bs.Size {
bs.Size = s
return true
}
return false
}
func (bs *BtrfsSubvolume) GetMountpoint() string {
if bs == nil {
return ""
}
return bs.Mountpoint
}
func (bs *BtrfsSubvolume) GetFSType() string {
return "btrfs"
}
func (bs *BtrfsSubvolume) GetFSSpec() FSSpec {
if bs == nil {
return FSSpec{}
}
return FSSpec{
UUID: bs.UUID,
Label: bs.Name,
}
}
func (bs *BtrfsSubvolume) GetFSTabOptions() FSTabOptions {
if bs == nil {
return FSTabOptions{}
}
ops := strings.Join([]string{bs.MntOps, fmt.Sprintf("subvol=%s", bs.Name)}, ",")
return FSTabOptions{
MntOps: ops,
Freq: 0,
PassNo: 0,
}
}
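A minimal sketch of how a Btrfs volume accumulates subvolumes via CreateMountpoint and what the generated fstab options look like:

package main

import (
    "fmt"

    "github.com/osbuild/images/pkg/disk"
)

func main() {
    vol := &disk.Btrfs{Label: "root"}
    // "/" becomes the subvolume named "root"; other mountpoints keep their path as the name.
    if _, err := vol.CreateMountpoint("/", 5*1024*1024*1024); err != nil {
        panic(err)
    }
    ent, err := vol.CreateMountpoint("/home", 1024*1024*1024)
    if err != nil {
        panic(err)
    }
    home := ent.(*disk.BtrfsSubvolume)
    fmt.Println(vol.Subvolumes[0].Name, home.Name) // root /home
    fmt.Println(home.GetFSTabOptions().MntOps)     // ,subvol=/home (MntOps is empty in this sketch)
}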

170
vendor/github.com/osbuild/images/pkg/disk/disk.go generated vendored Normal file
View file

@ -0,0 +1,170 @@
// Package disk contains abstract data types to define disk-related entities.
//
// The disk package is a collection of interfaces and structs that can be used
// to represent a disk image with its layout. Various concrete types, such as
// PartitionTable, Partition and Filesystem, are defined to model a given
// disk layout. These implement a collection of interfaces that can be used to
// navigate and operate on the various possible combinations of entities in a
// generic way. The entity data model is deliberately generic so that it can
// represent arbitrarily complex layouts, since technologies like logical
// volume management, LUKS2 containers and file systems with sub-volumes all
// allow for such layouts.
// Entity and Container are the two main interfaces that are used to model the
// tree structure of a disk image layout. The other entity interfaces, such as
// Sizeable and Mountable, then describe various properties and capabilities
// of a given entity.
package disk
import (
"encoding/hex"
"io"
"math/rand"
"github.com/google/uuid"
)
const (
// Default sector size in bytes
DefaultSectorSize = 512
DefaultGrainBytes = uint64(1048576) // 1 MiB
// UUIDs
BIOSBootPartitionGUID = "21686148-6449-6E6F-744E-656564454649"
BIOSBootPartitionUUID = "FAC7F1FB-3E8D-4137-A512-961DE09A5549"
FilesystemDataGUID = "0FC63DAF-8483-4772-8E79-3D69D8477DE4"
FilesystemDataUUID = "CB07C243-BC44-4717-853E-28852021225B"
EFISystemPartitionGUID = "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"
EFISystemPartitionUUID = "68B2905B-DF3E-4FB3-80FA-49D1E773AA33"
EFIFilesystemUUID = "7B77-95E7"
LVMPartitionGUID = "E6D6D379-F507-44C2-A23C-238F2A3DF928"
PRePartitionGUID = "9E1A2D38-C612-4316-AA26-8B49521E5A8B"
RootPartitionUUID = "6264D520-3FB9-423F-8AB8-7A0A8E3D3562"
// Extended Boot Loader Partition
XBootLDRPartitionGUID = "BC13C2FF-59E6-4262-A352-B275FD6F7172"
)
// Entity is the base interface for all disk-related entities.
type Entity interface {
// IsContainer indicates if the implementing type can
// contain any other entities.
IsContainer() bool
// Clone returns a deep copy of the entity.
Clone() Entity
}
// Container is the interface for entities that can contain other entities.
// Together with the base Entity interface this allows modeling a generic
// entity tree of theoretically arbitrary depth and width.
type Container interface {
Entity
// GetItemCount returns the number of actual child entities.
GetItemCount() uint
// GetChild returns the child entity at the given index.
GetChild(n uint) Entity
}
// Sizeable is implemented by entities that carry size information.
type Sizeable interface {
// EnsureSize will resize the entity to the given size in case
// it is currently smaller. Returns if the size was changed.
EnsureSize(size uint64) bool
// GetSize returns the size of the entity in bytes.
GetSize() uint64
}
// A Mountable entity is an entity that can be mounted.
type Mountable interface {
// GetMountPoint returns the path of the mount point.
GetMountpoint() string
// GetFSType returns the file system type, e.g. 'xfs'.
GetFSType() string
// GetFSSpec returns the file system spec information.
GetFSSpec() FSSpec
// GetFSTabOptions returns options for mounting the entity.
GetFSTabOptions() FSTabOptions
}
// A MountpointCreator is a container that is able to create new volumes.
//
// CreateMountpoint creates a new mountpoint with the given size and
// returns the entity that represents the new mountpoint.
type MountpointCreator interface {
CreateMountpoint(mountpoint string, size uint64) (Entity, error)
// AlignUp will align the given bytes according to the
// requirements of the container type.
AlignUp(size uint64) uint64
}
// A UniqueEntity is an entity that can be uniquely identified via a UUID.
//
// GenUUID generates a UUID for the entity if it does not yet have one.
type UniqueEntity interface {
Entity
GenUUID(rng *rand.Rand)
}
// VolumeContainer is a specific container that contains volume entities
type VolumeContainer interface {
// MetadataSize returns the size of the container's metadata (in
// bytes), i.e. the storage space that needs to be reserved for
// the container itself, in contrast to the data it contains.
MetadataSize() uint64
}
// FSSpec for a filesystem (UUID and Label); the first field of fstab(5)
type FSSpec struct {
UUID string
Label string
}
type FSTabOptions struct {
// The fourth field of fstab(5); fs_mntops
MntOps string
// The fifth field of fstab(5); fs_freq
Freq uint64
// The sixth field of fstab(5); fs_passno
PassNo uint64
}
// uuid generator helpers
// newRandomUUIDFromReader generates a new random UUID (version 4) using
// the given random number generator.
func newRandomUUIDFromReader(r io.Reader) (uuid.UUID, error) {
var id uuid.UUID
_, err := io.ReadFull(r, id[:])
if err != nil {
return uuid.Nil, err
}
id[6] = (id[6] & 0x0f) | 0x40 // Version 4
id[8] = (id[8] & 0x3f) | 0x80 // Variant is 10
return id, nil
}
// NewVolIDFromRand creates a random 32 bit hex string to use as a
// volume ID for FAT filesystems
func NewVolIDFromRand(r *rand.Rand) string {
volid := make([]byte, 4)
n, _ := r.Read(volid)
if n != 4 {
panic("expected four random bytes")
}
return hex.EncodeToString(volid)
}
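A minimal sketch of walking an entity tree generically using only the Entity, Container and Mountable interfaces defined above; collectMountpoints is a hypothetical helper:

package main

import (
    "fmt"

    "github.com/osbuild/images/pkg/disk"
)

// collectMountpoints walks the entity tree depth-first and records the
// mountpoint of every Mountable it encounters.
func collectMountpoints(ent disk.Entity, acc *[]string) {
    if mnt, ok := ent.(disk.Mountable); ok {
        *acc = append(*acc, mnt.GetMountpoint())
    }
    cont, ok := ent.(disk.Container)
    if !ok {
        return
    }
    for i := uint(0); i < cont.GetItemCount(); i++ {
        collectMountpoints(cont.GetChild(i), acc)
    }
}

func main() {
    pt := &disk.PartitionTable{
        Type: "gpt",
        Partitions: []disk.Partition{
            {Payload: &disk.Filesystem{Type: "vfat", Mountpoint: "/boot/efi"}},
            {Payload: &disk.Filesystem{Type: "xfs", Mountpoint: "/"}},
        },
    }
    var mounts []string
    collectMountpoints(pt, &mounts)
    fmt.Println(mounts) // [/boot/efi /]
}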

View file

@ -0,0 +1,85 @@
package disk
import (
"math/rand"
"github.com/google/uuid"
)
// Filesystem related functions
type Filesystem struct {
Type string
// ID of the filesystem, vfat doesn't use traditional UUIDs, therefore this
// is just a string.
UUID string
Label string
Mountpoint string
// The fourth field of fstab(5); fs_mntops
FSTabOptions string
// The fifth field of fstab(5); fs_freq
FSTabFreq uint64
// The sixth field of fstab(5); fs_passno
FSTabPassNo uint64
}
func (fs *Filesystem) IsContainer() bool {
return false
}
// Clone the filesystem structure
func (fs *Filesystem) Clone() Entity {
if fs == nil {
return nil
}
return &Filesystem{
Type: fs.Type,
UUID: fs.UUID,
Label: fs.Label,
Mountpoint: fs.Mountpoint,
FSTabOptions: fs.FSTabOptions,
FSTabFreq: fs.FSTabFreq,
FSTabPassNo: fs.FSTabPassNo,
}
}
func (fs *Filesystem) GetMountpoint() string {
if fs == nil {
return ""
}
return fs.Mountpoint
}
func (fs *Filesystem) GetFSType() string {
if fs == nil {
return ""
}
return fs.Type
}
func (fs *Filesystem) GetFSSpec() FSSpec {
if fs == nil {
return FSSpec{}
}
return FSSpec{
UUID: fs.UUID,
Label: fs.Label,
}
}
func (fs *Filesystem) GetFSTabOptions() FSTabOptions {
if fs == nil {
return FSTabOptions{}
}
return FSTabOptions{
MntOps: fs.FSTabOptions,
Freq: fs.FSTabFreq,
PassNo: fs.FSTabPassNo,
}
}
func (fs *Filesystem) GenUUID(rng *rand.Rand) {
if fs.UUID == "" {
fs.UUID = uuid.Must(newRandomUUIDFromReader(rng)).String()
}
}
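A minimal sketch showing how FSSpec and FSTabOptions map onto the six fstab(5) fields; fstabLine is a hypothetical formatter, not part of the package:

package main

import (
    "fmt"

    "github.com/osbuild/images/pkg/disk"
)

// fstabLine assembles the six fstab(5) fields from the Mountable accessors.
func fstabLine(m disk.Mountable) string {
    spec := m.GetFSSpec()
    opts := m.GetFSTabOptions()
    return fmt.Sprintf("UUID=%s %s %s %s %d %d",
        spec.UUID, m.GetMountpoint(), m.GetFSType(), opts.MntOps, opts.Freq, opts.PassNo)
}

func main() {
    fs := &disk.Filesystem{
        Type:         "xfs",
        UUID:         "6e4ff95f-f662-45ee-a82a-bdf44a2d0b75",
        Mountpoint:   "/",
        FSTabOptions: "defaults",
    }
    fmt.Println(fstabLine(fs))
    // UUID=6e4ff95f-f662-45ee-a82a-bdf44a2d0b75 / xfs defaults 0 0
}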

100
vendor/github.com/osbuild/images/pkg/disk/luks.go generated vendored Normal file
View file

@ -0,0 +1,100 @@
package disk
import (
"fmt"
"math/rand"
"github.com/google/uuid"
)
type Argon2id struct {
Iterations uint
Memory uint
Parallelism uint
}
type ClevisBind struct {
Pin string
Policy string
RemovePassphrase bool
}
type LUKSContainer struct {
Passphrase string
UUID string
Cipher string
Label string
Subsystem string
SectorSize uint64
// password-based key derivation function
PBKDF Argon2id
Clevis *ClevisBind
Payload Entity
}
func (lc *LUKSContainer) IsContainer() bool {
return true
}
func (lc *LUKSContainer) GetItemCount() uint {
if lc.Payload == nil {
return 0
}
return 1
}
func (lc *LUKSContainer) GetChild(n uint) Entity {
if n != 0 {
panic(fmt.Sprintf("invalid child index for LUKSContainer: %d != 0", n))
}
return lc.Payload
}
func (lc *LUKSContainer) Clone() Entity {
if lc == nil {
return nil
}
clc := &LUKSContainer{
Passphrase: lc.Passphrase,
UUID: lc.UUID,
Cipher: lc.Cipher,
Label: lc.Label,
Subsystem: lc.Subsystem,
SectorSize: lc.SectorSize,
PBKDF: Argon2id{
Iterations: lc.PBKDF.Iterations,
Memory: lc.PBKDF.Memory,
Parallelism: lc.PBKDF.Parallelism,
},
Payload: lc.Payload.Clone(),
}
if lc.Clevis != nil {
clc.Clevis = &ClevisBind{
Pin: lc.Clevis.Pin,
Policy: lc.Clevis.Policy,
RemovePassphrase: lc.Clevis.RemovePassphrase,
}
}
return clc
}
func (lc *LUKSContainer) GenUUID(rng *rand.Rand) {
if lc == nil {
return
}
if lc.UUID == "" {
lc.UUID = uuid.Must(newRandomUUIDFromReader(rng)).String()
}
}
func (lc *LUKSContainer) MetadataSize() uint64 {
if lc == nil {
return 0
}
// 16 MiB is the default size for the LUKS2 header
return 16 * 1024 * 1024
}

205
vendor/github.com/osbuild/images/pkg/disk/lvm.go generated vendored Normal file
View file

@ -0,0 +1,205 @@
package disk
import (
"fmt"
"strings"
"github.com/osbuild/images/internal/common"
)
// Default physical extent size in bytes: logical volumes
// created inside the VG will be aligned to this.
const LVMDefaultExtentSize = 4 * common.MebiByte
type LVMVolumeGroup struct {
Name string
Description string
LogicalVolumes []LVMLogicalVolume
}
func (vg *LVMVolumeGroup) IsContainer() bool {
return true
}
func (vg *LVMVolumeGroup) Clone() Entity {
if vg == nil {
return nil
}
clone := &LVMVolumeGroup{
Name: vg.Name,
Description: vg.Description,
LogicalVolumes: make([]LVMLogicalVolume, len(vg.LogicalVolumes)),
}
for idx, lv := range vg.LogicalVolumes {
ent := lv.Clone()
// lv.Clone() will return nil only if the logical volume is nil
if ent == nil {
panic(fmt.Sprintf("logical volume %d in a LVM volume group is nil; this is a programming error", idx))
}
lv, cloneOk := ent.(*LVMLogicalVolume)
if !cloneOk {
panic("LVMLogicalVolume.Clone() returned an Entity that cannot be converted to *LVMLogicalVolume; this is a programming error")
}
clone.LogicalVolumes[idx] = *lv
}
return clone
}
func (vg *LVMVolumeGroup) GetItemCount() uint {
if vg == nil {
return 0
}
return uint(len(vg.LogicalVolumes))
}
func (vg *LVMVolumeGroup) GetChild(n uint) Entity {
if vg == nil {
panic("LVMVolumeGroup.GetChild: nil entity")
}
return &vg.LogicalVolumes[n]
}
func (vg *LVMVolumeGroup) CreateMountpoint(mountpoint string, size uint64) (Entity, error) {
filesystem := Filesystem{
Type: "xfs",
Mountpoint: mountpoint,
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
}
return vg.CreateLogicalVolume(mountpoint, size, &filesystem)
}
func (vg *LVMVolumeGroup) CreateLogicalVolume(lvName string, size uint64, payload Entity) (Entity, error) {
if vg == nil {
panic("LVMVolumeGroup.CreateLogicalVolume: nil entity")
}
names := make(map[string]bool, len(vg.LogicalVolumes))
for _, lv := range vg.LogicalVolumes {
names[lv.Name] = true
}
base := lvname(lvName)
var exists bool
name := base
// Make sure that we don't collide with an existing volume, e.g. '/home/test'
// and '/home_test' both map to 'home_testlv'. We try 100 times and then give up. This
// is mimicking what blivet does. See blivet/blivet.py#L1060 commit 2eb4bd4
for i := 0; i < 100; i++ {
exists = names[name]
if !exists {
break
}
name = fmt.Sprintf("%s%02d", base, i)
}
if exists {
return nil, fmt.Errorf("could not create logical volume: name collision")
}
lv := LVMLogicalVolume{
Name: name,
Size: vg.AlignUp(size),
Payload: payload,
}
vg.LogicalVolumes = append(vg.LogicalVolumes, lv)
return &vg.LogicalVolumes[len(vg.LogicalVolumes)-1], nil
}
func (vg *LVMVolumeGroup) AlignUp(size uint64) uint64 {
if size%LVMDefaultExtentSize != 0 {
size += LVMDefaultExtentSize - size%LVMDefaultExtentSize
}
return size
}
func (vg *LVMVolumeGroup) MetadataSize() uint64 {
if vg == nil {
return 0
}
// LVM2 allows for a lot of customizations that will affect the size
// of the metadata and its location and thus the start of the physical
// extent. For now we assume the default which results in a start of
// the physical extent 1 MiB
return 1024 * 1024
}
type LVMLogicalVolume struct {
Name string
Size uint64
Payload Entity
}
func (lv *LVMLogicalVolume) IsContainer() bool {
return true
}
func (lv *LVMLogicalVolume) Clone() Entity {
if lv == nil {
return nil
}
return &LVMLogicalVolume{
Name: lv.Name,
Size: lv.Size,
Payload: lv.Payload.Clone(),
}
}
func (lv *LVMLogicalVolume) GetItemCount() uint {
if lv == nil || lv.Payload == nil {
return 0
}
return 1
}
func (lv *LVMLogicalVolume) GetChild(n uint) Entity {
if n != 0 || lv == nil {
panic(fmt.Sprintf("invalid child index for LVMLogicalVolume: %d != 0", n))
}
return lv.Payload
}
func (lv *LVMLogicalVolume) GetSize() uint64 {
if lv == nil {
return 0
}
return lv.Size
}
func (lv *LVMLogicalVolume) EnsureSize(s uint64) bool {
if lv == nil {
panic("LVMLogicalVolume.EnsureSize: nil entity")
}
if s > lv.Size {
lv.Size = s
return true
}
return false
}
// lvname returns a name for a logical volume based on the mountpoint.
func lvname(path string) string {
if path == "/" {
return "rootlv"
}
path = strings.TrimLeft(path, "/")
return strings.ReplaceAll(path, "/", "_") + "lv"
}
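A minimal sketch of the volume group API: logical volume names are derived from mountpoints via lvname, and sizes are rounded up to the 4 MiB default extent size:

package main

import (
    "fmt"

    "github.com/osbuild/images/pkg/disk"
)

func main() {
    vg := &disk.LVMVolumeGroup{Name: "rootvg"}
    // CreateMountpoint wraps CreateLogicalVolume with an xfs filesystem payload;
    // "/" maps to the logical volume name "rootlv", "/home" to "homelv".
    if _, err := vg.CreateMountpoint("/", 5*1024*1024*1024); err != nil {
        panic(err)
    }
    if _, err := vg.CreateMountpoint("/home", 1); err != nil {
        panic(err)
    }
    for _, lv := range vg.LogicalVolumes {
        fmt.Println(lv.Name, lv.Size) // sizes are aligned up to 4 MiB extents
    }
}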

87
vendor/github.com/osbuild/images/pkg/disk/partition.go generated vendored Normal file
View file

@ -0,0 +1,87 @@
package disk
import (
"fmt"
)
type Partition struct {
Start uint64 // Start of the partition in bytes
Size uint64 // Size of the partition in bytes
Type string // Partition type, e.g. 0x83 for MBR or a UUID for gpt
Bootable bool // `Legacy BIOS bootable` (GPT) or `active` (DOS) flag
// ID of the partition, dos doesn't use traditional UUIDs, therefore this
// is just a string.
UUID string
// If nil, the partition is raw; It doesn't contain a payload.
Payload Entity
}
func (p *Partition) IsContainer() bool {
return true
}
func (p *Partition) Clone() Entity {
if p == nil {
return nil
}
partition := &Partition{
Start: p.Start,
Size: p.Size,
Type: p.Type,
Bootable: p.Bootable,
UUID: p.UUID,
}
if p.Payload != nil {
partition.Payload = p.Payload.Clone()
}
return partition
}
func (pt *Partition) GetItemCount() uint {
if pt == nil || pt.Payload == nil {
return 0
}
return 1
}
func (p *Partition) GetChild(n uint) Entity {
if n != 0 {
panic(fmt.Sprintf("invalid child index for Partition: %d != 0", n))
}
return p.Payload
}
func (p *Partition) GetSize() uint64 {
return p.Size
}
// Ensure the partition has at least the given size. Will do nothing
// if the partition is already larger. Returns if the size changed.
func (p *Partition) EnsureSize(s uint64) bool {
if s > p.Size {
p.Size = s
return true
}
return false
}
func (p *Partition) IsBIOSBoot() bool {
if p == nil {
return false
}
return p.Type == BIOSBootPartitionGUID
}
func (p *Partition) IsPReP() bool {
if p == nil {
return false
}
return p.Type == "41" || p.Type == PRePartitionGUID
}

View file

@ -0,0 +1,691 @@
package disk
import (
"fmt"
"math/rand"
"path/filepath"
"github.com/google/uuid"
"github.com/osbuild/images/pkg/blueprint"
)
type PartitionTable struct {
Size uint64 // Size of the disk (in bytes).
UUID string // Unique identifier of the partition table (GPT only).
Type string // Partition table type, e.g. dos, gpt.
Partitions []Partition
SectorSize uint64 // Sector size in bytes
ExtraPadding uint64 // Extra space at the end of the partition table (sectors)
}
func NewPartitionTable(basePT *PartitionTable, mountpoints []blueprint.FilesystemCustomization, imageSize uint64, lvmify bool, requiredSizes map[string]uint64, rng *rand.Rand) (*PartitionTable, error) {
newPT := basePT.Clone().(*PartitionTable)
// first pass: enlarge existing mountpoints and collect new ones
newMountpoints, _ := newPT.applyCustomization(mountpoints, false)
// if there is any new mountpoint and lvmify is enabled, ensure we have LVM layout
if lvmify && len(newMountpoints) > 0 {
err := newPT.ensureLVM()
if err != nil {
return nil, err
}
}
// second pass: deal with new mountpoints and newly created ones, after switching to
// the LVM layout, if requested, which might introduce new mount points, i.e. `/boot`
_, err := newPT.applyCustomization(newMountpoints, true)
if err != nil {
return nil, err
}
// If no separate requiredSizes are given then we use our defaults
if requiredSizes == nil {
requiredSizes = map[string]uint64{
"/": 1073741824,
"/usr": 2147483648,
}
}
if len(requiredSizes) != 0 {
newPT.EnsureDirectorySizes(requiredSizes)
}
// Calculate partition table offsets and sizes
newPT.relayout(imageSize)
// Generate new UUIDs for filesystems and partitions
newPT.GenerateUUIDs(rng)
return newPT, nil
}
func (pt *PartitionTable) IsContainer() bool {
return true
}
func (pt *PartitionTable) Clone() Entity {
if pt == nil {
return nil
}
clone := &PartitionTable{
Size: pt.Size,
UUID: pt.UUID,
Type: pt.Type,
Partitions: make([]Partition, len(pt.Partitions)),
SectorSize: pt.SectorSize,
ExtraPadding: pt.ExtraPadding,
}
for idx, partition := range pt.Partitions {
ent := partition.Clone()
// partition.Clone() will return nil only if the partition is nil
if ent == nil {
panic(fmt.Sprintf("partition %d in a Partition Table is nil; this is a programming error", idx))
}
part, cloneOk := ent.(*Partition)
if !cloneOk {
panic("PartitionTable.Clone() returned an Entity that cannot be converted to *PartitionTable; this is a programming error")
}
clone.Partitions[idx] = *part
}
return clone
}
// AlignUp will align the given bytes to next aligned grain if not already
// aligned
func (pt *PartitionTable) AlignUp(size uint64) uint64 {
grain := DefaultGrainBytes
if size%grain == 0 {
// already aligned: return unchanged
return size
}
return ((size + grain) / grain) * grain
}
// Convert the given bytes to the number of sectors.
func (pt *PartitionTable) BytesToSectors(size uint64) uint64 {
sectorSize := pt.SectorSize
if sectorSize == 0 {
sectorSize = DefaultSectorSize
}
return size / sectorSize
}
// Convert the given number of sectors to bytes.
func (pt *PartitionTable) SectorsToBytes(size uint64) uint64 {
sectorSize := pt.SectorSize
if sectorSize == 0 {
sectorSize = DefaultSectorSize
}
return size * sectorSize
}
// Returns if the partition table contains a filesystem with the given
// mount point.
func (pt *PartitionTable) ContainsMountpoint(mountpoint string) bool {
return len(entityPath(pt, mountpoint)) > 0
}
// Generate all needed UUIDs for all the partitions and filesystems
//
// Will not overwrite existing UUIDs and only generate UUIDs for
// partitions if the layout is GPT.
func (pt *PartitionTable) GenerateUUIDs(rng *rand.Rand) {
setuuid := func(ent Entity, path []Entity) error {
if ui, ok := ent.(UniqueEntity); ok {
ui.GenUUID(rng)
}
return nil
}
_ = pt.ForEachEntity(setuuid)
// if this is a MBR partition table, there is no need to generate
// uuids for the partitions themselves
if pt.Type != "gpt" {
return
}
for idx, part := range pt.Partitions {
if part.UUID == "" {
pt.Partitions[idx].UUID = uuid.Must(newRandomUUIDFromReader(rng)).String()
}
}
}
func (pt *PartitionTable) GetItemCount() uint {
return uint(len(pt.Partitions))
}
func (pt *PartitionTable) GetChild(n uint) Entity {
return &pt.Partitions[n]
}
func (pt *PartitionTable) GetSize() uint64 {
return pt.Size
}
func (pt *PartitionTable) EnsureSize(s uint64) bool {
if s > pt.Size {
pt.Size = s
return true
}
return false
}
func (pt *PartitionTable) findDirectoryEntityPath(dir string) []Entity {
if path := entityPath(pt, dir); path != nil {
return path
}
parent := filepath.Dir(dir)
if dir == parent {
// invalid dir or pt has no root
return nil
}
// move up the directory path and check again
return pt.findDirectoryEntityPath(parent)
}
// EnsureDirectorySizes takes a mapping of directory paths to sizes (in bytes)
// and resizes the appropriate partitions such that they are at least the size
// of the sum of their subdirectories plus their own sizes.
// The function will panic if any of the directory paths are invalid.
func (pt *PartitionTable) EnsureDirectorySizes(dirSizeMap map[string]uint64) {
type mntSize struct {
entPath []Entity
newSize uint64
}
// add up the required size for each directory grouped by their mountpoints
mntSizeMap := make(map[string]*mntSize)
for dir, size := range dirSizeMap {
entPath := pt.findDirectoryEntityPath(dir)
if entPath == nil {
panic(fmt.Sprintf("EnsureDirectorySizes: invalid dir path %q", dir))
}
mnt := entPath[0].(Mountable)
mountpoint := mnt.GetMountpoint()
if _, ok := mntSizeMap[mountpoint]; !ok {
mntSizeMap[mountpoint] = &mntSize{entPath, 0}
}
es := mntSizeMap[mountpoint]
es.newSize += size
}
// resize all the entities in the map
for _, es := range mntSizeMap {
resizeEntityBranch(es.entPath, es.newSize)
}
}
func (pt *PartitionTable) CreateMountpoint(mountpoint string, size uint64) (Entity, error) {
filesystem := Filesystem{
Type: "xfs",
Mountpoint: mountpoint,
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
}
partition := Partition{
Size: size,
Payload: &filesystem,
}
n := len(pt.Partitions)
var maxNo int
if pt.Type == "gpt" {
switch mountpoint {
case "/boot":
partition.Type = XBootLDRPartitionGUID
default:
partition.Type = FilesystemDataGUID
}
maxNo = 128
} else {
maxNo = 4
}
if n == maxNo {
return nil, fmt.Errorf("maximum number of partitions reached (%d)", maxNo)
}
pt.Partitions = append(pt.Partitions, partition)
return &pt.Partitions[len(pt.Partitions)-1], nil
}
type EntityCallback func(e Entity, path []Entity) error
func forEachEntity(e Entity, path []Entity, cb EntityCallback) error {
childPath := append(path, e)
err := cb(e, childPath)
if err != nil {
return err
}
c, ok := e.(Container)
if !ok {
return nil
}
for idx := uint(0); idx < c.GetItemCount(); idx++ {
child := c.GetChild(idx)
err = forEachEntity(child, childPath, cb)
if err != nil {
return err
}
}
return nil
}
// ForEachEntity runs the provided callback function on each entity in
// the PartitionTable.
func (pt *PartitionTable) ForEachEntity(cb EntityCallback) error {
return forEachEntity(pt, []Entity{}, cb)
}
func (pt *PartitionTable) HeaderSize() uint64 {
// always reserve one extra sector for the GPT header
// this also ensures we have enough space for the MBR
header := pt.SectorsToBytes(1)
if pt.Type == "dos" {
return header
}
// calculate the space we need for the partition entries
parts := len(pt.Partitions)
// reserve a minimum of 128 partition entries
if parts < 128 {
parts = 128
}
// Assume that each partition entry is 128 bytes
// which might not be the case if the partition
// name exceeds 72 bytes
header += uint64(parts * 128)
return header
}
// Apply filesystem customizations to the partition table. If create is false, it
// will only apply customizations to existing partitions and return the unhandled,
// i.e. new, ones. An error can only occur if create is set. Conversely, it will only
// return a non-empty list of new mountpoints if create is false.
// Does not relayout the table, i.e. a call to relayout might be needed.
func (pt *PartitionTable) applyCustomization(mountpoints []blueprint.FilesystemCustomization, create bool) ([]blueprint.FilesystemCustomization, error) {
newMountpoints := []blueprint.FilesystemCustomization{}
for _, mnt := range mountpoints {
size := clampFSSize(mnt.Mountpoint, mnt.MinSize)
if path := entityPath(pt, mnt.Mountpoint); len(path) != 0 {
size = alignEntityBranch(path, size)
resizeEntityBranch(path, size)
} else {
if !create {
newMountpoints = append(newMountpoints, mnt)
} else if err := pt.createFilesystem(mnt.Mountpoint, size); err != nil {
return nil, err
}
}
}
return newMountpoints, nil
}
// Dynamically calculate and update the start point for each of the existing
// partitions. Adjusts the overall size of the image to either the supplied
// value in `size` or to the sum of all partitions if that is larger.
// Will grow the root partition if there is any empty space.
// Returns the updated start point.
func (pt *PartitionTable) relayout(size uint64) uint64 {
// always reserve one extra sector for the GPT header
header := pt.HeaderSize()
footer := uint64(0)
// The GPT header is also at the end of the partition table
if pt.Type == "gpt" {
footer = header
}
start := pt.AlignUp(header)
size = pt.AlignUp(size)
var rootIdx = -1
for idx := range pt.Partitions {
partition := &pt.Partitions[idx]
if len(entityPath(partition, "/")) != 0 {
rootIdx = idx
continue
}
partition.Start = start
partition.Size = pt.AlignUp(partition.Size)
start += partition.Size
}
if rootIdx < 0 {
panic("no root filesystem found; this is a programming error")
}
root := &pt.Partitions[rootIdx]
root.Start = start
// add the extra padding specified in the partition table
footer += pt.ExtraPadding
// If the sum of all partitions is bigger than the specified size,
// we use that instead. Grow the partition table size if needed.
end := pt.AlignUp(root.Start + footer + root.Size)
if end > size {
size = end
}
if size > pt.Size {
pt.Size = size
}
// If there is space left in the partition table, grow root
root.Size = pt.Size - root.Start
// Finally we shrink the last partition, i.e. the root partition,
// to leave space for the footer, e.g. the secondary GPT header.
root.Size -= footer
return start
}
func (pt *PartitionTable) createFilesystem(mountpoint string, size uint64) error {
rootPath := entityPath(pt, "/")
if rootPath == nil {
panic("no root mountpoint for PartitionTable")
}
var vc MountpointCreator
var entity Entity
var idx int
for idx, entity = range rootPath {
var ok bool
if vc, ok = entity.(MountpointCreator); ok {
break
}
}
if vc == nil {
panic("could not find root volume container")
}
newVol, err := vc.CreateMountpoint(mountpoint, 0)
if err != nil {
return fmt.Errorf("failed creating volume: " + err.Error())
}
vcPath := append([]Entity{newVol}, rootPath[idx:]...)
size = alignEntityBranch(vcPath, size)
resizeEntityBranch(vcPath, size)
return nil
}
// entityPath starts at ent and searches for an Entity with a Mountpoint equal
// to the target. Returns a slice of all the Entities leading to the Mountable
// in reverse order. If no Entity has the target as a Mountpoint, returns nil.
// If a slice is returned, the last element is always the starting Entity ent
// and the first element is always a Mountable with a Mountpoint equal to the
// target.
func entityPath(ent Entity, target string) []Entity {
switch e := ent.(type) {
case Mountable:
if target == e.GetMountpoint() {
return []Entity{ent}
}
case Container:
for idx := uint(0); idx < e.GetItemCount(); idx++ {
child := e.GetChild(idx)
path := entityPath(child, target)
if path != nil {
path = append(path, e)
return path
}
}
}
return nil
}
type MountableCallback func(mnt Mountable, path []Entity) error
func forEachMountable(c Container, path []Entity, cb MountableCallback) error {
for idx := uint(0); idx < c.GetItemCount(); idx++ {
child := c.GetChild(idx)
childPath := append(path, child)
var err error
switch ent := child.(type) {
case Mountable:
err = cb(ent, childPath)
case Container:
err = forEachMountable(ent, childPath, cb)
}
if err != nil {
return err
}
}
return nil
}
// ForEachMountable runs the provided callback function on each Mountable in
// the PartitionTable.
func (pt *PartitionTable) ForEachMountable(cb MountableCallback) error {
return forEachMountable(pt, []Entity{pt}, cb)
}
// FindMountable returns the Mountable entity with the given mountpoint in the
// PartitionTable. Returns nil if no Entity has the target as a Mountpoint.
func (pt *PartitionTable) FindMountable(mountpoint string) Mountable {
path := entityPath(pt, mountpoint)
if len(path) == 0 {
return nil
}
// first path element is guaranteed to be Mountable
return path[0].(Mountable)
}
func clampFSSize(mountpoint string, size uint64) uint64 {
// set a minimum size of 1GB for all mountpoints
// with the exception for '/boot' (= 500 MB)
var minSize uint64 = 1073741824
if mountpoint == "/boot" {
minSize = 524288000
}
if minSize > size {
return minSize
}
return size
}
func alignEntityBranch(path []Entity, size uint64) uint64 {
if len(path) == 0 {
return size
}
element := path[0]
if c, ok := element.(MountpointCreator); ok {
size = c.AlignUp(size)
}
return alignEntityBranch(path[1:], size)
}
// resizeEntityBranch resizes the first entity in the specified path to be at
// least the specified size and then grows every entity up the path to the
// PartitionTable accordingly.
func resizeEntityBranch(path []Entity, size uint64) {
if len(path) == 0 {
return
}
element := path[0]
if c, ok := element.(Container); ok {
containerSize := uint64(0)
for idx := uint(0); idx < c.GetItemCount(); idx++ {
if s, ok := c.GetChild(idx).(Sizeable); ok {
containerSize += s.GetSize()
} else {
break
}
}
if vc, ok := element.(VolumeContainer); ok {
containerSize += vc.MetadataSize()
}
if containerSize > size {
size = containerSize
}
}
if sz, ok := element.(Sizeable); ok {
if !sz.EnsureSize(size) {
return
}
}
resizeEntityBranch(path[1:], size)
}
// GenUUID generates and sets UUIDs for all Partitions in the PartitionTable if
// the layout is GPT.
func (pt *PartitionTable) GenUUID(rng *rand.Rand) {
if pt.UUID == "" {
pt.UUID = uuid.Must(newRandomUUIDFromReader(rng)).String()
}
}
// ensureLVM will ensure that the root partition is on an LVM volume, i.e. if
// it currently is not, it will wrap it in one
func (pt *PartitionTable) ensureLVM() error {
rootPath := entityPath(pt, "/")
if rootPath == nil {
panic("no root mountpoint for PartitionTable")
}
// we need a /boot partition to boot LVM, ensure one exists
bootPath := entityPath(pt, "/boot")
if bootPath == nil {
_, err := pt.CreateMountpoint("/boot", 512*1024*1024)
if err != nil {
return err
}
rootPath = entityPath(pt, "/")
}
parent := rootPath[1] // NB: entityPath has reversed order
if _, ok := parent.(*LVMLogicalVolume); ok {
return nil
} else if part, ok := parent.(*Partition); ok {
filesystem := part.Payload
vg := &LVMVolumeGroup{
Name: "rootvg",
Description: "created via lvm2 and osbuild",
}
_, err := vg.CreateLogicalVolume("root", part.Size, filesystem)
if err != nil {
panic(fmt.Sprintf("Could not create LV: %v", err))
}
part.Payload = vg
// reset it so it will be grown later
part.Size = 0
if pt.Type == "gpt" {
part.Type = LVMPartitionGUID
} else {
part.Type = "8e"
}
} else {
panic("unsupported parent for LVM")
}
return nil
}
func (pt *PartitionTable) GetBuildPackages() []string {
packages := []string{}
hasLVM := false
hasBtrfs := false
hasXFS := false
hasFAT := false
hasEXT4 := false
hasLUKS := false
introspectPT := func(e Entity, path []Entity) error {
switch ent := e.(type) {
case *LVMLogicalVolume:
hasLVM = true
case *Btrfs:
hasBtrfs = true
case *Filesystem:
switch ent.GetFSType() {
case "vfat":
hasFAT = true
case "btrfs":
hasBtrfs = true
case "xfs":
hasXFS = true
case "ext4":
hasEXT4 = true
}
case *LUKSContainer:
hasLUKS = true
}
return nil
}
_ = pt.ForEachEntity(introspectPT)
// TODO: LUKS
if hasLVM {
packages = append(packages, "lvm2")
}
if hasBtrfs {
packages = append(packages, "btrfs-progs")
}
if hasXFS {
packages = append(packages, "xfsprogs")
}
if hasFAT {
packages = append(packages, "dosfstools")
}
if hasEXT4 {
packages = append(packages, "e2fsprogs")
}
if hasLUKS {
packages = append(packages,
"clevis",
"clevis-luks",
"cryptsetup",
)
}
return packages
}
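A minimal sketch of customizing a base partition table with NewPartitionTable; the sizes and the deterministic seed are arbitrary:

package main

import (
    "fmt"
    "math/rand"

    "github.com/osbuild/images/pkg/blueprint"
    "github.com/osbuild/images/pkg/disk"
)

func main() {
    base := &disk.PartitionTable{
        Type: "gpt",
        Partitions: []disk.Partition{
            {Size: 2 * 1024 * 1024 * 1024, Payload: &disk.Filesystem{Type: "xfs", Mountpoint: "/", FSTabOptions: "defaults"}},
        },
    }
    // Blueprint mountpoints are either grown in place or created as new partitions.
    mountpoints := []blueprint.FilesystemCustomization{
        {Mountpoint: "/var", MinSize: 2 * 1024 * 1024 * 1024},
    }
    rng := rand.New(rand.NewSource(0)) // deterministic UUIDs for the sketch
    pt, err := disk.NewPartitionTable(base, mountpoints, 10*1024*1024*1024, false, nil, rng)
    if err != nil {
        panic(err)
    }
    for _, part := range pt.Partitions {
        if mnt, ok := part.Payload.(disk.Mountable); ok {
            fmt.Println(mnt.GetMountpoint(), part.Size)
        }
    }
}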

161
vendor/github.com/osbuild/images/pkg/distro/distro.go generated vendored Normal file
View file

@ -0,0 +1,161 @@
package distro
import (
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/rhsm/facts"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
type BootMode uint64
const (
BOOT_NONE BootMode = iota
BOOT_LEGACY
BOOT_UEFI
BOOT_HYBRID
)
func (m BootMode) String() string {
switch m {
case BOOT_NONE:
return "none"
case BOOT_LEGACY:
return "legacy"
case BOOT_UEFI:
return "uefi"
case BOOT_HYBRID:
return "hybrid"
default:
panic("invalid boot mode")
}
}
// A Distro represents composer's notion of what a given distribution is.
type Distro interface {
// Returns the name of the distro.
Name() string
// Returns the release version of the distro. This is used in repo
// files on the host system and required for the subscription support.
Releasever() string
// Returns the module platform id of the distro. This is used by DNF
// for modularity support.
ModulePlatformID() string
// Returns the ostree reference template
OSTreeRef() string
// Returns a sorted list of the names of the architectures this distro
// supports.
ListArches() []string
// Returns an object representing the given architecture as support
// by this distro.
GetArch(arch string) (Arch, error)
}
// An Arch represents a given distribution's support for a given architecture.
type Arch interface {
// Returns the name of the architecture.
Name() string
// Returns a sorted list of the names of the image types this architecture
// supports.
ListImageTypes() []string
// Returns an object representing a given image format for this architecture,
// on this distro.
GetImageType(imageType string) (ImageType, error)
// Returns the parent distro
Distro() Distro
}
// An ImageType represents a given distribution's support for a given Image Type
// for a given architecture.
type ImageType interface {
// Returns the name of the image type.
Name() string
// Returns the parent architecture
Arch() Arch
// Returns the canonical filename for the image type.
Filename() string
// Returns the MIME type for the image type.
MIMEType() string
// Returns the default OSTree ref for the image type.
OSTreeRef() string
// Returns the proper image size for a given output format. If the input size
// is 0 the default value for the format will be returned.
Size(size uint64) uint64
// Returns the corresponding partition type ("gpt", "dos") or "" if the image type
// has no partition table. Only supported for RHEL 8.5+
PartitionType() string
// Returns the corresponding boot mode ("legacy", "uefi", "hybrid") or "none"
BootMode() BootMode
// Returns the names of the pipelines that set up the build environment (buildroot).
BuildPipelines() []string
// Returns the names of the pipelines that create the image.
PayloadPipelines() []string
// Returns the package set names safe to install custom packages via custom repositories.
PayloadPackageSets() []string
// Returns named arrays of package set names which should be depsolved in a chain.
PackageSetsChains() map[string][]string
// Returns the names of the stages that will produce the build output.
Exports() []string
// Returns an osbuild manifest, containing the sources and pipeline necessary
// to build an image, given output format with all packages and customizations
// specified in the given blueprint; it also returns any warnings (e.g.
// deprecation notices) generated by the manifest.
// The packageSpecSets must be labelled in the same way as the originating PackageSets.
Manifest(bp *blueprint.Blueprint, options ImageOptions, repos []rpmmd.RepoConfig, seed int64) (*manifest.Manifest, []string, error)
}
// The ImageOptions specify options for a specific image build
type ImageOptions struct {
Size uint64
OSTree *ostree.ImageOptions
Subscription *subscription.ImageOptions
Facts *facts.ImageOptions
}
type BasePartitionTableMap map[string]disk.PartitionTable
// Fallbacks: When a new method is added to an interface to provide
// information that isn't available for older implementations, the older
// methods should return a fallback/default value by calling the appropriate
// function from below.
// Example: Exports() simply returns "assembler" for older image type
// implementations that didn't produce v1 manifests that have named pipelines.
func BuildPipelinesFallback() []string {
return []string{"build"}
}
func PayloadPipelinesFallback() []string {
return []string{"os", "assembler"}
}
func ExportsFallback() []string {
return []string{"assembler"}
}
func PayloadPackageSets() []string {
return []string{}
}
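A minimal sketch of how a caller walks these interfaces; describe is a hypothetical helper that takes an already-obtained Distro (typically from the distroregistry package) and builds a manifest for each image type with an empty blueprint. Some image types reject an empty configuration, which simply surfaces as an error here:

package example

import (
    "fmt"

    "github.com/osbuild/images/pkg/blueprint"
    "github.com/osbuild/images/pkg/distro"
)

// describe walks Distro -> Arch -> ImageType and builds a manifest skeleton
// for each image type using an empty blueprint and default image options.
func describe(d distro.Distro) error {
    for _, archName := range d.ListArches() {
        arch, err := d.GetArch(archName)
        if err != nil {
            return err
        }
        for _, typeName := range arch.ListImageTypes() {
            imgType, err := arch.GetImageType(typeName)
            if err != nil {
                return err
            }
            bp := &blueprint.Blueprint{}
            options := distro.ImageOptions{Size: imgType.Size(0)}
            m, warnings, err := imgType.Manifest(bp, options, nil, 0)
            if err != nil {
                return fmt.Errorf("%s/%s/%s: %w", d.Name(), archName, typeName, err)
            }
            fmt.Println(typeName, imgType.Filename(), len(warnings), m != nil)
        }
    }
    return nil
}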

View file

@ -0,0 +1,623 @@
package distro_test_common
import (
"encoding/json"
"fmt"
"os"
"path"
"path/filepath"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/dnfjson"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distroregistry"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/rhsm/facts"
"github.com/osbuild/images/pkg/rpmmd"
)
const RandomTestSeed = 0
func TestDistro_Manifest(t *testing.T, pipelinePath string, prefix string, registry *distroregistry.Registry, depsolvePkgSets bool, dnfCacheDir, dnfJsonPath string) {
assert := assert.New(t)
fileNames, err := filepath.Glob(filepath.Join(pipelinePath, prefix))
assert.NoErrorf(err, "Could not read pipelines directory '%s': %v", pipelinePath, err)
require.Greaterf(t, len(fileNames), 0, "No pipelines found in %s for %s", pipelinePath, prefix)
for _, fileName := range fileNames {
type repository struct {
BaseURL string `json:"baseurl,omitempty"`
Metalink string `json:"metalink,omitempty"`
MirrorList string `json:"mirrorlist,omitempty"`
GPGKey string `json:"gpgkey,omitempty"`
CheckGPG bool `json:"check_gpg,omitempty"`
PackageSets []string `json:"package-sets,omitempty"`
}
type ostreeOptions struct {
Ref string `json:"ref"`
URL string `json:"url"`
Parent string `json:"parent"`
RHSM bool `json:"rhsm"`
}
type composeRequest struct {
Distro string `json:"distro"`
Arch string `json:"arch"`
ImageType string `json:"image-type"`
Repositories []repository `json:"repositories"`
Blueprint *blueprint.Blueprint `json:"blueprint"`
OSTree ostreeOptions `json:"ostree"`
}
var tt struct {
ComposeRequest *composeRequest `json:"compose-request"`
PackageSpecSets map[string][]rpmmd.PackageSpec `json:"rpmmd"`
Manifest manifest.OSBuildManifest `json:"manifest,omitempty"`
Containers map[string][]container.Spec `json:"containers,omitempty"`
OSTreeCommits map[string][]ostree.CommitSpec `json:"ostree-commits,omitempty"`
}
file, err := os.ReadFile(fileName)
assert.NoErrorf(err, "Could not read test-case '%s': %v", fileName, err)
err = json.Unmarshal([]byte(file), &tt)
assert.NoErrorf(err, "Could not parse test-case '%s': %v", fileName, err)
if tt.ComposeRequest == nil || tt.ComposeRequest.Blueprint == nil {
t.Logf("Skipping '%s'.", fileName)
continue
}
repos := make([]rpmmd.RepoConfig, len(tt.ComposeRequest.Repositories))
for i, repo := range tt.ComposeRequest.Repositories {
var urls []string
if repo.BaseURL != "" {
urls = []string{repo.BaseURL}
}
var keys []string
if repo.GPGKey != "" {
keys = []string{repo.GPGKey}
}
repos[i] = rpmmd.RepoConfig{
Name: fmt.Sprintf("repo-%d", i),
BaseURLs: urls,
Metalink: repo.Metalink,
MirrorList: repo.MirrorList,
GPGKeys: keys,
CheckGPG: common.ToPtr(repo.CheckGPG),
PackageSets: repo.PackageSets,
}
}
t.Run(path.Base(fileName), func(t *testing.T) {
require.NoError(t, err)
d := registry.GetDistro(tt.ComposeRequest.Distro)
if d == nil {
t.Errorf("unknown distro: %v", tt.ComposeRequest.Distro)
return
}
arch, err := d.GetArch(tt.ComposeRequest.Arch)
if err != nil {
t.Errorf("unknown arch: %v", tt.ComposeRequest.Arch)
return
}
imageType, err := arch.GetImageType(tt.ComposeRequest.ImageType)
if err != nil {
t.Errorf("unknown image type: %v", tt.ComposeRequest.ImageType)
return
}
ostreeOptions := &ostree.ImageOptions{
ImageRef: tt.ComposeRequest.OSTree.Ref,
ParentRef: tt.ComposeRequest.OSTree.Parent,
URL: tt.ComposeRequest.OSTree.URL,
RHSM: tt.ComposeRequest.OSTree.RHSM,
}
options := distro.ImageOptions{
Size: imageType.Size(0),
OSTree: ostreeOptions,
Facts: &facts.ImageOptions{
APIType: facts.TEST_APITYPE,
},
}
var imgPackageSpecSets map[string][]rpmmd.PackageSpec
// depsolve the image's package set to catch changes in the image's default package set.
// the downside is that this takes a long time
if depsolvePkgSets {
require.NotEmptyf(t, dnfCacheDir, "DNF cache directory path must be provided when chosen to depsolve image package sets")
require.NotEmptyf(t, dnfJsonPath, "path to 'dnf-json' must be provided when chosen to depsolve image package sets")
imgPackageSpecSets = getImageTypePkgSpecSets(
imageType,
*tt.ComposeRequest.Blueprint,
options,
repos,
dnfCacheDir,
dnfJsonPath,
)
} else {
imgPackageSpecSets = tt.PackageSpecSets
}
manifest, _, err := imageType.Manifest(tt.ComposeRequest.Blueprint, options, repos, RandomTestSeed)
if err != nil {
t.Errorf("distro.Manifest() error = %v", err)
return
}
got, err := manifest.Serialize(imgPackageSpecSets, tt.Containers, tt.OSTreeCommits)
if (err == nil && tt.Manifest == nil) || (err != nil && tt.Manifest != nil) {
t.Errorf("distro.Manifest() error = %v", err)
return
}
if tt.Manifest != nil {
var expected, actual interface{}
err = json.Unmarshal(tt.Manifest, &expected)
require.NoError(t, err)
err = json.Unmarshal(got, &actual)
require.NoError(t, err)
diff := cmp.Diff(expected, actual)
require.Emptyf(t, diff, "Distro: %s\nArch: %s\nImage type: %s\nTest case file: %s\n", d.Name(), arch.Name(), imageType.Name(), fileName)
}
})
}
}
func getImageTypePkgSpecSets(imageType distro.ImageType, bp blueprint.Blueprint, options distro.ImageOptions, repos []rpmmd.RepoConfig, cacheDir, dnfJsonPath string) map[string][]rpmmd.PackageSpec {
manifest, _, err := imageType.Manifest(&bp, options, repos, 0)
if err != nil {
panic("Could not generate manifest for package sets: " + err.Error())
}
imgPackageSets := manifest.GetPackageSetChains()
solver := dnfjson.NewSolver(imageType.Arch().Distro().ModulePlatformID(),
imageType.Arch().Distro().Releasever(),
imageType.Arch().Name(),
imageType.Arch().Distro().Name(),
cacheDir)
solver.SetDNFJSONPath(dnfJsonPath)
depsolvedSets := make(map[string][]rpmmd.PackageSpec)
for name, packages := range imgPackageSets {
res, err := solver.Depsolve(packages)
if err != nil {
panic("Could not depsolve: " + err.Error())
}
depsolvedSets[name] = res
}
return depsolvedSets
}
func isOSTree(imgType distro.ImageType) bool {
return imgType.OSTreeRef() != ""
}
var knownKernels = []string{"kernel", "kernel-debug", "kernel-rt"}
// Returns the number of known kernels in the package list
func kernelCount(imgType distro.ImageType, bp blueprint.Blueprint) int {
ostreeOptions := &ostree.ImageOptions{
URL: "https://example.com", // required by some image types
}
manifest, _, err := imgType.Manifest(&bp, distro.ImageOptions{OSTree: ostreeOptions}, nil, 0)
if err != nil {
panic(err)
}
sets := manifest.GetPackageSetChains()
// Use a map to count unique kernels in a package set. If the same kernel
// name appears twice, it will only be installed once, so we only count it
// once.
kernels := make(map[string]bool)
for _, name := range []string{
// payload package set names
"os", "ostree-tree", "anaconda-tree",
"packages", "installer",
} {
for _, pset := range sets[name] {
for _, pkg := range pset.Include {
for _, kernel := range knownKernels {
if kernel == pkg {
kernels[kernel] = true
}
}
}
if len(kernels) > 0 {
// BUG: some RHEL image types contain both 'packages'
// and 'installer' even though only 'installer' is used;
// this counts the kernel package twice. None of these
// sets should appear more than once, so return the count
// for the first package set that has at least one kernel.
return len(kernels)
}
}
}
return len(kernels)
}
func TestDistro_KernelOption(t *testing.T, d distro.Distro) {
skipList := map[string]bool{
// Ostree installers and raw images download a payload to embed or
// deploy. The kernel is part of the payload so it doesn't appear in
// the image type's package lists.
"iot-installer": true,
"edge-installer": true,
"edge-simplified-installer": true,
"iot-raw-image": true,
"edge-raw-image": true,
"edge-ami": true,
// the tar image type is a minimal image type which is not expected to
// be usable without a blueprint (see commit 83a63aaf172f556f6176e6099ffaa2b5357b58f5).
"tar": true,
// containers don't have kernels
"container": true,
// image installer on Fedora doesn't support kernel customizations
// on RHEL we support kernel name
// TODO: Remove when we unify the allowed options
"image-installer": true,
"live-installer": true,
}
{ // empty blueprint: all image types should just have the default kernel
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(t, err)
for _, typeName := range arch.ListImageTypes() {
if skipList[typeName] {
continue
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(t, err)
nk := kernelCount(imgType, blueprint.Blueprint{})
if nk != 1 {
assert.Fail(t, fmt.Sprintf("%s Kernel count", d.Name()),
"Image type %s (arch %s) specifies %d Kernel packages", typeName, archName, nk)
}
}
}
}
{ // kernel in blueprint: the specified kernel replaces the default
for _, kernelName := range []string{"kernel", "kernel-debug"} {
bp := blueprint.Blueprint{
Customizations: &blueprint.Customizations{
Kernel: &blueprint.KernelCustomization{
Name: kernelName,
},
},
}
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(t, err)
for _, typeName := range arch.ListImageTypes() {
if typeName != "image-installer" {
continue
}
if typeName != "live-installer" {
continue
}
if skipList[typeName] {
continue
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(t, err)
nk := kernelCount(imgType, bp)
// ostree image types should have only one kernel
// other image types should have at least 1
if nk < 1 || (nk != 1 && isOSTree(imgType)) {
assert.Fail(t, fmt.Sprintf("%s Kernel count", d.Name()),
"Image type %s (arch %s) specifies %d Kernel packages", typeName, archName, nk)
}
}
}
}
}
}
func TestDistro_OSTreeOptions(t *testing.T, d distro.Distro) {
// test that ostree parameters are properly resolved by image functions that should support them
typesWithParent := map[string]bool{ // image types that support specifying a parent commit
"edge-commit": true,
"edge-container": true,
"iot-commit": true,
"iot-container": true,
}
typesWithPayload := map[string]bool{
"edge-ami": true,
"edge-installer": true,
"edge-raw-image": true,
"edge-simplified-installer": true,
"iot-ami": true,
"iot-installer": true,
"iot-raw-image": true,
"iot-simplified-installer": true,
}
assert := assert.New(t)
{ // empty options: payload ref should equal default
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(err)
for _, typeName := range arch.ListImageTypes() {
bp := &blueprint.Blueprint{}
if strings.HasSuffix(typeName, "simplified-installer") {
// simplified installers require installation device
bp = &blueprint.Blueprint{
Customizations: &blueprint.Customizations{
InstallationDevice: "/dev/sda42",
},
}
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(err)
ostreeOptions := ostree.ImageOptions{}
if typesWithPayload[typeName] {
// payload types require URL
ostreeOptions.URL = "https://example.com/repo"
}
options := distro.ImageOptions{OSTree: &ostreeOptions}
m, _, err := imgType.Manifest(bp, options, nil, 0)
assert.NoError(err)
nrefs := 0
// If a manifest returns an ostree source spec, the ref should
// match the default.
for _, commits := range m.GetOSTreeSourceSpecs() {
for _, commit := range commits {
assert.Equal(options.OSTree.URL, commit.URL, "url does not match expected for image type %q\n", typeName)
assert.Equal(imgType.OSTreeRef(), commit.Ref, "ref does not match expected for image type %q\n", typeName)
nrefs++
}
}
nexpected := 0
if typesWithPayload[typeName] {
// image types with payload should return a ref
nexpected = 1
}
assert.Equal(nexpected, nrefs, "incorrect ref count for image type %q\n", typeName)
}
}
}
{ // ImageRef set: should be returned as payload ref - no parent for commits and containers
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(err)
for _, typeName := range arch.ListImageTypes() {
bp := &blueprint.Blueprint{}
if strings.HasSuffix(typeName, "simplified-installer") {
// simplified installers require installation device
bp = &blueprint.Blueprint{
Customizations: &blueprint.Customizations{
InstallationDevice: "/dev/sda42",
},
}
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(err)
ostreeOptions := ostree.ImageOptions{
ImageRef: "test/x86_64/01",
}
if typesWithPayload[typeName] {
// payload types require URL
ostreeOptions.URL = "https://example.com/repo"
}
options := distro.ImageOptions{OSTree: &ostreeOptions}
m, _, err := imgType.Manifest(bp, options, nil, 0)
assert.NoError(err)
nrefs := 0
// if a manifest returns an ostree source spec, the ref should
// match the image ref option
for _, commits := range m.GetOSTreeSourceSpecs() {
for _, commit := range commits {
assert.Equal(options.OSTree.URL, commit.URL, "url does not match expected for image type %q\n", typeName)
assert.Equal(options.OSTree.ImageRef, commit.Ref, "ref does not match expected for image type %q\n", typeName)
nrefs++
}
}
nexpected := 0
if typesWithPayload[typeName] {
// image types with payload should return a ref
nexpected = 1
}
assert.Equal(nexpected, nrefs, "incorrect ref count for image type %q\n", typeName)
}
}
}
{ // URL always specified: should add a parent to image types that support it and the ref should match the option
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(err)
for _, typeName := range arch.ListImageTypes() {
bp := &blueprint.Blueprint{}
if strings.HasSuffix(typeName, "simplified-installer") {
// simplified installers require installation device
bp = &blueprint.Blueprint{
Customizations: &blueprint.Customizations{
InstallationDevice: "/dev/sda42",
},
}
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(err)
ostreeOptions := ostree.ImageOptions{
ImageRef: "test/x86_64/01",
URL: "https://example.com/repo",
}
options := distro.ImageOptions{OSTree: &ostreeOptions}
m, _, err := imgType.Manifest(bp, options, nil, 0)
assert.NoError(err)
nrefs := 0
for _, commits := range m.GetOSTreeSourceSpecs() {
for _, commit := range commits {
assert.Equal(options.OSTree.URL, commit.URL, "url does not match expected for image type %q\n", typeName)
assert.Equal(options.OSTree.ImageRef, commit.Ref, "ref does not match expected for image type %q\n", typeName)
nrefs++
}
}
nexpected := 0
if typesWithPayload[typeName] || typesWithParent[typeName] {
// image types with payload or parent should return a ref
nexpected = 1
}
assert.Equal(nexpected, nrefs, "incorrect ref count for image type %q\n", typeName)
}
}
}
{ // URL and parent ref always specified: payload ref should be default - parent ref should match option
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(err)
for _, typeName := range arch.ListImageTypes() {
bp := &blueprint.Blueprint{}
if strings.HasSuffix(typeName, "simplified-installer") {
// simplified installers require installation device
bp = &blueprint.Blueprint{
Customizations: &blueprint.Customizations{
InstallationDevice: "/dev/sda42",
},
}
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(err)
ostreeOptions := ostree.ImageOptions{
ParentRef: "test/x86_64/01",
URL: "https://example.com/repo",
}
options := distro.ImageOptions{OSTree: &ostreeOptions}
m, _, err := imgType.Manifest(bp, options, nil, 0)
assert.NoError(err)
nrefs := 0
for _, commits := range m.GetOSTreeSourceSpecs() {
for _, commit := range commits {
assert.Equal(options.OSTree.URL, commit.URL, "url does not match expected for image type %q\n", typeName)
if typesWithPayload[typeName] {
// payload ref should fall back to default
assert.Equal(imgType.OSTreeRef(), commit.Ref, "ref does not match expected for image type %q\n", typeName)
} else if typesWithParent[typeName] {
// parent ref should match option
assert.Equal(options.OSTree.ParentRef, commit.Ref, "ref does not match expected for image type %q\n", typeName)
} else {
// image type requires ostree commit but isn't specified: this shouldn't happen
panic(fmt.Sprintf("image type %q requires ostree commit but is not covered by test", typeName))
}
nrefs++
}
}
nexpected := 0
if typesWithPayload[typeName] || typesWithParent[typeName] {
// image types with payload or parent should return a ref
nexpected = 1
}
assert.Equal(nexpected, nrefs, "incorrect ref count for image type %q\n", typeName)
}
}
}
{ // All options set: all refs should match the corresponding option
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(err)
for _, typeName := range arch.ListImageTypes() {
bp := &blueprint.Blueprint{}
if strings.HasSuffix(typeName, "simplified-installer") {
// simplified installers require installation device
bp = &blueprint.Blueprint{
Customizations: &blueprint.Customizations{
InstallationDevice: "/dev/sda42",
},
}
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(err)
ostreeOptions := ostree.ImageOptions{
ImageRef: "test/x86_64/01",
ParentRef: "test/x86_64/02",
URL: "https://example.com/repo",
}
options := distro.ImageOptions{OSTree: &ostreeOptions}
m, _, err := imgType.Manifest(bp, options, nil, 0)
assert.NoError(err)
nrefs := 0
for _, commits := range m.GetOSTreeSourceSpecs() {
for _, commit := range commits {
assert.Equal(options.OSTree.URL, commit.URL, "url does not match expected for image type %q\n", typeName)
if typesWithPayload[typeName] {
// payload ref should match image ref
assert.Equal(options.OSTree.ImageRef, commit.Ref, "ref does not match expected for image type %q\n", typeName)
} else if typesWithParent[typeName] {
// parent ref should match option
assert.Equal(options.OSTree.ParentRef, commit.Ref, "ref does not match expected for image type %q\n", typeName)
} else {
// image type requires ostree commit but isn't specified: this shouldn't happen
panic(fmt.Sprintf("image type %q requires ostree commit but is not covered by test", typeName))
}
nrefs++
}
}
nexpected := 0
if typesWithPayload[typeName] || typesWithParent[typeName] {
// image types with payload or parent should return a ref
nexpected = 1
}
assert.Equal(nexpected, nrefs, "incorrect ref count for image type %q\n", typeName)
}
}
}
{ // Parent set without URL: always causes error
for _, archName := range d.ListArches() {
arch, err := d.GetArch(archName)
assert.NoError(err)
for _, typeName := range arch.ListImageTypes() {
bp := &blueprint.Blueprint{}
if strings.HasSuffix(typeName, "simplified-installer") {
// simplified installers require installation device
bp = &blueprint.Blueprint{
Customizations: &blueprint.Customizations{
InstallationDevice: "/dev/sda42",
},
}
}
imgType, err := arch.GetImageType(typeName)
assert.NoError(err)
ostreeOptions := ostree.ImageOptions{
ParentRef: "test/x86_64/02",
}
options := distro.ImageOptions{OSTree: &ostreeOptions}
_, _, err = imgType.Manifest(bp, options, nil, 0)
assert.Error(err)
}
}
}
}
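// Illustrative sketch, not part of the original test helpers: a compact
// restatement of the ref resolution that the cases above assert. The helper
// name expectedOSTreeRef is hypothetical.
func expectedOSTreeRef(imgType distro.ImageType, opts ostree.ImageOptions, isPayloadType bool) string {
if isPayloadType {
// installers, raw images and AMIs embed or deploy a payload commit:
// the ImageRef option wins, otherwise the image type default is used
if opts.ImageRef != "" {
return opts.ImageRef
}
return imgType.OSTreeRef()
}
// commits and containers only reference an optional parent commit (which
// requires a URL): ParentRef wins, then ImageRef, then the default ref
if opts.ParentRef != "" {
return opts.ParentRef
}
if opts.ImageRef != "" {
return opts.ImageRef
}
return imgType.OSTreeRef()
}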

View file

@ -0,0 +1,722 @@
package fedora
import (
"errors"
"fmt"
"sort"
"strconv"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/runner"
)
const (
// package set names
// main/common os image package set name
osPkgsKey = "os"
// container package set name
containerPkgsKey = "container"
// installer package set name
installerPkgsKey = "installer"
// blueprint package set name
blueprintPkgsKey = "blueprint"
// Kernel options for ami, qcow2, openstack, vhd and vmdk types
defaultKernelOptions = "ro no_timer_check console=ttyS0,115200n8 biosdevname=0 net.ifnames=0"
)
var (
oscapProfileAllowList = []oscap.Profile{
oscap.Ospp,
oscap.PciDss,
oscap.Standard,
}
// Services
iotServices = []string{
"NetworkManager.service",
"firewalld.service",
"rngd.service",
"sshd.service",
"zezere_ignition.timer",
"zezere_ignition_banner.service",
"greenboot-grub2-set-counter",
"greenboot-grub2-set-success",
"greenboot-healthcheck",
"greenboot-rpm-ostree-grub2-check-fallback",
"greenboot-status",
"greenboot-task-runner",
"redboot-auto-reboot",
"redboot-task-runner",
"parsec",
"dbus-parsec",
}
// Image Definitions
imageInstallerImgType = imageType{
name: "image-installer",
nameAliases: []string{"fedora-image-installer"},
filename: "installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
osPkgsKey: minimalrpmPackageSet,
installerPkgsKey: imageInstallerPackageSet,
},
bootable: true,
bootISO: true,
rpmOstree: false,
image: imageInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "os", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
liveInstallerImgType = imageType{
name: "live-installer",
nameAliases: []string{},
filename: "live-installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
installerPkgsKey: liveInstallerPackageSet,
},
bootable: true,
bootISO: true,
rpmOstree: false,
image: liveInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
iotCommitImgType = imageType{
name: "iot-commit",
nameAliases: []string{"fedora-iot-commit"},
filename: "commit.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: iotCommitPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: iotServices,
},
rpmOstree: true,
image: iotCommitImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "ostree-commit", "commit-archive"},
exports: []string{"commit-archive"},
}
iotOCIImgType = imageType{
name: "iot-container",
nameAliases: []string{"fedora-iot-container"},
filename: "container.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: iotCommitPackageSet,
containerPkgsKey: func(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{}
},
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: iotServices,
},
rpmOstree: true,
bootISO: false,
image: iotContainerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "ostree-commit", "container-tree", "container"},
exports: []string{"container"},
}
iotInstallerImgType = imageType{
name: "iot-installer",
nameAliases: []string{"fedora-iot-installer"},
filename: "installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
installerPkgsKey: iotInstallerPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
EnabledServices: iotServices,
},
rpmOstree: true,
bootISO: true,
image: iotInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
iotRawImgType = imageType{
name: "iot-raw-image",
nameAliases: []string{"fedora-iot-raw-image"},
filename: "image.raw.xz",
compression: "xz",
mimeType: "application/xz",
packageSets: map[string]packageSetFunc{},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
},
defaultSize: 4 * common.GibiByte,
rpmOstree: true,
bootable: true,
image: iotRawImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"ostree-deployment", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: iotBasePartitionTables,
// Passing an empty map into the required partition sizes disables the
// default partition sizes normally set so our `basePartitionTables` can
// override them (and make them smaller, in this case).
requiredPartitionSizes: map[string]uint64{},
}
qcow2ImgType = imageType{
name: "qcow2",
filename: "disk.qcow2",
mimeType: "application/x-qemu-disk",
packageSets: map[string]packageSetFunc{
osPkgsKey: qcow2CommonPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
DefaultTarget: common.ToPtr("multi-user.target"),
EnabledServices: []string{
"cloud-init.service",
"cloud-config.service",
"cloud-final.service",
"cloud-init-local.service",
},
},
kernelOptions: defaultKernelOptions,
bootable: true,
defaultSize: 5 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "qcow2"},
exports: []string{"qcow2"},
basePartitionTables: defaultBasePartitionTables,
}
vhdImgType = imageType{
name: "vhd",
filename: "disk.vhd",
mimeType: "application/x-vhd",
packageSets: map[string]packageSetFunc{
osPkgsKey: vhdCommonPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
EnabledServices: []string{
"sshd",
},
DefaultTarget: common.ToPtr("multi-user.target"),
DisabledServices: []string{
"proc-sys-fs-binfmt_misc.mount",
"loadmodules.service",
},
},
kernelOptions: defaultKernelOptions,
bootable: true,
defaultSize: 2 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc"},
exports: []string{"vpc"},
basePartitionTables: defaultBasePartitionTables,
environment: &environment.Azure{},
}
vmdkDefaultImageConfig = &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
EnabledServices: []string{
"cloud-init.service",
"cloud-config.service",
"cloud-final.service",
"cloud-init-local.service",
},
}
vmdkImgType = imageType{
name: "vmdk",
filename: "disk.vmdk",
mimeType: "application/x-vmdk",
packageSets: map[string]packageSetFunc{
osPkgsKey: vmdkCommonPackageSet,
},
defaultImageConfig: vmdkDefaultImageConfig,
kernelOptions: defaultKernelOptions,
bootable: true,
defaultSize: 2 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vmdk"},
exports: []string{"vmdk"},
basePartitionTables: defaultBasePartitionTables,
}
ovaImgType = imageType{
name: "ova",
filename: "image.ova",
mimeType: "application/ovf",
packageSets: map[string]packageSetFunc{
osPkgsKey: vmdkCommonPackageSet,
},
defaultImageConfig: vmdkDefaultImageConfig,
kernelOptions: defaultKernelOptions,
bootable: true,
defaultSize: 2 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vmdk", "ovf", "archive"},
exports: []string{"archive"},
basePartitionTables: defaultBasePartitionTables,
}
containerImgType = imageType{
name: "container",
filename: "container.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: containerPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
NoSElinux: common.ToPtr(true),
ExcludeDocs: common.ToPtr(true),
Locale: common.ToPtr("C.UTF-8"),
Timezone: common.ToPtr("Etc/UTC"),
},
image: containerImage,
bootable: false,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "container"},
exports: []string{"container"},
}
minimalrawImgType = imageType{
name: "minimal-raw",
filename: "raw.img",
mimeType: "application/disk",
packageSets: map[string]packageSetFunc{
osPkgsKey: minimalrpmPackageSet,
},
rpmOstree: false,
kernelOptions: defaultKernelOptions,
bootable: true,
defaultSize: 2 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image"},
exports: []string{"image"},
basePartitionTables: defaultBasePartitionTables,
}
)
type distribution struct {
name string
product string
osVersion string
releaseVersion string
modulePlatformID string
ostreeRefTmpl string
isolabelTmpl string
runner runner.Runner
arches map[string]distro.Arch
defaultImageConfig *distro.ImageConfig
}
// Fedora based OS image configuration defaults
var defaultDistroImageConfig = &distro.ImageConfig{
Timezone: common.ToPtr("UTC"),
Locale: common.ToPtr("en_US"),
}
func getDistro(version int) distribution {
return distribution{
name: fmt.Sprintf("fedora-%d", version),
product: "Fedora",
osVersion: strconv.Itoa(version),
releaseVersion: strconv.Itoa(version),
modulePlatformID: fmt.Sprintf("platform:f%d", version),
ostreeRefTmpl: fmt.Sprintf("fedora/%d/%%s/iot", version),
isolabelTmpl: fmt.Sprintf("Fedora-%d-BaseOS-%%s", version),
runner: &runner.Fedora{Version: uint64(version)},
defaultImageConfig: defaultDistroImageConfig,
}
}
func (d *distribution) Name() string {
return d.name
}
func (d *distribution) Releasever() string {
return d.releaseVersion
}
func (d *distribution) ModulePlatformID() string {
return d.modulePlatformID
}
func (d *distribution) OSTreeRef() string {
return d.ostreeRefTmpl
}
func (d *distribution) ListArches() []string {
archNames := make([]string, 0, len(d.arches))
for name := range d.arches {
archNames = append(archNames, name)
}
sort.Strings(archNames)
return archNames
}
func (d *distribution) GetArch(name string) (distro.Arch, error) {
arch, exists := d.arches[name]
if !exists {
return nil, errors.New("invalid architecture: " + name)
}
return arch, nil
}
func (d *distribution) addArches(arches ...architecture) {
if d.arches == nil {
d.arches = map[string]distro.Arch{}
}
// Unlike image types, architectures are not copied, because an
// architecture definition is never used by more than a single
// distro definition.
for idx := range arches {
d.arches[arches[idx].name] = &arches[idx]
}
}
func (d *distribution) getDefaultImageConfig() *distro.ImageConfig {
return d.defaultImageConfig
}
type architecture struct {
distro *distribution
name string
imageTypes map[string]distro.ImageType
imageTypeAliases map[string]string
}
func (a *architecture) Name() string {
return a.name
}
func (a *architecture) ListImageTypes() []string {
itNames := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
itNames = append(itNames, name)
}
sort.Strings(itNames)
return itNames
}
func (a *architecture) GetImageType(name string) (distro.ImageType, error) {
t, exists := a.imageTypes[name]
if !exists {
aliasForName, exists := a.imageTypeAliases[name]
if !exists {
return nil, errors.New("invalid image type: " + name)
}
t, exists = a.imageTypes[aliasForName]
if !exists {
panic(fmt.Sprintf("image type '%s' is an alias to a non-existing image type '%s'", name, aliasForName))
}
}
return t, nil
}
func (a *architecture) addImageTypes(platform platform.Platform, imageTypes ...imageType) {
if a.imageTypes == nil {
a.imageTypes = map[string]distro.ImageType{}
}
for idx := range imageTypes {
it := imageTypes[idx]
it.arch = a
it.platform = platform
a.imageTypes[it.name] = &it
for _, alias := range it.nameAliases {
if a.imageTypeAliases == nil {
a.imageTypeAliases = map[string]string{}
}
if existingAliasFor, exists := a.imageTypeAliases[alias]; exists {
panic(fmt.Sprintf("image type alias '%s' for '%s' is already defined for another image type '%s'", alias, it.name, existingAliasFor))
}
a.imageTypeAliases[alias] = it.name
}
}
}
func (a *architecture) Distro() distro.Distro {
return a.distro
}
// New creates a new distro object, defining the supported architectures and image types
func NewF37() distro.Distro {
return newDistro(37)
}
func NewF38() distro.Distro {
return newDistro(38)
}
func NewF39() distro.Distro {
return newDistro(39)
}
func newDistro(version int) distro.Distro {
rd := getDistro(version)
// Architecture definitions
x86_64 := architecture{
name: platform.ARCH_X86_64.String(),
distro: &rd,
}
aarch64 := architecture{
name: platform.ARCH_AARCH64.String(),
distro: &rd,
}
ociImgType := qcow2ImgType
ociImgType.name = "oci"
amiImgType := qcow2ImgType
amiImgType.name = "ami"
amiImgType.filename = "image.raw"
amiImgType.mimeType = "application/octet-stream"
amiImgType.payloadPipelines = []string{"os", "image"}
amiImgType.exports = []string{"image"}
amiImgType.environment = &environment.EC2{}
openstackImgType := qcow2ImgType
openstackImgType.name = "openstack"
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "1.1",
},
},
qcow2ImgType,
ociImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
},
},
openstackImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
},
vhdImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VMDK,
},
},
vmdkImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_OVA,
},
},
ovaImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
},
amiImgType,
)
x86_64.addImageTypes(
&platform.X86{},
containerImgType,
)
x86_64.addImageTypes(
&platform.X86{
BasePlatform: platform.BasePlatform{
FirmwarePackages: []string{
"microcode_ctl", // ??
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6050-firmware",
},
},
BIOS: true,
UEFIVendor: "fedora",
},
iotOCIImgType,
iotCommitImgType,
iotInstallerImgType,
imageInstallerImgType,
liveInstallerImgType,
)
x86_64.addImageTypes(
&platform.X86{
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
BIOS: false,
UEFIVendor: "fedora",
},
iotRawImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
},
amiImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "1.1",
},
},
qcow2ImgType,
ociImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
},
},
openstackImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{},
containerImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
BasePlatform: platform.BasePlatform{
FirmwarePackages: []string{
"uboot-images-armv8", // ??
"bcm283x-firmware",
"arm-image-installer", // ??
},
},
UEFIVendor: "fedora",
},
iotCommitImgType,
iotOCIImgType,
iotInstallerImgType,
imageInstallerImgType,
liveInstallerImgType,
)
aarch64.addImageTypes(
&platform.Aarch64_IoT{
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
UEFIVendor: "fedora",
BootFiles: [][2]string{
{"/usr/lib/ostree-boot/efi/bcm2710-rpi-2-b.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2710-rpi-3-b-plus.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2710-rpi-3-b.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2710-rpi-cm3.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2710-rpi-zero-2-w.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2710-rpi-zero-2.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2711-rpi-4-b.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2711-rpi-400.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2711-rpi-cm4.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bcm2711-rpi-cm4s.dtb", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/bootcode.bin", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/config.txt", "/boot/efi/config.txt"},
{"/usr/lib/ostree-boot/efi/fixup.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup4.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup4cd.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup4db.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup4x.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup_cd.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup_db.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/fixup_x.dat", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/overlays", "/boot/efi/"},
{"/usr/share/uboot/rpi_arm64/u-boot.bin", "/boot/efi/rpi-u-boot.bin"},
{"/usr/lib/ostree-boot/efi/start.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start4.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start4cd.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start4db.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start4x.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start_cd.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start_db.elf", "/boot/efi/"},
{"/usr/lib/ostree-boot/efi/start_x.elf", "/boot/efi/"},
},
},
iotRawImgType,
)
x86_64.addImageTypes(
&platform.X86{
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
},
minimalrawImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: "fedora",
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
},
minimalrawImgType,
)
rd.addArches(x86_64, aarch64)
return &rd
}
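// Illustrative sketch, not part of the original file: how a caller resolves
// an image type from the definitions above; the expected values follow from
// the templates in getDistro (e.g. ostreeRefTmpl "fedora/%d/%%s/iot").
//
// d := NewF38()
// a, _ := d.GetArch("x86_64")
// it, _ := a.GetImageType("iot-commit") // or via its alias "fedora-iot-commit"
// _ = it.OSTreeRef()                    // "fedora/38/x86_64/iot"
// qcow, _ := a.GetImageType("qcow2")
// _ = qcow.OSTreeRef()                  // "" for non-ostree image types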

View file

@ -0,0 +1,532 @@
package fedora
import (
"fmt"
"math/rand"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/internal/users"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/rpmmd"
)
// HELPERS
func osCustomizations(
t *imageType,
osPackageSet rpmmd.PackageSet,
containers []container.SourceSpec,
c *blueprint.Customizations) manifest.OSCustomizations {
imageConfig := t.getDefaultImageConfig()
osc := manifest.OSCustomizations{}
if t.bootable || t.rpmOstree {
osc.KernelName = c.GetKernel().Name
var kernelOptions []string
if t.kernelOptions != "" {
kernelOptions = append(kernelOptions, t.kernelOptions)
}
if bpKernel := c.GetKernel(); bpKernel.Append != "" {
kernelOptions = append(kernelOptions, bpKernel.Append)
}
osc.KernelOptionsAppend = kernelOptions
}
osc.ExtraBasePackages = osPackageSet.Include
osc.ExcludeBasePackages = osPackageSet.Exclude
osc.ExtraBaseRepos = osPackageSet.Repositories
osc.Containers = containers
osc.GPGKeyFiles = imageConfig.GPGKeyFiles
if imageConfig.ExcludeDocs != nil {
osc.ExcludeDocs = *imageConfig.ExcludeDocs
}
if !t.bootISO {
// don't put users and groups in the payload of an installer
// add them via kickstart instead
osc.Groups = users.GroupsFromBP(c.GetGroups())
osc.Users = users.UsersFromBP(c.GetUsers())
}
osc.EnabledServices = imageConfig.EnabledServices
osc.DisabledServices = imageConfig.DisabledServices
if imageConfig.DefaultTarget != nil {
osc.DefaultTarget = *imageConfig.DefaultTarget
}
if fw := c.GetFirewall(); fw != nil {
options := osbuild.FirewallStageOptions{
Ports: fw.Ports,
}
if fw.Services != nil {
options.EnabledServices = fw.Services.Enabled
options.DisabledServices = fw.Services.Disabled
}
osc.Firewall = &options
}
language, keyboard := c.GetPrimaryLocale()
if language != nil {
osc.Language = *language
} else if imageConfig.Locale != nil {
osc.Language = *imageConfig.Locale
}
if keyboard != nil {
osc.Keyboard = keyboard
} else if imageConfig.Keyboard != nil {
osc.Keyboard = &imageConfig.Keyboard.Keymap
}
if hostname := c.GetHostname(); hostname != nil {
osc.Hostname = *hostname
} else {
osc.Hostname = "localhost.localdomain"
}
timezone, ntpServers := c.GetTimezoneSettings()
if timezone != nil {
osc.Timezone = *timezone
} else if imageConfig.Timezone != nil {
osc.Timezone = *imageConfig.Timezone
}
if len(ntpServers) > 0 {
for _, server := range ntpServers {
osc.NTPServers = append(osc.NTPServers, osbuild.ChronyConfigServer{Hostname: server})
}
} else if imageConfig.TimeSynchronization != nil {
osc.NTPServers = imageConfig.TimeSynchronization.Servers
}
// Relabel the tree, unless the `NoSElinux` flag is explicitly set to `true`
if imageConfig.NoSElinux == nil || !*imageConfig.NoSElinux {
osc.SElinux = "targeted"
}
if oscapConfig := c.GetOpenSCAP(); oscapConfig != nil {
if t.rpmOstree {
panic("unexpected oscap options for ostree image type")
}
var datastream = oscapConfig.DataStream
if datastream == "" {
datastream = oscap.DefaultFedoraDatastream()
}
osc.OpenSCAPConfig = osbuild.NewOscapRemediationStageOptions(
osbuild.OscapConfig{
Datastream: datastream,
ProfileID: oscapConfig.ProfileID,
},
)
}
var err error
osc.Directories, err = blueprint.DirectoryCustomizationsToFsNodeDirectories(c.GetDirectories())
if err != nil {
// In theory this should never happen, because the blueprint directory customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert directory customizations to fs node directories: %v", err))
}
osc.Files, err = blueprint.FileCustomizationsToFsNodeFiles(c.GetFiles())
if err != nil {
// In theory this should never happen, because the blueprint file customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert file customizations to fs node files: %v", err))
}
customRepos, err := c.GetRepositories()
if err != nil {
// This shouldn't happen, since the repos
// should have already been validated
panic(fmt.Sprintf("failed to get custom repos: %v", err))
}
// This function returns a map of filenames to their corresponding yum repos
// and a list of fs node files for the inline gpg keys so we can save
// them to disk. This step also swaps each inline gpg key with the path
// to its file in the os file tree
yumRepos, gpgKeyFiles, err := blueprint.RepoCustomizationsToRepoConfigAndGPGKeyFiles(customRepos)
if err != nil {
panic(fmt.Sprintf("failed to convert inline gpgkeys to fs node files: %v", err))
}
// add the gpg key files to the list of files to be added to the tree
if len(gpgKeyFiles) > 0 {
osc.Files = append(osc.Files, gpgKeyFiles...)
}
for filename, repos := range yumRepos {
osc.YUMRepos = append(osc.YUMRepos, osbuild.NewYumReposStageOptions(filename, repos))
}
osc.ShellInit = imageConfig.ShellInit
osc.Grub2Config = imageConfig.Grub2Config
osc.Sysconfig = imageConfig.Sysconfig
osc.SystemdLogind = imageConfig.SystemdLogind
osc.CloudInit = imageConfig.CloudInit
osc.Modprobe = imageConfig.Modprobe
osc.DracutConf = imageConfig.DracutConf
osc.SystemdUnit = imageConfig.SystemdUnit
osc.Authselect = imageConfig.Authselect
osc.SELinuxConfig = imageConfig.SELinuxConfig
osc.Tuned = imageConfig.Tuned
osc.Tmpfilesd = imageConfig.Tmpfilesd
osc.PamLimitsConf = imageConfig.PamLimitsConf
osc.Sysctld = imageConfig.Sysctld
osc.DNFConfig = imageConfig.DNFConfig
osc.SshdConfig = imageConfig.SshdConfig
osc.AuthConfig = imageConfig.Authconfig
osc.PwQuality = imageConfig.PwQuality
return osc
}
// IMAGES
func liveImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewLiveImage()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], containers, customizations)
img.Environment = t.environment
img.Workload = workload
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
return img, nil
}
func containerImage(workload workload.Workload,
t *imageType,
c *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewBaseContainer()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], containers, c)
img.Environment = t.environment
img.Workload = workload
img.Filename = t.Filename()
return img, nil
}
func liveInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewAnacondaLiveInstaller()
distro := t.Arch().Distro()
// If the live installer is generated for Fedora 39 or higher, we enable the web ui
// kernel options. This is temporary: the check should really live with
// anaconda and its `liveinst` script, which determines which frontend to start.
if common.VersionLessThan(distro.Releasever(), "39") {
img.AdditionalKernelOpts = []string{}
} else {
img.AdditionalKernelOpts = []string{"inst.webui"}
}
img.Platform = t.platform
img.Workload = workload
img.ExtraBasePackages = packageSets[installerPkgsKey]
d := t.arch.distro
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.OSName = "fedora"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func imageInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewAnacondaTarInstaller()
// Enable anaconda-webui for Fedora 38 and newer
distro := t.Arch().Distro()
if strings.HasPrefix(distro.Name(), "fedora") && !common.VersionLessThan(distro.Releasever(), "38") {
img.AdditionalAnacondaModules = []string{
"org.fedoraproject.Anaconda.Modules.Security",
"org.fedoraproject.Anaconda.Modules.Timezone",
"org.fedoraproject.Anaconda.Modules.Localization",
}
img.AdditionalKernelOpts = []string{"inst.webui", "inst.webui.remote"}
}
img.AdditionalAnacondaModules = append(img.AdditionalAnacondaModules, "org.fedoraproject.Anaconda.Modules.Users")
img.Platform = t.platform
img.Workload = workload
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], containers, customizations)
img.ExtraBasePackages = packageSets[installerPkgsKey]
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.SquashfsCompression = "lz4"
d := t.arch.distro
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.OSName = "fedora"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func iotCommitImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
parentCommit, commitRef := makeOSTreeParentCommit(options.OSTree, t.OSTreeRef())
img := image.NewOSTreeArchive(commitRef)
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.OSTreeParent = parentCommit
img.OSVersion = t.arch.distro.osVersion
img.Filename = t.Filename()
return img, nil
}
func iotContainerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
parentCommit, commitRef := makeOSTreeParentCommit(options.OSTree, t.OSTreeRef())
img := image.NewOSTreeContainer(commitRef)
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], containers, customizations)
img.ContainerLanguage = img.OSCustomizations.Language
img.Environment = t.environment
img.Workload = workload
img.OSTreeParent = parentCommit
img.OSVersion = t.arch.distro.osVersion
img.ExtraContainerPackages = packageSets[containerPkgsKey]
img.Filename = t.Filename()
return img, nil
}
func iotInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
d := t.arch.distro
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
img := image.NewAnacondaOSTreeInstaller(commit)
img.Platform = t.platform
img.ExtraBasePackages = packageSets[installerPkgsKey]
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.AdditionalAnacondaModules = []string{
"org.fedoraproject.Anaconda.Modules.Timezone",
"org.fedoraproject.Anaconda.Modules.Localization",
"org.fedoraproject.Anaconda.Modules.Users",
}
img.SquashfsCompression = "lz4"
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.Variant = "IoT"
img.OSName = "fedora"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func iotRawImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
img := image.NewOSTreeRawImage(commit)
// Set sysroot read-only for Fedora 37 and later
distro := t.Arch().Distro()
if strings.HasPrefix(distro.Name(), "fedora") && !common.VersionLessThan(distro.Releasever(), "37") {
img.SysrootReadOnly = true
}
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.Directories, err = blueprint.DirectoryCustomizationsToFsNodeDirectories(customizations.GetDirectories())
if err != nil {
return nil, err
}
img.Files, err = blueprint.FileCustomizationsToFsNodeFiles(customizations.GetFiles())
if err != nil {
return nil, err
}
// "rw" kernel option is required when /sysroot is mounted read-only to
// keep stateful parts of the filesystem writeable (/var/ and /etc)
img.KernelOptionsAppend = []string{"modprobe.blacklist=vc4", "rw"}
img.Keyboard = "us"
img.Locale = "C.UTF-8"
img.Platform = t.platform
img.Workload = workload
img.Remote = ostree.Remote{
Name: "fedora-iot",
URL: "https://ostree.fedoraproject.org/iot",
ContentURL: "mirrorlist=https://ostree.fedoraproject.org/iot/mirrorlist",
GPGKeyPaths: []string{"/etc/pki/rpm-gpg/"},
}
img.OSName = "fedora-iot"
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
img.Compression = t.compression
return img, nil
}
// makeOSTreeParentCommit creates an ostree SourceSpec that defines an ostree
// parent commit from the user options and the default ref for the image type.
// It also returns the ref to use for the new commit.
func makeOSTreeParentCommit(options *ostree.ImageOptions, defaultRef string) (*ostree.SourceSpec, string) {
commitRef := defaultRef
if options == nil {
// nothing to do
return nil, commitRef
}
if options.ImageRef != "" {
// user option overrides default commit ref
commitRef = options.ImageRef
}
var parentCommit *ostree.SourceSpec
if options.URL == "" {
// no parent
return nil, commitRef
}
// ostree URL specified: set source spec for parent commit
parentRef := options.ParentRef
if parentRef == "" {
// parent ref not set: use image ref
parentRef = commitRef
}
parentCommit = &ostree.SourceSpec{
URL: options.URL,
Ref: parentRef,
RHSM: options.RHSM,
}
return parentCommit, commitRef
}
// makeOSTreePayloadCommit creates an ostree SourceSpec that defines an ostree payload from the user options and the default ref for the image type.
func makeOSTreePayloadCommit(options *ostree.ImageOptions, defaultRef string) (ostree.SourceSpec, error) {
if options == nil || options.URL == "" {
// this should be caught by checkOptions() in distro, but it's good
// to guard against it here as well
return ostree.SourceSpec{}, fmt.Errorf("ostree commit URL required")
}
commitRef := defaultRef
if options.ImageRef != "" {
// user option overrides default commit ref
commitRef = options.ImageRef
}
return ostree.SourceSpec{
URL: options.URL,
Ref: commitRef,
RHSM: options.RHSM,
}, nil
}
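// Illustrative sketch, not part of the original file: what the two helpers
// above resolve to for one representative set of user options (variable names
// are hypothetical).
//
// opts := &ostree.ImageOptions{ImageRef: "my/ref", URL: "https://example.com/repo"}
//
// parent, ref := makeOSTreeParentCommit(opts, "fedora/38/x86_64/iot")
// // ref == "my/ref" (ImageRef overrides the default ref)
// // parent.Ref == "my/ref" (no ParentRef given, so the image ref is reused)
//
// payload, err := makeOSTreePayloadCommit(opts, "fedora/38/x86_64/iot")
// // err == nil, payload.Ref == "my/ref"; without opts.URL this returns an error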

View file

@ -0,0 +1,342 @@
package fedora
import (
"fmt"
"math/rand"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/internal/pathpolicy"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"golang.org/x/exp/slices"
)
type imageFunc func(workload workload.Workload, t *imageType, customizations *blueprint.Customizations, options distro.ImageOptions, packageSets map[string]rpmmd.PackageSet, containers []container.SourceSpec, rng *rand.Rand) (image.ImageKind, error)
type packageSetFunc func(t *imageType) rpmmd.PackageSet
type imageType struct {
arch *architecture
platform platform.Platform
environment environment.Environment
workload workload.Workload
name string
nameAliases []string
filename string
compression string
mimeType string
packageSets map[string]packageSetFunc
defaultImageConfig *distro.ImageConfig
kernelOptions string
defaultSize uint64
buildPipelines []string
payloadPipelines []string
exports []string
image imageFunc
// bootISO: installable ISO
bootISO bool
// rpmOstree: iot/ostree
rpmOstree bool
// bootable image
bootable bool
// base partition tables, keyed by architecture name
basePartitionTables distro.BasePartitionTableMap
requiredPartitionSizes map[string]uint64
}
func (t *imageType) Name() string {
return t.name
}
func (t *imageType) Arch() distro.Arch {
return t.arch
}
func (t *imageType) Filename() string {
return t.filename
}
func (t *imageType) MIMEType() string {
return t.mimeType
}
func (t *imageType) OSTreeRef() string {
d := t.arch.distro
if t.rpmOstree {
return fmt.Sprintf(d.ostreeRefTmpl, t.arch.Name())
}
return ""
}
func (t *imageType) Size(size uint64) uint64 {
// Microsoft Azure requires vhd images to be rounded up to the nearest MiB
if t.name == "vhd" && size%common.MebiByte != 0 {
size = (size/common.MebiByte + 1) * common.MebiByte
}
if size == 0 {
size = t.defaultSize
}
return size
}
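// Illustrative sketch, not part of the original file: how Size() resolves the
// final image size for a "vhd" image type with a 2 GiB defaultSize.
//
// t.Size(0)                      // -> 2 * common.GibiByte (falls back to defaultSize)
// t.Size(10*common.MebiByte + 1) // -> 11 * common.MebiByte (rounded up for Azure)
// t.Size(3 * common.GibiByte)    // -> unchanged, already MiB-aligned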
func (t *imageType) BuildPipelines() []string {
return t.buildPipelines
}
func (t *imageType) PayloadPipelines() []string {
return t.payloadPipelines
}
func (t *imageType) PayloadPackageSets() []string {
return []string{blueprintPkgsKey}
}
func (t *imageType) PackageSetsChains() map[string][]string {
return make(map[string][]string)
}
func (t *imageType) Exports() []string {
if len(t.exports) > 0 {
return t.exports
}
return []string{"assembler"}
}
func (t *imageType) BootMode() distro.BootMode {
if t.platform.GetUEFIVendor() != "" && t.platform.GetBIOSPlatform() != "" {
return distro.BOOT_HYBRID
} else if t.platform.GetUEFIVendor() != "" {
return distro.BOOT_UEFI
} else if t.platform.GetBIOSPlatform() != "" || t.platform.GetZiplSupport() {
return distro.BOOT_LEGACY
}
return distro.BOOT_NONE
}
func (t *imageType) getPartitionTable(
mountpoints []blueprint.FilesystemCustomization,
options distro.ImageOptions,
rng *rand.Rand,
) (*disk.PartitionTable, error) {
basePartitionTable, exists := t.basePartitionTables[t.arch.Name()]
if !exists {
return nil, fmt.Errorf("unknown arch: " + t.arch.Name())
}
imageSize := t.Size(options.Size)
lvmify := !t.rpmOstree
return disk.NewPartitionTable(&basePartitionTable, mountpoints, imageSize, lvmify, t.requiredPartitionSizes, rng)
}
func (t *imageType) getDefaultImageConfig() *distro.ImageConfig {
// ensure that image always returns non-nil default config
imageConfig := t.defaultImageConfig
if imageConfig == nil {
imageConfig = &distro.ImageConfig{}
}
return imageConfig.InheritFrom(t.arch.distro.getDefaultImageConfig())
}
func (t *imageType) PartitionType() string {
basePartitionTable, exists := t.basePartitionTables[t.arch.Name()]
if !exists {
return ""
}
return basePartitionTable.Type
}
func (t *imageType) Manifest(bp *blueprint.Blueprint,
options distro.ImageOptions,
repos []rpmmd.RepoConfig,
seed int64) (*manifest.Manifest, []string, error) {
warnings, err := t.checkOptions(bp, options)
if err != nil {
return nil, nil, err
}
// merge package sets that appear in the image type with the package sets
// of the same name from the distro and arch
staticPackageSets := make(map[string]rpmmd.PackageSet)
for name, getter := range t.packageSets {
staticPackageSets[name] = getter(t)
}
// amend with repository information and collect payload repos
payloadRepos := make([]rpmmd.RepoConfig, 0)
for _, repo := range repos {
if len(repo.PackageSets) > 0 {
// only apply the repo to the listed package sets
for _, psName := range repo.PackageSets {
if slices.Contains(t.PayloadPackageSets(), psName) {
payloadRepos = append(payloadRepos, repo)
}
ps := staticPackageSets[psName]
ps.Repositories = append(ps.Repositories, repo)
staticPackageSets[psName] = ps
}
}
}
w := t.workload
if w == nil {
cw := &workload.Custom{
BaseWorkload: workload.BaseWorkload{
Repos: payloadRepos,
},
Packages: bp.GetPackagesEx(false),
}
if services := bp.Customizations.GetServices(); services != nil {
cw.Services = services.Enabled
cw.DisabledServices = services.Disabled
}
w = cw
}
containerSources := make([]container.SourceSpec, len(bp.Containers))
for idx := range bp.Containers {
containerSources[idx] = container.SourceSpec(bp.Containers[idx])
}
source := rand.NewSource(seed)
// math/rand is good enough in this case
/* #nosec G404 */
rng := rand.New(source)
img, err := t.image(w, t, bp.Customizations, options, staticPackageSets, containerSources, rng)
if err != nil {
return nil, nil, err
}
mf := manifest.New()
mf.Distro = manifest.DISTRO_FEDORA
_, err = img.InstantiateManifest(&mf, repos, t.arch.distro.runner, rng)
if err != nil {
return nil, nil, err
}
return &mf, warnings, err
}
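// Illustrative sketch, not part of the original file: a minimal Manifest()
// call for a non-ostree image type, mirroring how the shared distro tests
// drive it (nil repos and seed 0 are fine for manifest generation).
//
// bp := &blueprint.Blueprint{}
// mf, warnings, err := imgType.Manifest(bp, distro.ImageOptions{}, nil, 0)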
// checkOptions checks the validity and compatibility of options and customizations for the image type.
// Returns ([]string, error) where []string, if non-nil, will hold any generated warnings (e.g. deprecation notices).
func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOptions) ([]string, error) {
customizations := bp.Customizations
// we do not support embedding containers on ostree-derived images, only on commits themselves
if len(bp.Containers) > 0 && t.rpmOstree && (t.name != "iot-commit" && t.name != "iot-container") {
return nil, fmt.Errorf("embedding containers is not supported for %s on %s", t.name, t.arch.distro.name)
}
ostreeURL := ""
if options.OSTree != nil {
if options.OSTree.ParentRef != "" && options.OSTree.URL == "" {
// specifying parent ref also requires URL
return nil, ostree.NewParameterComboError("ostree parent ref specified, but no URL to retrieve it")
}
ostreeURL = options.OSTree.URL
}
if t.bootISO && t.rpmOstree {
// ostree-based ISOs require a URL from which to pull a payload commit
if ostreeURL == "" {
return nil, fmt.Errorf("boot ISO image type %q requires specifying a URL from which to retrieve the OSTree commit", t.name)
}
}
if t.name == "iot-raw-image" {
allowed := []string{"User", "Group", "Directories", "Files", "Services"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return nil, fmt.Errorf("unsupported blueprint customizations found for image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
// TODO: consider additional checks, such as those in "edge-simplified-installer" in RHEL distros
}
// Boot ISOs have limited support for customizations.
// TODO: Support kernel name selection for image-installer
if t.bootISO {
if t.name == "iot-installer" || t.name == "image-installer" {
allowed := []string{"User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return nil, fmt.Errorf("unsupported blueprint customizations found for boot ISO image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
} else if t.name == "live-installer" {
allowed := []string{}
if err := customizations.CheckAllowed(allowed...); err != nil {
return nil, fmt.Errorf("unsupported blueprint customizations found for boot ISO image type %q: (allowed: None)", t.name)
}
}
}
if kernelOpts := customizations.GetKernel(); kernelOpts.Append != "" && t.rpmOstree {
return nil, fmt.Errorf("kernel boot parameter customizations are not supported for ostree types")
}
mountpoints := customizations.GetFilesystems()
if mountpoints != nil && t.rpmOstree {
return nil, fmt.Errorf("Custom mountpoints are not supported for ostree types")
}
err := blueprint.CheckMountpointsPolicy(mountpoints, pathpolicy.MountpointPolicies)
if err != nil {
return nil, err
}
if osc := customizations.GetOpenSCAP(); osc != nil {
supported := oscap.IsProfileAllowed(osc.ProfileID, oscapProfileAllowList)
if !supported {
return nil, fmt.Errorf("OpenSCAP unsupported profile: %s", osc.ProfileID)
}
if t.rpmOstree {
return nil, fmt.Errorf("OpenSCAP customizations are not supported for ostree types")
}
if osc.ProfileID == "" {
return nil, fmt.Errorf("OpenSCAP profile cannot be empty")
}
}
// Check Directory/File Customizations are valid
dc := customizations.GetDirectories()
fc := customizations.GetFiles()
err = blueprint.ValidateDirFileCustomizations(dc, fc)
if err != nil {
return nil, err
}
err = blueprint.CheckDirectoryCustomizationsPolicy(dc, pathpolicy.CustomDirectoriesPolicies)
if err != nil {
return nil, err
}
err = blueprint.CheckFileCustomizationsPolicy(fc, pathpolicy.CustomFilesPolicies)
if err != nil {
return nil, err
}
// check if repository customizations are valid
_, err = customizations.GetRepositories()
if err != nil {
return nil, err
}
return nil, nil
}

View file

@ -0,0 +1,541 @@
package fedora
// This file defines package sets that are used by more than one image type.
import (
"fmt"
"strconv"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
func qcow2CommonPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"@Fedora Cloud Server",
"chrony", // not mentioned in the kickstart, anaconda pulls it when setting the timezone
"langpacks-en",
"qemu-guest-agent",
},
Exclude: []string{
"dracut-config-rescue",
"firewalld",
"geolite2-city",
"geolite2-country",
"plymouth",
},
}
}
func vhdCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@core",
"chrony",
"langpacks-en",
"net-tools",
"ntfsprogs",
"libxcrypt-compat",
"initscripts",
"glibc-all-langpacks",
},
Exclude: []string{
"dracut-config-rescue",
"geolite2-city",
"geolite2-country",
"zram-generator-defaults",
},
}
return ps
}
func vmdkCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@Fedora Cloud Server",
"chrony",
"systemd-udev",
"langpacks-en",
"open-vm-tools",
},
Exclude: []string{
"dracut-config-rescue",
"etables",
"firewalld",
"geolite2-city",
"geolite2-country",
"gobject-introspection",
"plymouth",
"zram-generator-defaults",
"grubby-deprecated",
"extlinux-bootloader",
},
}
return ps
}
// fedora iot commit OS package set
func iotCommitPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"fedora-release-iot",
"glibc",
"glibc-minimal-langpack",
"nss-altfiles",
"sssd-client",
"libsss_sudo",
"shadow-utils",
"dracut-network",
"polkit",
"lvm2",
"cryptsetup",
"pinentry",
"keyutils",
"cracklib-dicts",
"e2fsprogs",
"xfsprogs",
"dosfstools",
"gnupg2",
"basesystem",
"python3",
"bash",
"xz",
"gzip",
"coreutils",
"which",
"curl",
"firewalld",
"iptables",
"NetworkManager",
"NetworkManager-wifi",
"NetworkManager-wwan",
"wpa_supplicant",
"iwd",
"tpm2-pkcs11",
"dnsmasq",
"traceroute",
"hostname",
"iproute",
"iputils",
"openssh-clients",
"openssh-server",
"passwd",
"policycoreutils",
"procps-ng",
"rootfiles",
"rpm",
"smartmontools-selinux",
"setup",
"shadow-utils",
"sudo",
"systemd",
"util-linux",
"vim-minimal",
"less",
"tar",
"fwupd",
"usbguard",
"greenboot",
"ignition",
"zezere-ignition",
"rsync",
"attr",
"ima-evm-utils",
"bash-completion",
"tmux",
"screen",
"policycoreutils-python-utils",
"setools-console",
"audit",
"rng-tools",
"chrony",
"bluez",
"bluez-libs",
"bluez-mesh",
"kernel-tools",
"libgpiod-utils",
"podman",
"container-selinux",
"skopeo",
"criu",
"slirp4netns",
"fuse-overlayfs",
"clevis",
"clevis-dracut",
"clevis-luks",
"clevis-pin-tpm2",
"parsec",
"dbus-parsec",
"iwl7260-firmware",
"iwlax2xx-firmware",
"greenboot-default-health-checks",
},
}
return ps
}
// INSTALLER PACKAGE SET
func installerPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"anaconda-dracut",
"curl",
"dracut-config-generic",
"dracut-network",
"hostname",
"iwl100-firmware",
"iwl1000-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"kernel",
"less",
"nfs-utils",
"openssh-clients",
"ostree",
"plymouth",
"rng-tools",
"rpcbind",
"selinux-policy-targeted",
"systemd",
"tar",
"xfsprogs",
"xz",
},
}
return ps
}
func anacondaPackageSet(t *imageType) rpmmd.PackageSet {
// common installer packages
ps := installerPackageSet(t)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"aajohan-comfortaa-fonts",
"abattis-cantarell-fonts",
"alsa-firmware",
"alsa-tools-firmware",
"anaconda",
"anaconda-dracut",
"anaconda-install-env-deps",
"anaconda-widgets",
"audit",
"bind-utils",
"bitmap-fangsongti-fonts",
"bzip2",
"cryptsetup",
"curl",
"dbus-x11",
"dejavu-sans-fonts",
"dejavu-sans-mono-fonts",
"device-mapper-persistent-data",
"dmidecode",
"dnf",
"dracut-config-generic",
"dracut-network",
"efibootmgr",
"ethtool",
"fcoe-utils",
"ftp",
"gdb-gdbserver",
"gdisk",
"glibc-all-langpacks",
"gnome-kiosk",
"google-noto-sans-cjk-ttc-fonts",
"grub2-tools",
"grub2-tools-extra",
"grub2-tools-minimal",
"grubby",
"gsettings-desktop-schemas",
"hdparm",
"hexedit",
"hostname",
"initscripts",
"ipmitool",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"jomolhari-fonts",
"kacst-farsi-fonts",
"kacst-qurn-fonts",
"kbd",
"kbd-misc",
"kdump-anaconda-addon",
"kernel",
"khmeros-base-fonts",
"less",
"libblockdev-lvm-dbus",
"libibverbs",
"libreport-plugin-bugzilla",
"libreport-plugin-reportuploader",
"librsvg2",
"linux-firmware",
"lldpad",
"lohit-assamese-fonts",
"lohit-bengali-fonts",
"lohit-devanagari-fonts",
"lohit-gujarati-fonts",
"lohit-gurmukhi-fonts",
"lohit-kannada-fonts",
"lohit-odia-fonts",
"lohit-tamil-fonts",
"lohit-telugu-fonts",
"lsof",
"madan-fonts",
"mtr",
"mt-st",
"net-tools",
"nfs-utils",
"nmap-ncat",
"nm-connection-editor",
"nss-tools",
"openssh-clients",
"openssh-server",
"oscap-anaconda-addon",
"ostree",
"pciutils",
"perl-interpreter",
"pigz",
"plymouth",
"python3-pyatspi",
"rdma-core",
"rit-meera-new-fonts",
"rng-tools",
"rpcbind",
"rpm-ostree",
"rsync",
"rsyslog",
"selinux-policy-targeted",
"sg3_utils",
"sil-abyssinica-fonts",
"sil-padauk-fonts",
"sil-scheherazade-new-fonts",
"smartmontools",
"spice-vdagent",
"strace",
"systemd",
"tar",
"thai-scalable-waree-fonts",
"tigervnc-server-minimal",
"tigervnc-server-module",
"udisks2",
"udisks2-iscsi",
"usbutils",
"vim-minimal",
"volume_key",
"wget",
"xfsdump",
"xfsprogs",
"xorg-x11-drivers",
"xorg-x11-fonts-misc",
"xorg-x11-server-Xorg",
"xorg-x11-xauth",
"metacity",
"xrdb",
"xz",
},
})
if common.VersionLessThan(t.arch.distro.osVersion, "39") {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"lklug-fonts", // orphaned, unavailable in F39
},
})
}
switch t.Arch().Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"biosdevname",
"dmidecode",
"grub2-tools-efi",
"memtest86+",
},
})
case platform.ARCH_AARCH64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"dmidecode",
},
})
default:
panic(fmt.Sprintf("unsupported arch: %s", t.Arch().Name()))
}
return ps
}
func iotInstallerPackageSet(t *imageType) rpmmd.PackageSet {
// include anaconda packages
ps := anacondaPackageSet(t)
releasever := t.Arch().Distro().Releasever()
version, err := strconv.Atoi(releasever)
if err != nil {
panic("cannot convert releasever to int: " + err.Error())
}
if version >= 38 {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"fedora-release-iot",
},
})
}
return ps
}
func liveInstallerPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@workstation-product-environment",
"@anaconda-tools",
"anaconda-install-env-deps",
"anaconda-live",
"anaconda-dracut",
"dracut-live",
"glibc-all-langpacks",
"kernel",
"kernel-modules",
"kernel-modules-extra",
"livesys-scripts",
"rng-tools",
"rdma-core",
"gnome-kiosk",
},
Exclude: []string{
"@dial-up",
"@input-methods",
"@standard",
"device-mapper-multipath",
"fcoe-utils",
"gfs2-utils",
"reiserfs-utils",
},
}
// We want to generate a preview image when rawhide is built
if !common.VersionLessThan(t.arch.distro.osVersion, "39") {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"anaconda-webui",
},
})
}
return ps
}
func imageInstallerPackageSet(t *imageType) rpmmd.PackageSet {
ps := anacondaPackageSet(t)
releasever := t.Arch().Distro().Releasever()
version, err := strconv.Atoi(releasever)
if err != nil {
panic("cannot convert releasever to int: " + err.Error())
}
// We want to generate a preview image when rawhide is built
if version >= 38 {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"anaconda-webui",
},
})
}
return ps
}
func containerPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"bash",
"coreutils",
"dnf-yum",
"dnf",
"fedora-release-container",
"fedora-repos-modular",
"glibc-minimal-langpack",
"rootfiles",
"rpm",
"sudo",
"tar",
"util-linux-core",
"vim-minimal",
},
Exclude: []string{
"crypto-policies-scripts",
"dbus-broker",
"deltarpm",
"dosfstools",
"e2fsprogs",
"elfutils-debuginfod-client",
"fuse-libs",
"gawk-all-langpacks",
"glibc-gconv-extra",
"glibc-langpack-en",
"gnupg2-smime",
"grubby",
"kernel-core",
"kernel-debug-core",
"kernel",
"langpacks-en_GB",
"langpacks-en",
"libss",
"libxcrypt-compat",
"nano",
"openssl-pkcs11",
"pinentry",
"python3-unbound",
"shared-mime-info",
"sssd-client",
"sudo-python-plugin",
"systemd",
"trousers",
"whois-nls",
"xkeyboard-config",
},
}
return ps
}
func minimalrpmPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"@core",
},
}
}
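// Illustrative usage sketch, assuming the helpers defined above: the package
// set functions in this file compose via rpmmd.PackageSet.Append, which merges
// the Include and Exclude lists of two sets; the version- and arch-specific
// additions above are layered on top of the base sets the same way. The extra
// package names below are only examples.
func examplePackageSetComposition(t *imageType) rpmmd.PackageSet {
	base := minimalrpmPackageSet(t) // just "@core"
	return base.Append(rpmmd.PackageSet{
		Include: []string{"vim-minimal"},
		Exclude: []string{"nano"},
	})
}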

View file

@ -0,0 +1,202 @@
package fedora
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
)
var defaultBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte, // 1 MiB
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 200 * common.MebiByte, // 200 MiB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte, // 500 MiB
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "ext4",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "ext4",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 200 * common.MebiByte, // 200 MiB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte, // 500 MiB
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "ext4",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "ext4",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
}
var iotBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 501 * common.MebiByte, // 501 MiB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "umask=0077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 1 * common.GibiByte, // 1 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "ext4",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 2,
},
},
{
Size: 2569 * common.MebiByte, // 2.5 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "ext4",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 1,
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "0xc1748067",
Type: "dos",
Partitions: []disk.Partition{
{
Size: 501 * common.MebiByte, // 501 MiB
Type: "06",
Bootable: true,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "umask=0077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 1 * common.GibiByte, // 1 GiB
Type: "83",
Payload: &disk.Filesystem{
Type: "ext4",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 2,
},
},
{
Size: 2569 * common.MebiByte, // 2.5 GiB
Type: "83",
Payload: &disk.Filesystem{
Type: "ext4",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 1,
},
},
},
},
}

View file

@ -0,0 +1,89 @@
package distro
import (
"fmt"
"reflect"
"github.com/osbuild/images/internal/shell"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/subscription"
)
// ImageConfig represents a (default) configuration applied to the image
type ImageConfig struct {
Timezone *string
TimeSynchronization *osbuild.ChronyStageOptions
Locale *string
Keyboard *osbuild.KeymapStageOptions
EnabledServices []string
DisabledServices []string
DefaultTarget *string
Sysconfig []*osbuild.SysconfigStageOptions
// List of files from which to import GPG keys into the RPM database
GPGKeyFiles []string
// Disable SELinux labelling
NoSElinux *bool
// Do not use. Forces auto-relabelling on first boot.
// See https://github.com/osbuild/osbuild/commit/52cb27631b587c1df177cd17625c5b473e1e85d2
SELinuxForceRelabel *bool
// Disable documentation
ExcludeDocs *bool
ShellInit []shell.InitFile
// for RHSM configuration, we need to potentially distinguish the case
// when the user wants the image to be subscribed on first boot and when not
RHSMConfig map[subscription.RHSMStatus]*osbuild.RHSMStageOptions
SystemdLogind []*osbuild.SystemdLogindStageOptions
CloudInit []*osbuild.CloudInitStageOptions
Modprobe []*osbuild.ModprobeStageOptions
DracutConf []*osbuild.DracutConfStageOptions
SystemdUnit []*osbuild.SystemdUnitStageOptions
Authselect *osbuild.AuthselectStageOptions
SELinuxConfig *osbuild.SELinuxConfigStageOptions
Tuned *osbuild.TunedStageOptions
Tmpfilesd []*osbuild.TmpfilesdStageOptions
PamLimitsConf []*osbuild.PamLimitsConfStageOptions
Sysctld []*osbuild.SysctldStageOptions
DNFConfig []*osbuild.DNFConfigStageOptions
SshdConfig *osbuild.SshdConfigStageOptions
Authconfig *osbuild.AuthconfigStageOptions
PwQuality *osbuild.PwqualityConfStageOptions
WAAgentConfig *osbuild.WAAgentConfStageOptions
Grub2Config *osbuild.GRUB2Config
DNFAutomaticConfig *osbuild.DNFAutomaticConfigStageOptions
YumConfig *osbuild.YumConfigStageOptions
YUMRepos []*osbuild.YumReposStageOptions
Firewall *osbuild.FirewallStageOptions
UdevRules *osbuild.UdevRulesStageOptions
GCPGuestAgentConfig *osbuild.GcpGuestAgentConfigOptions
}
// InheritFrom inherits unset values from the provided parent configuration and
// returns a new structure instance, which is a result of the inheritance.
func (c *ImageConfig) InheritFrom(parentConfig *ImageConfig) *ImageConfig {
finalConfig := ImageConfig(*c)
if parentConfig != nil {
// iterate over all struct fields and copy unset values from the parent
for i := 0; i < reflect.TypeOf(*c).NumField(); i++ {
fieldName := reflect.TypeOf(*c).Field(i).Name
field := reflect.ValueOf(&finalConfig).Elem().FieldByName(fieldName)
// Only container types or pointers are supported.
// The reason is that with basic types, we can't distinguish between an unset value and the zero value.
if kind := field.Kind(); kind != reflect.Ptr && kind != reflect.Slice && kind != reflect.Map {
panic(fmt.Sprintf("unsupported field type: %s (only container types or pointer are supported)",
field.Kind()))
}
if field.IsNil() {
field.Set(reflect.ValueOf(parentConfig).Elem().FieldByName(fieldName))
}
}
}
return &finalConfig
}
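// Illustrative usage sketch of InheritFrom, assuming only the types defined in
// this file: pointer, slice and map fields left nil on the receiver are copied
// from the parent; fields the receiver sets take precedence. The timezone and
// locale values here are made up.
func exampleInheritFrom() *ImageConfig {
	utc, berlin, locale := "UTC", "Europe/Berlin", "en_US.UTF-8"
	parent := &ImageConfig{Timezone: &utc, Locale: &locale}
	child := &ImageConfig{Timezone: &berlin} // Locale left unset
	merged := child.InheritFrom(parent)
	// merged.Timezone -> "Europe/Berlin" (set on the child)
	// merged.Locale   -> "en_US.UTF-8"   (inherited from the parent)
	return merged
}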


View file

@ -0,0 +1,384 @@
package rhel7
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
var azureRhuiImgType = imageType{
name: "azure-rhui",
filename: "disk.vhd.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: azureRhuiCommonPackageSet,
},
packageSetChains: map[string][]string{
osPkgsKey: {osPkgsKey, blueprintPkgsKey},
},
defaultImageConfig: azureDefaultImgConfig,
kernelOptions: "ro crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300 scsi_mod.use_blk_mq=y",
bootable: true,
defaultSize: 64 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc", "xz"},
exports: []string{"xz"},
basePartitionTables: azureRhuiBasePartitionTables,
}
var azureDefaultImgConfig = &distro.ImageConfig{
Timezone: common.ToPtr("Etc/UTC"),
Locale: common.ToPtr("en_US.UTF-8"),
GPGKeyFiles: []string{
"/etc/pki/rpm-gpg/RPM-GPG-KEY-microsoft-azure-release",
"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
},
SELinuxForceRelabel: common.ToPtr(true),
Authconfig: &osbuild.AuthconfigStageOptions{},
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel-core",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
},
},
EnabledServices: []string{
"cloud-config",
"cloud-final",
"cloud-init-local",
"cloud-init",
"firewalld",
"NetworkManager",
"sshd",
"waagent",
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
ClientAliveInterval: common.ToPtr(180),
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-amdgpu.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("amdgpu"),
},
},
{
Filename: "blacklist-intel-cstate.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("intel_cstate"),
},
},
{
Filename: "blacklist-floppy.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("floppy"),
},
},
{
Filename: "blacklist-nouveau.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("nouveau"),
osbuild.NewModprobeConfigCmdBlacklist("lbm-nouveau"),
},
},
{
Filename: "blacklist-skylake-edac.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("skx_edac"),
},
},
},
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "06_logging_override.cfg",
Config: osbuild.CloudInitConfigFile{
Output: &osbuild.CloudInitConfigOutput{
All: common.ToPtr("| tee -a /var/log/cloud-init-output.log"),
},
},
},
{
Filename: "10-azure-kvp.cfg",
Config: osbuild.CloudInitConfigFile{
Reporting: &osbuild.CloudInitConfigReporting{
Logging: &osbuild.CloudInitConfigReportingHandlers{
Type: "log",
},
Telemetry: &osbuild.CloudInitConfigReportingHandlers{
Type: "hyperv",
},
},
},
},
{
Filename: "91-azure_datasource.cfg",
Config: osbuild.CloudInitConfigFile{
Datasource: &osbuild.CloudInitConfigDatasource{
Azure: &osbuild.CloudInitConfigDatasourceAzure{
ApplyNetworkConfig: false,
},
},
DatasourceList: []string{
"Azure",
},
},
},
},
PwQuality: &osbuild.PwqualityConfStageOptions{
Config: osbuild.PwqualityConfConfig{
Minlen: common.ToPtr(6),
Minclass: common.ToPtr(3),
Dcredit: common.ToPtr(0),
Ucredit: common.ToPtr(0),
Lcredit: common.ToPtr(0),
Ocredit: common.ToPtr(0),
},
},
WAAgentConfig: &osbuild.WAAgentConfStageOptions{
Config: osbuild.WAAgentConfig{
RDFormat: common.ToPtr(false),
RDEnableSwap: common.ToPtr(false),
},
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
YumPlugins: &osbuild.RHSMStageOptionsDnfPlugins{
SubscriptionManager: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
},
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
Rhsm: &osbuild.SubManConfigRHSMSection{
ManageRepos: common.ToPtr(false),
},
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
Grub2Config: &osbuild.GRUB2Config{
TerminalInput: []string{"serial", "console"},
TerminalOutput: []string{"serial", "console"},
Serial: "serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1",
Timeout: 10,
},
UdevRules: &osbuild.UdevRulesStageOptions{
Filename: "/etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules",
Rules: osbuild.UdevRules{
osbuild.UdevRuleComment{
Comment: []string{
"Accelerated Networking on Azure exposes a new SRIOV interface to the VM.",
"This interface is transparently bonded to the synthetic interface,",
"so NetworkManager should just ignore any SRIOV interfaces.",
},
},
osbuild.NewUdevRule(
[]osbuild.UdevKV{
{K: "SUBSYSTEM", O: "==", V: "net"},
{K: "DRIVERS", O: "==", V: "hv_pci"},
{K: "ACTION", O: "==", V: "add"},
{K: "ENV", A: "NM_UNMANAGED", O: "=", V: "1"},
},
),
},
},
YumConfig: &osbuild.YumConfigStageOptions{
Config: &osbuild.YumConfigConfig{
HttpCaching: common.ToPtr("packages"),
},
Plugins: &osbuild.YumConfigPlugins{
Langpacks: &osbuild.YumConfigPluginsLangpacks{
Locales: []string{"en_US.UTF-8"},
},
},
},
DefaultTarget: common.ToPtr("multi-user.target"),
}
func azureRhuiCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@base",
"@core",
"authconfig",
"bpftool",
"bzip2",
"chrony",
"cloud-init",
"cloud-utils-growpart",
"dracut-config-generic",
"dracut-norescue",
"efibootmgr",
"firewalld",
"gdisk",
"grub2-efi-x64",
"grub2-pc",
"grub2",
"hyperv-daemons",
"kernel",
"lvm2",
"redhat-release-eula",
"redhat-support-tool",
"rh-dotnetcore11",
"rhn-setup",
"rhui-azure-rhel7",
"rsync",
"shim-x64",
"tar",
"tcpdump",
"WALinuxAgent",
"yum-rhn-plugin",
"yum-utils",
},
Exclude: []string{
"dracut-config-rescue",
"mariadb-libs",
"NetworkManager-config-server",
"postfix",
},
}
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"insights-client",
},
})
}
return ps
}
var azureRhuiBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Size: 64 * common.GibiByte,
Partitions: []disk.Partition{
{
Size: 500 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Type: disk.LVMPartitionGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 1 * common.GibiByte,
Name: "homelv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "home",
Mountpoint: "/home",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "tmplv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "tmp",
Mountpoint: "/tmp",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "usrlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "usr",
Mountpoint: "/usr",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte, // firedrill: 8 GB
Name: "varlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "var",
Mountpoint: "/var",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
}

View file

@ -0,0 +1,235 @@
package rhel7
import (
"errors"
"fmt"
"sort"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/runner"
)
const (
// package set names
// main/common os image package set name
osPkgsKey = "os"
// blueprint package set name
blueprintPkgsKey = "blueprint"
)
// RHEL-based OS image configuration defaults
var defaultDistroImageConfig = &distro.ImageConfig{
Timezone: common.ToPtr("America/New_York"),
Locale: common.ToPtr("en_US.UTF-8"),
GPGKeyFiles: []string{
"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
},
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
},
},
}
// distribution objects without the arches and image types attached (these are added in newDistro)
var distroMap = map[string]distribution{
"rhel-7": {
name: "rhel-7",
product: "Red Hat Enterprise Linux",
osVersion: "7.9",
nick: "Maipo",
releaseVersion: "7",
modulePlatformID: "platform:el7",
vendor: "redhat",
runner: &runner.RHEL{Major: uint64(7), Minor: uint64(9)},
defaultImageConfig: defaultDistroImageConfig,
},
}
// --- Distribution ---
type distribution struct {
name string
product string
nick string
osVersion string
releaseVersion string
modulePlatformID string
vendor string
runner runner.Runner
arches map[string]distro.Arch
defaultImageConfig *distro.ImageConfig
}
func (d *distribution) Name() string {
return d.name
}
func (d *distribution) Releasever() string {
return d.releaseVersion
}
func (d *distribution) ModulePlatformID() string {
return d.modulePlatformID
}
func (d *distribution) OSTreeRef() string {
return "" // not supported
}
func (d *distribution) ListArches() []string {
archNames := make([]string, 0, len(d.arches))
for name := range d.arches {
archNames = append(archNames, name)
}
sort.Strings(archNames)
return archNames
}
func (d *distribution) GetArch(name string) (distro.Arch, error) {
arch, exists := d.arches[name]
if !exists {
return nil, errors.New("invalid architecture: " + name)
}
return arch, nil
}
func (d *distribution) addArches(arches ...architecture) {
if d.arches == nil {
d.arches = map[string]distro.Arch{}
}
// Do not make copies of architectures, as opposed to image types,
// because architecture definitions are not used by more than a single
// distro definition.
for idx := range arches {
d.arches[arches[idx].name] = &arches[idx]
}
}
func (d *distribution) isRHEL() bool {
return strings.HasPrefix(d.name, "rhel")
}
func (d *distribution) getDefaultImageConfig() *distro.ImageConfig {
return d.defaultImageConfig
}
// --- Architecture ---
type architecture struct {
distro *distribution
name string
imageTypes map[string]distro.ImageType
imageTypeAliases map[string]string
}
func (a *architecture) Name() string {
return a.name
}
func (a *architecture) ListImageTypes() []string {
itNames := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
itNames = append(itNames, name)
}
sort.Strings(itNames)
return itNames
}
func (a *architecture) GetImageType(name string) (distro.ImageType, error) {
t, exists := a.imageTypes[name]
if !exists {
aliasForName, exists := a.imageTypeAliases[name]
if !exists {
return nil, errors.New("invalid image type: " + name)
}
t, exists = a.imageTypes[aliasForName]
if !exists {
panic(fmt.Sprintf("image type '%s' is an alias to a non-existing image type '%s'", name, aliasForName))
}
}
return t, nil
}
func (a *architecture) addImageTypes(platform platform.Platform, imageTypes ...imageType) {
if a.imageTypes == nil {
a.imageTypes = map[string]distro.ImageType{}
}
for idx := range imageTypes {
it := imageTypes[idx]
it.arch = a
it.platform = platform
a.imageTypes[it.name] = &it
for _, alias := range it.nameAliases {
if a.imageTypeAliases == nil {
a.imageTypeAliases = map[string]string{}
}
if existingAliasFor, exists := a.imageTypeAliases[alias]; exists {
panic(fmt.Sprintf("image type alias '%s' for '%s' is already defined for another image type '%s'", alias, it.name, existingAliasFor))
}
a.imageTypeAliases[alias] = it.name
}
}
}
func (a *architecture) Distro() distro.Distro {
return a.distro
}
// New creates a new distro object, defining the supported architectures and image types
func New() distro.Distro {
return newDistro("rhel-7")
}
func newDistro(distroName string) distro.Distro {
rd := distroMap[distroName]
// Architecture definitions
x86_64 := architecture{
name: platform.ARCH_X86_64.String(),
distro: &rd,
}
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "0.10",
},
},
qcow2ImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
},
azureRhuiImgType,
)
rd.addArches(
x86_64,
)
return &rd
}
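// Illustrative usage sketch, assuming the registry built by New/newDistro
// above: a caller looks up an architecture and then one of the image types it
// registers. For rhel-7 only x86_64 with "qcow2" and "azure-rhui" is defined.
func exampleLookupImageType() (distro.ImageType, error) {
	d := New()
	arch, err := d.GetArch(platform.ARCH_X86_64.String())
	if err != nil {
		return nil, err
	}
	return arch.GetImageType("qcow2")
}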

View file

@ -0,0 +1,250 @@
package rhel7
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/users"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
func osCustomizations(
t *imageType,
osPackageSet rpmmd.PackageSet,
options distro.ImageOptions,
containers []container.SourceSpec,
c *blueprint.Customizations,
) manifest.OSCustomizations {
imageConfig := t.getDefaultImageConfig()
osc := manifest.OSCustomizations{}
if t.bootable {
osc.KernelName = c.GetKernel().Name
var kernelOptions []string
if t.kernelOptions != "" {
kernelOptions = append(kernelOptions, t.kernelOptions)
}
if bpKernel := c.GetKernel(); bpKernel.Append != "" {
kernelOptions = append(kernelOptions, bpKernel.Append)
}
osc.KernelOptionsAppend = kernelOptions
if t.platform.GetArch() != platform.ARCH_S390X {
osc.KernelOptionsBootloader = true
}
}
osc.ExtraBasePackages = osPackageSet.Include
osc.ExcludeBasePackages = osPackageSet.Exclude
osc.ExtraBaseRepos = osPackageSet.Repositories
osc.Containers = containers
osc.GPGKeyFiles = imageConfig.GPGKeyFiles
if imageConfig.ExcludeDocs != nil {
osc.ExcludeDocs = *imageConfig.ExcludeDocs
}
// don't put users and groups in the payload of an installer
// add them via kickstart instead
osc.Groups = users.GroupsFromBP(c.GetGroups())
osc.Users = users.UsersFromBP(c.GetUsers())
osc.EnabledServices = imageConfig.EnabledServices
osc.DisabledServices = imageConfig.DisabledServices
if imageConfig.DefaultTarget != nil {
osc.DefaultTarget = *imageConfig.DefaultTarget
}
osc.Firewall = imageConfig.Firewall
if fw := c.GetFirewall(); fw != nil {
options := osbuild.FirewallStageOptions{
Ports: fw.Ports,
}
if fw.Services != nil {
options.EnabledServices = fw.Services.Enabled
options.DisabledServices = fw.Services.Disabled
}
if fw.Zones != nil {
for _, z := range fw.Zones {
options.Zones = append(options.Zones, osbuild.FirewallZone{
Name: *z.Name,
Sources: z.Sources,
})
}
}
osc.Firewall = &options
}
language, keyboard := c.GetPrimaryLocale()
if language != nil {
osc.Language = *language
} else if imageConfig.Locale != nil {
osc.Language = *imageConfig.Locale
}
if keyboard != nil {
osc.Keyboard = keyboard
} else if imageConfig.Keyboard != nil {
osc.Keyboard = &imageConfig.Keyboard.Keymap
if imageConfig.Keyboard.X11Keymap != nil {
osc.X11KeymapLayouts = imageConfig.Keyboard.X11Keymap.Layouts
}
}
if hostname := c.GetHostname(); hostname != nil {
osc.Hostname = *hostname
}
timezone, ntpServers := c.GetTimezoneSettings()
if timezone != nil {
osc.Timezone = *timezone
} else if imageConfig.Timezone != nil {
osc.Timezone = *imageConfig.Timezone
}
if len(ntpServers) > 0 {
for _, server := range ntpServers {
osc.NTPServers = append(osc.NTPServers, osbuild.ChronyConfigServer{Hostname: server})
}
} else if imageConfig.TimeSynchronization != nil {
osc.NTPServers = imageConfig.TimeSynchronization.Servers
osc.LeapSecTZ = imageConfig.TimeSynchronization.LeapsecTz
}
// Relabel the tree, unless the `NoSElinux` flag is explicitly set to `true`
if imageConfig.NoSElinux == nil || !*imageConfig.NoSElinux {
osc.SElinux = "targeted"
osc.SELinuxForceRelabel = imageConfig.SELinuxForceRelabel
}
if oscapConfig := c.GetOpenSCAP(); oscapConfig != nil {
osc.OpenSCAPConfig = osbuild.NewOscapRemediationStageOptions(
osbuild.OscapConfig{
Datastream: oscapConfig.DataStream,
ProfileID: oscapConfig.ProfileID,
},
)
}
if t.arch.distro.isRHEL() && options.Facts != nil {
osc.FactAPIType = &options.Facts.APIType
}
var err error
osc.Directories, err = blueprint.DirectoryCustomizationsToFsNodeDirectories(c.GetDirectories())
if err != nil {
// In theory this should never happen, because the blueprint directory customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert directory customizations to fs node directories: %v", err))
}
osc.Files, err = blueprint.FileCustomizationsToFsNodeFiles(c.GetFiles())
if err != nil {
// In theory this should never happen, because the blueprint file customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert file customizations to fs node files: %v", err))
}
// set the YUM repos from the image config first, so they don't
// override the custom repos appended below
osc.YUMRepos = imageConfig.YUMRepos
customRepos, err := c.GetRepositories()
if err != nil {
// This shouldn't happen, since the repos
// should have already been validated
panic(fmt.Sprintf("failed to get custom repos: %v", err))
}
// This function returns a map of filename and corresponding yum repos
// and a list of fs node files for the inline gpg keys so we can save
// them to disk. This step also swaps the inline gpg key with the path
// to the file in the os file tree
yumRepos, gpgKeyFiles, err := blueprint.RepoCustomizationsToRepoConfigAndGPGKeyFiles(customRepos)
if err != nil {
panic(fmt.Sprintf("failed to convert inline gpgkeys to fs node files: %v", err))
}
// add the gpg key files to the list of files to be added to the tree
if len(gpgKeyFiles) > 0 {
osc.Files = append(osc.Files, gpgKeyFiles...)
}
for filename, repos := range yumRepos {
osc.YUMRepos = append(osc.YUMRepos, osbuild.NewYumReposStageOptions(filename, repos))
}
osc.ShellInit = imageConfig.ShellInit
osc.Grub2Config = imageConfig.Grub2Config
osc.Sysconfig = imageConfig.Sysconfig
osc.SystemdLogind = imageConfig.SystemdLogind
osc.CloudInit = imageConfig.CloudInit
osc.Modprobe = imageConfig.Modprobe
osc.DracutConf = imageConfig.DracutConf
osc.SystemdUnit = imageConfig.SystemdUnit
osc.Authselect = imageConfig.Authselect
osc.SELinuxConfig = imageConfig.SELinuxConfig
osc.Tuned = imageConfig.Tuned
osc.Tmpfilesd = imageConfig.Tmpfilesd
osc.PamLimitsConf = imageConfig.PamLimitsConf
osc.Sysctld = imageConfig.Sysctld
osc.DNFConfig = imageConfig.DNFConfig
osc.DNFAutomaticConfig = imageConfig.DNFAutomaticConfig
osc.YUMConfig = imageConfig.YumConfig
osc.SshdConfig = imageConfig.SshdConfig
osc.AuthConfig = imageConfig.Authconfig
osc.PwQuality = imageConfig.PwQuality
osc.RHSMConfig = imageConfig.RHSMConfig
osc.Subscription = options.Subscription
osc.WAAgentConfig = imageConfig.WAAgentConfig
osc.UdevRules = imageConfig.UdevRules
osc.GCPGuestAgentConfig = imageConfig.GCPGuestAgentConfig
return osc
}
func liveImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewLiveImage()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.Compression = t.compression
img.PartTool = osbuild.PTSgdisk // all RHEL 7 images should use sgdisk
img.ForceSize = common.ToPtr(false) // RHEL 7 qemu vpc subformat does not support force_size
img.NoBLS = true // RHEL 7 grub does not support BLS
img.OSProduct = t.arch.distro.product
img.OSVersion = t.arch.distro.osVersion
img.OSNick = t.arch.distro.nick
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
return img, nil
}

View file

@ -0,0 +1,285 @@
package rhel7
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/internal/pathpolicy"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"golang.org/x/exp/slices"
)
type packageSetFunc func(t *imageType) rpmmd.PackageSet
type imageFunc func(workload workload.Workload, t *imageType, customizations *blueprint.Customizations, options distro.ImageOptions, packageSets map[string]rpmmd.PackageSet, containers []container.SourceSpec, rng *rand.Rand) (image.ImageKind, error)
type imageType struct {
arch *architecture
platform platform.Platform
environment environment.Environment
workload workload.Workload
name string
nameAliases []string
filename string
compression string // TODO: remove from image definition and make it a transport option
mimeType string
packageSets map[string]packageSetFunc
packageSetChains map[string][]string
defaultImageConfig *distro.ImageConfig
kernelOptions string
defaultSize uint64
buildPipelines []string
payloadPipelines []string
exports []string
image imageFunc
// bootable image
bootable bool
// base partition tables for the image type, keyed by architecture name
basePartitionTables distro.BasePartitionTableMap
}
func (t *imageType) Name() string {
return t.name
}
func (t *imageType) Arch() distro.Arch {
return t.arch
}
func (t *imageType) Filename() string {
return t.filename
}
func (t *imageType) MIMEType() string {
return t.mimeType
}
func (t *imageType) OSTreeRef() string {
// Not supported
return ""
}
func (t *imageType) Size(size uint64) uint64 {
if size == 0 {
size = t.defaultSize
}
return size
}
func (t *imageType) BuildPipelines() []string {
return t.buildPipelines
}
func (t *imageType) PayloadPipelines() []string {
return t.payloadPipelines
}
func (t *imageType) PayloadPackageSets() []string {
return []string{blueprintPkgsKey}
}
func (t *imageType) PackageSetsChains() map[string][]string {
return t.packageSetChains
}
func (t *imageType) Exports() []string {
if len(t.exports) == 0 {
panic(fmt.Sprintf("programming error: no exports for '%s'", t.name))
}
return t.exports
}
func (t *imageType) BootMode() distro.BootMode {
if t.platform.GetUEFIVendor() != "" && t.platform.GetBIOSPlatform() != "" {
return distro.BOOT_HYBRID
} else if t.platform.GetUEFIVendor() != "" {
return distro.BOOT_UEFI
} else if t.platform.GetBIOSPlatform() != "" || t.platform.GetZiplSupport() {
return distro.BOOT_LEGACY
}
return distro.BOOT_NONE
}
func (t *imageType) getPartitionTable(
mountpoints []blueprint.FilesystemCustomization,
options distro.ImageOptions,
rng *rand.Rand,
) (*disk.PartitionTable, error) {
archName := t.arch.Name()
basePartitionTable, exists := t.basePartitionTables[archName]
if !exists {
return nil, fmt.Errorf("unknown arch: " + archName)
}
imageSize := t.Size(options.Size)
return disk.NewPartitionTable(&basePartitionTable, mountpoints, imageSize, true, nil, rng)
}
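// Illustrative usage sketch of getPartitionTable: blueprint filesystem
// customizations are folded into the arch's base partition table. The /var
// mountpoint and its size are made up, and the MinSize field name is an
// assumption about the blueprint package, not confirmed by this file.
func examplePartitionTable(t *imageType, rng *rand.Rand) (*disk.PartitionTable, error) {
	mounts := []blueprint.FilesystemCustomization{
		{Mountpoint: "/var", MinSize: 4 * 1024 * 1024 * 1024}, // 4 GiB
	}
	return t.getPartitionTable(mounts, distro.ImageOptions{}, rng)
}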
func (t *imageType) getDefaultImageConfig() *distro.ImageConfig {
// ensure that the image type always returns a non-nil default config
imageConfig := t.defaultImageConfig
if imageConfig == nil {
imageConfig = &distro.ImageConfig{}
}
return imageConfig.InheritFrom(t.arch.distro.getDefaultImageConfig())
}
func (t *imageType) PartitionType() string {
archName := t.arch.Name()
basePartitionTable, exists := t.basePartitionTables[archName]
if !exists {
return ""
}
return basePartitionTable.Type
}
func (t *imageType) Manifest(bp *blueprint.Blueprint,
options distro.ImageOptions,
repos []rpmmd.RepoConfig,
seed int64) (*manifest.Manifest, []string, error) {
warnings, err := t.checkOptions(bp, options)
if err != nil {
return nil, nil, err
}
// merge package sets that appear in the image type with the package sets
// of the same name from the distro and arch
staticPackageSets := make(map[string]rpmmd.PackageSet)
for name, getter := range t.packageSets {
staticPackageSets[name] = getter(t)
}
// amend with repository information and collect payload repos
payloadRepos := make([]rpmmd.RepoConfig, 0)
for _, repo := range repos {
if len(repo.PackageSets) > 0 {
// only apply the repo to the listed package sets
for _, psName := range repo.PackageSets {
if slices.Contains(t.PayloadPackageSets(), psName) {
payloadRepos = append(payloadRepos, repo)
}
ps := staticPackageSets[psName]
ps.Repositories = append(ps.Repositories, repo)
staticPackageSets[psName] = ps
}
}
}
w := t.workload
if w == nil {
cw := &workload.Custom{
BaseWorkload: workload.BaseWorkload{
Repos: payloadRepos,
},
Packages: bp.GetPackagesEx(false),
}
if services := bp.Customizations.GetServices(); services != nil {
cw.Services = services.Enabled
cw.DisabledServices = services.Disabled
}
w = cw
}
containerSources := make([]container.SourceSpec, len(bp.Containers))
for idx := range bp.Containers {
containerSources[idx] = container.SourceSpec(bp.Containers[idx])
}
source := rand.NewSource(seed)
// math/rand is good enough in this case
/* #nosec G404 */
rng := rand.New(source)
if t.image == nil {
return nil, nil, nil
}
img, err := t.image(w, t, bp.Customizations, options, staticPackageSets, containerSources, rng)
if err != nil {
return nil, nil, err
}
mf := manifest.New()
mf.Distro = manifest.DISTRO_EL7
_, err = img.InstantiateManifest(&mf, repos, t.arch.distro.runner, rng)
if err != nil {
return nil, nil, err
}
return &mf, warnings, err
}
// checkOptions checks the validity and compatibility of options and customizations for the image type.
// Returns ([]string, error) where []string, if non-nil, will hold any generated warnings (e.g. deprecation notices).
func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOptions) ([]string, error) {
customizations := bp.Customizations
// holds warnings (e.g. deprecation notices)
var warnings []string
if t.workload != nil {
// For now, if an image type defines its own workload, don't allow any
// user customizations.
// Soon we will have more workflows and each will define its allowed
// set of customizations. The current set of customizations defined in
// the blueprint spec corresponds to the Custom workflow.
if customizations != nil {
return warnings, fmt.Errorf("image type %q does not support customizations", t.name)
}
}
if len(bp.Containers) > 0 {
return warnings, fmt.Errorf("embedding containers is not supported for %s on %s", t.name, t.arch.distro.name)
}
mountpoints := customizations.GetFilesystems()
err := blueprint.CheckMountpointsPolicy(mountpoints, pathpolicy.MountpointPolicies)
if err != nil {
return warnings, err
}
if osc := customizations.GetOpenSCAP(); osc != nil {
return warnings, fmt.Errorf("OpenSCAP unsupported os version: %s", t.arch.distro.osVersion)
}
// Check Directory/File Customizations are valid
dc := customizations.GetDirectories()
fc := customizations.GetFiles()
err = blueprint.ValidateDirFileCustomizations(dc, fc)
if err != nil {
return warnings, err
}
err = blueprint.CheckDirectoryCustomizationsPolicy(dc, pathpolicy.CustomDirectoriesPolicies)
if err != nil {
return warnings, err
}
err = blueprint.CheckFileCustomizationsPolicy(fc, pathpolicy.CustomFilesPolicies)
if err != nil {
return warnings, err
}
// check if repository customizations are valid
_, err = customizations.GetRepositories()
if err != nil {
return warnings, err
}
return warnings, nil
}
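// Illustrative usage sketch of Manifest, assuming an already resolved image
// type: the blueprint, options and repositories are placeholders that a real
// caller fills from user input and repo definitions; a fixed seed makes the
// result deterministic.
func exampleManifest(t *imageType, repos []rpmmd.RepoConfig) (*manifest.Manifest, error) {
	bp := &blueprint.Blueprint{}            // empty blueprint: distro defaults only
	options := distro.ImageOptions{Size: 0} // 0 means "use the image type's default size"
	mf, warnings, err := t.Manifest(bp, options, repos, 0)
	if err != nil {
		return nil, err
	}
	_ = warnings // e.g. deprecation notices produced by checkOptions
	return mf, nil
}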

View file

@ -0,0 +1,15 @@
package rhel7
import (
"github.com/osbuild/images/pkg/rpmmd"
)
// packages that are only in some (sub)-distributions
func distroSpecificPackageSet(t *imageType) rpmmd.PackageSet {
if t.arch.distro.isRHEL() {
return rpmmd.PackageSet{
Include: []string{"insights-client"},
}
}
return rpmmd.PackageSet{}
}

View file

@ -0,0 +1,65 @@
package rhel7
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
)
// ////////// Partition table //////////
var defaultBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte, // 1 MiB
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 200 * common.MebiByte, // 200 MiB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte, // 500 MiB
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
}

View file

@ -0,0 +1,126 @@
package rhel7
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
var qcow2ImgType = imageType{
name: "qcow2",
filename: "disk.qcow2",
mimeType: "application/x-qemu-disk",
kernelOptions: "console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=auto",
packageSets: map[string]packageSetFunc{
osPkgsKey: qcow2CommonPackageSet,
},
defaultImageConfig: qcow2DefaultImgConfig,
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "qcow2"},
exports: []string{"qcow2"},
basePartitionTables: defaultBasePartitionTables,
}
var qcow2DefaultImgConfig = &distro.ImageConfig{
DefaultTarget: common.ToPtr("multi-user.target"),
SELinuxForceRelabel: common.ToPtr(true),
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
NetworkScripts: &osbuild.NetworkScriptsOptions{
IfcfgFiles: map[string]osbuild.IfcfgFile{
"eth0": {
Device: "eth0",
Bootproto: osbuild.IfcfgBootprotoDHCP,
OnBoot: common.ToPtr(true),
Type: osbuild.IfcfgTypeEthernet,
UserCtl: common.ToPtr(true),
PeerDNS: common.ToPtr(true),
IPv6Init: common.ToPtr(false),
},
},
},
},
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
YumPlugins: &osbuild.RHSMStageOptionsDnfPlugins{
ProductID: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
SubscriptionManager: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
},
},
},
}
func qcow2CommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@core",
"kernel",
"nfs-utils",
"yum-utils",
"cloud-init",
//"ovirt-guest-agent-common",
"rhn-setup",
"yum-rhn-plugin",
"cloud-utils-growpart",
"dracut-config-generic",
"tar",
"tcpdump",
"rsync",
},
Exclude: []string{
"biosdevname",
"dracut-config-rescue",
"iprutils",
"NetworkManager-team",
"NetworkManager-tui",
"NetworkManager",
"plymouth",
"aic94xx-firmware",
"alsa-firmware",
"alsa-lib",
"alsa-tools-firmware",
"ivtv-firmware",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"libertas-sd8686-firmware",
"libertas-sd8787-firmware",
"libertas-usb8388-firmware",
},
}.Append(distroSpecificPackageSet(t))
return ps
}

View file

@ -0,0 +1,511 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
func amiImgTypeX86_64(rd distribution) imageType {
it := imageType{
name: "ami",
filename: "image.raw",
mimeType: "application/octet-stream",
packageSets: map[string]packageSetFunc{
osPkgsKey: ec2CommonPackageSet,
},
defaultImageConfig: defaultAMIImageConfigX86_64(rd),
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image"},
exports: []string{"image"},
basePartitionTables: ec2BasePartitionTables,
}
return it
}
func ec2ImgTypeX86_64(rd distribution) imageType {
basePartitionTables := ec2BasePartitionTables
// use legacy partition tables for RHEL 8.8 and older
if common.VersionLessThan(rd.osVersion, "8.9") {
basePartitionTables = ec2LegacyBasePartitionTables
}
it := imageType{
name: "ec2",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: rhelEc2PackageSet,
},
defaultImageConfig: defaultEc2ImageConfigX86_64(rd),
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: basePartitionTables,
}
return it
}
func ec2HaImgTypeX86_64(rd distribution) imageType {
basePartitionTables := ec2BasePartitionTables
// use legacy partition tables for RHEL 8.8 and older
if common.VersionLessThan(rd.osVersion, "8.9") {
basePartitionTables = ec2LegacyBasePartitionTables
}
it := imageType{
name: "ec2-ha",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: rhelEc2HaPackageSet,
},
defaultImageConfig: defaultEc2ImageConfigX86_64(rd),
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: basePartitionTables,
}
return it
}
func amiImgTypeAarch64(rd distribution) imageType {
it := imageType{
name: "ami",
filename: "image.raw",
mimeType: "application/octet-stream",
packageSets: map[string]packageSetFunc{
osPkgsKey: ec2CommonPackageSet,
},
defaultImageConfig: defaultAMIImageConfig(rd),
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 iommu.strict=0 crashkernel=auto",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image"},
exports: []string{"image"},
basePartitionTables: ec2BasePartitionTables,
}
return it
}
func ec2ImgTypeAarch64(rd distribution) imageType {
basePartitionTables := ec2BasePartitionTables
// use legacy partition tables for RHEL 8.8 and older
if common.VersionLessThan(rd.osVersion, "8.9") {
basePartitionTables = ec2LegacyBasePartitionTables
}
it := imageType{
name: "ec2",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: rhelEc2PackageSet,
},
defaultImageConfig: defaultEc2ImageConfig(rd),
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 iommu.strict=0 crashkernel=auto",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: basePartitionTables,
}
return it
}
func ec2SapImgTypeX86_64(rd distribution) imageType {
basePartitionTables := ec2BasePartitionTables
// use legacy partition tables for RHEL 8.8 and older
if common.VersionLessThan(rd.osVersion, "8.9") {
basePartitionTables = ec2LegacyBasePartitionTables
}
it := imageType{
name: "ec2-sap",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: rhelEc2SapPackageSet,
},
defaultImageConfig: defaultEc2SapImageConfigX86_64(rd),
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto processor.max_cstate=1 intel_idle.max_cstate=1",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: basePartitionTables,
}
return it
}
// default EC2 images config (common for all architectures)
func baseEc2ImageConfig() *distro.ImageConfig {
return &distro.ImageConfig{
Timezone: common.ToPtr("UTC"),
TimeSynchronization: &osbuild.ChronyStageOptions{
Servers: []osbuild.ChronyConfigServer{
{
Hostname: "169.254.169.123",
Prefer: common.ToPtr(true),
Iburst: common.ToPtr(true),
Minpoll: common.ToPtr(4),
Maxpoll: common.ToPtr(4),
},
},
// empty string will remove any occurrences of the option from the configuration
LeapsecTz: common.ToPtr(""),
},
Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us",
X11Keymap: &osbuild.X11KeymapOptions{
Layouts: []string{"us"},
},
},
EnabledServices: []string{
"sshd",
"NetworkManager",
"nm-cloud-setup.service",
"nm-cloud-setup.timer",
"cloud-init",
"cloud-init-local",
"cloud-config",
"cloud-final",
"reboot.target",
},
DefaultTarget: common.ToPtr("multi-user.target"),
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
NetworkScripts: &osbuild.NetworkScriptsOptions{
IfcfgFiles: map[string]osbuild.IfcfgFile{
"eth0": {
Device: "eth0",
Bootproto: osbuild.IfcfgBootprotoDHCP,
OnBoot: common.ToPtr(true),
Type: osbuild.IfcfgTypeEthernet,
UserCtl: common.ToPtr(true),
PeerDNS: common.ToPtr(true),
IPv6Init: common.ToPtr(false),
},
},
},
},
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
// RHBZ#1932802
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
Rhsm: &osbuild.SubManConfigRHSMSection{
ManageRepos: common.ToPtr(false),
},
},
},
subscription.RHSMConfigWithSubscription: {
// RHBZ#1932802
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
SystemdLogind: []*osbuild.SystemdLogindStageOptions{
{
Filename: "00-getty-fixes.conf",
Config: osbuild.SystemdLogindConfigDropin{
Login: osbuild.SystemdLogindConfigLoginSection{
NAutoVTs: common.ToPtr(0),
},
},
},
},
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "00-rhel-default-user.cfg",
Config: osbuild.CloudInitConfigFile{
SystemInfo: &osbuild.CloudInitConfigSystemInfo{
DefaultUser: &osbuild.CloudInitConfigDefaultUser{
Name: "ec2-user",
},
},
},
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-nouveau.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("nouveau"),
},
},
// COMPOSER-1807
{
Filename: "blacklist-amdgpu.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("amdgpu"),
},
},
},
DracutConf: []*osbuild.DracutConfStageOptions{
{
Filename: "sgdisk.conf",
Config: osbuild.DracutConfigFile{
Install: []string{"sgdisk"},
},
},
},
SystemdUnit: []*osbuild.SystemdUnitStageOptions{
// RHBZ#1822863
{
Unit: "nm-cloud-setup.service",
Dropin: "10-rh-enable-for-ec2.conf",
Config: osbuild.SystemdServiceUnitDropin{
Service: &osbuild.SystemdUnitServiceSection{
Environment: "NM_CLOUD_SETUP_EC2=yes",
},
},
},
},
Authselect: &osbuild.AuthselectStageOptions{
Profile: "sssd",
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
PasswordAuthentication: common.ToPtr(false),
},
},
}
}
func defaultEc2ImageConfig(rd distribution) *distro.ImageConfig {
ic := baseEc2ImageConfig()
if rd.isRHEL() && common.VersionLessThan(rd.osVersion, "9.1") {
ic = appendRHSM(ic)
// Disable RHSM redhat.repo management
rhsmConf := ic.RHSMConfig[subscription.RHSMConfigNoSubscription]
rhsmConf.SubMan.Rhsm = &osbuild.SubManConfigRHSMSection{ManageRepos: common.ToPtr(false)}
ic.RHSMConfig[subscription.RHSMConfigNoSubscription] = rhsmConf
}
// Since 8.7, the RHSM configuration is no longer applied here; it is instead handled by installing the redhat-cloud-client-configuration package.
// See COMPOSER-1804 for more information.
rhel87PlusEc2ImageConfigOverride := &distro.ImageConfig{
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{},
}
if !common.VersionLessThan(rd.osVersion, "8.7") {
ic = rhel87PlusEc2ImageConfigOverride.InheritFrom(ic)
}
return ic
}
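// Illustrative sketch of the version gate used above, assuming only the
// helpers already imported in this file: common.VersionLessThan compares
// version strings component by component, so the override applies to 8.7 and
// later while 8.6 keeps the stage-based RHSM configuration.
func exampleRHSMGate(rd distribution) bool {
	// true  -> keep the RHSM stage options (RHEL 8.6 and older)
	// false -> rely on the redhat-cloud-client-configuration package (8.7+)
	return common.VersionLessThan(rd.osVersion, "8.7")
}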
// default AMI (EC2 BYOS) images config
func defaultAMIImageConfig(rd distribution) *distro.ImageConfig {
ic := defaultEc2ImageConfig(rd)
if rd.isRHEL() {
// defaultEc2ImageConfig() adds the rhsm options only for RHEL < 9.1
// Add it unconditionally for AMI
ic = appendRHSM(ic)
}
return ic
}
func defaultEc2ImageConfigX86_64(rd distribution) *distro.ImageConfig {
ic := defaultEc2ImageConfig(rd)
return appendEC2DracutX86_64(ic)
}
func defaultAMIImageConfigX86_64(rd distribution) *distro.ImageConfig {
ic := defaultAMIImageConfig(rd).InheritFrom(defaultEc2ImageConfigX86_64(rd))
return appendEC2DracutX86_64(ic)
}
func defaultEc2SapImageConfigX86_64(rd distribution) *distro.ImageConfig {
// default EC2-SAP image config (x86_64)
return sapImageConfig(rd).InheritFrom(defaultEc2ImageConfigX86_64(rd))
}
// common package set for RHEL (BYOS/RHUI) and CentOS Stream images
func ec2CommonPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"@core",
"authselect-compat",
"chrony",
"cloud-init",
"cloud-utils-growpart",
"dhcp-client",
"dracut-config-generic",
"dracut-norescue",
"gdisk",
"grub2",
"langpacks-en",
"NetworkManager",
"NetworkManager-cloud-setup",
"redhat-release",
"redhat-release-eula",
"rsync",
"tar",
"yum-utils",
},
Exclude: []string{
"aic94xx-firmware",
"alsa-firmware",
"alsa-tools-firmware",
"biosdevname",
"firewalld",
"iprutils",
"ivtv-firmware",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"libertas-sd8686-firmware",
"libertas-sd8787-firmware",
"libertas-usb8388-firmware",
"plymouth",
// RHBZ#2075815
"qemu-guest-agent",
},
}.Append(distroSpecificPackageSet(t))
}
// common rhel ec2 RHUI image package set
func rhelEc2CommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := ec2CommonPackageSet(t)
// Include "redhat-cloud-client-configuration" on 8.7+ (COMPOSER-1804)
if !common.VersionLessThan(t.arch.distro.osVersion, "8.7") {
ps.Include = append(ps.Include, "redhat-cloud-client-configuration")
}
return ps
}
// rhel-ec2 image package set
func rhelEc2PackageSet(t *imageType) rpmmd.PackageSet {
ec2PackageSet := rhelEc2CommonPackageSet(t)
ec2PackageSet.Include = append(ec2PackageSet.Include, "rh-amazon-rhui-client")
ec2PackageSet.Exclude = append(ec2PackageSet.Exclude, "alsa-lib")
return ec2PackageSet
}
// rhel-ha-ec2 image package set
func rhelEc2HaPackageSet(t *imageType) rpmmd.PackageSet {
ec2HaPackageSet := rhelEc2CommonPackageSet(t)
ec2HaPackageSet.Include = append(ec2HaPackageSet.Include,
"fence-agents-all",
"pacemaker",
"pcs",
"rh-amazon-rhui-client-ha",
)
ec2HaPackageSet.Exclude = append(ec2HaPackageSet.Exclude, "alsa-lib")
return ec2HaPackageSet
}
// rhel-sap-ec2 image package set
// Includes the common ec2 package set, the common SAP packages, and
// the amazon rhui sap package
func rhelEc2SapPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"rh-amazon-rhui-client-sap-bundle-e4s",
},
}.Append(rhelEc2CommonPackageSet(t)).Append(SapPackageSet(t))
}
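// A hypothetical illustration (not part of the vendored code) of how the
// layered package sets above are composed. It assumes rpmmd.PackageSet.Append
// returns a new set whose Include and Exclude lists are the concatenation of
// both operands, leaving any duplicates to the depsolver.
func packageSetAppendSketch() rpmmd.PackageSet {
	base := rpmmd.PackageSet{
		Include: []string{"@core", "chrony"},
		Exclude: []string{"plymouth"},
	}
	sap := rpmmd.PackageSet{
		Include: []string{"rh-amazon-rhui-client-sap-bundle-e4s"},
	}
	// Include: ["@core", "chrony", "rh-amazon-rhui-client-sap-bundle-e4s"]
	// Exclude: ["plymouth"]
	return base.Append(sap)
}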
// Add RHSM config options to ImageConfig.
// Used for RHEL distros.
func appendRHSM(ic *distro.ImageConfig) *distro.ImageConfig {
rhsm := &distro.ImageConfig{
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
// RHBZ#1932802
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// Don't disable RHSM redhat.repo management on the AMI
// image, which is BYOS and does not use RHUI for content.
// Otherwise, subscribing the system manually after booting
// it would result in an empty redhat.repo. Without RHUI, such
// a system would have no way to get Red Hat content except by
// enabling the repo management manually, which would be
// very confusing.
},
},
subscription.RHSMConfigWithSubscription: {
// RHBZ#1932802
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
return rhsm.InheritFrom(ic)
}
func appendEC2DracutX86_64(ic *distro.ImageConfig) *distro.ImageConfig {
ic.DracutConf = append(ic.DracutConf,
&osbuild.DracutConfStageOptions{
Filename: "ec2.conf",
Config: osbuild.DracutConfigFile{
AddDrivers: []string{
"nvme",
"xen-blkfront",
},
},
})
return ic
}

View file

@ -0,0 +1,73 @@
package rhel8
import (
"errors"
"fmt"
"sort"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
)
type architecture struct {
distro *distribution
name string
imageTypes map[string]distro.ImageType
imageTypeAliases map[string]string
}
func (a *architecture) Name() string {
return a.name
}
func (a *architecture) ListImageTypes() []string {
itNames := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
itNames = append(itNames, name)
}
sort.Strings(itNames)
return itNames
}
func (a *architecture) GetImageType(name string) (distro.ImageType, error) {
t, exists := a.imageTypes[name]
if !exists {
aliasForName, exists := a.imageTypeAliases[name]
if !exists {
return nil, errors.New("invalid image type: " + name)
}
t, exists = a.imageTypes[aliasForName]
if !exists {
panic(fmt.Sprintf("image type '%s' is an alias to a non-existing image type '%s'", name, aliasForName))
}
}
return t, nil
}
func (a *architecture) addImageTypes(platform platform.Platform, imageTypes ...imageType) {
if a.imageTypes == nil {
a.imageTypes = map[string]distro.ImageType{}
}
for idx := range imageTypes {
it := imageTypes[idx]
if _, e := a.imageTypes[it.name]; e {
panic("already added: " + it.name)
}
it.arch = a
it.platform = platform
a.imageTypes[it.name] = &it
for _, alias := range it.nameAliases {
if a.imageTypeAliases == nil {
a.imageTypeAliases = map[string]string{}
}
if existingAliasFor, exists := a.imageTypeAliases[alias]; exists {
panic(fmt.Sprintf("image type alias '%s' for '%s' is already defined for another image type '%s'", alias, it.name, existingAliasFor))
}
a.imageTypeAliases[alias] = it.name
}
}
}
func (a *architecture) Distro() distro.Distro {
return a.distro
}
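// A hypothetical usage example (not part of the vendored code) showing how
// image type aliases registered via addImageTypes are resolved. New(),
// GetArch() and the "rhel-edge-commit" alias are defined elsewhere in this
// package.
func aliasLookupSketch() (distro.ImageType, error) {
	d := New() // rhel-8, current GA minor
	arch, err := d.GetArch(platform.ARCH_X86_64.String())
	if err != nil {
		return nil, err
	}
	// "rhel-edge-commit" is registered as an alias of "edge-commit", so both
	// names resolve to the same image type.
	return arch.GetImageType("rhel-edge-commit")
}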

View file

@ -0,0 +1,722 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/shell"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
const defaultAzureKernelOptions = "ro crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"
func azureRhuiImgType() imageType {
return imageType{
name: "azure-rhui",
filename: "disk.vhd.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: azureRhuiPackageSet,
},
defaultImageConfig: defaultAzureRhuiImageConfig.InheritFrom(defaultVhdImageConfig()),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 64 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc", "xz"},
exports: []string{"xz"},
basePartitionTables: azureRhuiBasePartitionTables,
}
}
func azureSapRhuiImgType(rd distribution) imageType {
return imageType{
name: "azure-sap-rhui",
filename: "disk.vhd.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: azureSapPackageSet,
},
defaultImageConfig: defaultAzureRhuiImageConfig.InheritFrom(sapAzureImageConfig(rd)),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 64 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc", "xz"},
exports: []string{"xz"},
basePartitionTables: azureRhuiBasePartitionTables,
}
}
func azureByosImgType() imageType {
return imageType{
name: "vhd",
filename: "disk.vhd",
mimeType: "application/x-vhd",
packageSets: map[string]packageSetFunc{
osPkgsKey: azurePackageSet,
},
defaultImageConfig: defaultAzureByosImageConfig.InheritFrom(defaultVhdImageConfig()),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc"},
exports: []string{"vpc"},
basePartitionTables: defaultBasePartitionTables,
}
}
// Azure non-RHEL image type
func azureImgType() imageType {
return imageType{
name: "vhd",
filename: "disk.vhd",
mimeType: "application/x-vhd",
packageSets: map[string]packageSetFunc{
osPkgsKey: azurePackageSet,
},
defaultImageConfig: defaultVhdImageConfig(),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc"},
exports: []string{"vpc"},
basePartitionTables: defaultBasePartitionTables,
}
}
func azureEap7RhuiImgType() imageType {
return imageType{
name: "azure-eap7-rhui",
workload: eapWorkload(),
filename: "disk.vhd.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: azureEapPackageSet,
},
defaultImageConfig: defaultAzureEapImageConfig.InheritFrom(defaultAzureRhuiImageConfig.InheritFrom(defaultAzureImageConfig)),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 64 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc", "xz"},
exports: []string{"xz"},
basePartitionTables: azureRhuiBasePartitionTables,
}
}
// PACKAGE SETS
// Common Azure image package set
func azureCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@Server",
"NetworkManager",
"NetworkManager-cloud-setup",
"WALinuxAgent",
"bzip2",
"cloud-init",
"cloud-utils-growpart",
"cryptsetup-reencrypt",
"dracut-config-generic",
"dracut-norescue",
"efibootmgr",
"gdisk",
"hyperv-daemons",
"kernel",
"kernel-core",
"kernel-modules",
"langpacks-en",
"lvm2",
"nvme-cli",
"patch",
"rng-tools",
"selinux-policy-targeted",
"uuid",
"yum-utils",
},
Exclude: []string{
"NetworkManager-config-server",
"aic94xx-firmware",
"alsa-firmware",
"alsa-sof-firmware",
"alsa-tools-firmware",
"biosdevname",
"bolt",
"buildah",
"cockpit-podman",
"containernetworking-plugins",
"dnf-plugin-spacewalk",
"dracut-config-rescue",
"glibc-all-langpacks",
"iprutils",
"ivtv-firmware",
"iwl100-firmware",
"iwl1000-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"libertas-sd8686-firmware",
"libertas-sd8787-firmware",
"libertas-usb8388-firmware",
"plymouth",
"podman",
"python3-dnf-plugin-spacewalk",
"python3-hwdata",
"python3-rhnlib",
"rhn-check",
"rhn-client-tools",
"rhn-setup",
"rhnlib",
"rhnsd",
"usb_modeswitch",
},
}.Append(distroSpecificPackageSet(t))
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"insights-client",
"rhc",
},
})
}
return ps
}
// Azure BYOS image package set
func azurePackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"firewalld",
},
Exclude: []string{
"alsa-lib",
},
}.Append(azureCommonPackageSet(t))
}
// Azure RHUI image package set
func azureRhuiPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"firewalld",
"rhui-azure-rhel8",
},
Exclude: []string{
"alsa-lib",
},
}.Append(azureCommonPackageSet(t))
}
// Azure SAP image package set
// Includes the common azure package set, the common SAP packages, and
// the azure rhui sap package.
func azureSapPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"firewalld",
"rhui-azure-rhel8-sap-ha",
},
}.Append(azureCommonPackageSet(t)).Append(SapPackageSet(t))
}
func azureEapPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"rhui-azure-rhel8",
},
Exclude: []string{
"firewalld",
},
}.Append(azureCommonPackageSet(t))
}
// PARTITION TABLES
var azureRhuiBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Size: 64 * common.GibiByte,
Partitions: []disk.Partition{
{
Size: 500 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Type: disk.LVMPartitionGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 1 * common.GibiByte,
Name: "homelv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "home",
Mountpoint: "/home",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "tmplv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "tmp",
Mountpoint: "/tmp",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "usrlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "usr",
Mountpoint: "/usr",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "varlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "var",
Mountpoint: "/var",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Size: 64 * common.GibiByte,
Partitions: []disk.Partition{
{
Size: 500 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Type: disk.LVMPartitionGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 1 * common.GibiByte,
Name: "homelv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "home",
Mountpoint: "/home",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "tmplv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "tmp",
Mountpoint: "/tmp",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "usrlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "usr",
Mountpoint: "/usr",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "varlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "var",
Mountpoint: "/var",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
}
var defaultAzureImageConfig = &distro.ImageConfig{
Timezone: common.ToPtr("Etc/UTC"),
Locale: common.ToPtr("en_US.UTF-8"),
Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us",
X11Keymap: &osbuild.X11KeymapOptions{
Layouts: []string{"us"},
},
},
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel-core",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
},
},
EnabledServices: []string{
"nm-cloud-setup.service",
"nm-cloud-setup.timer",
"sshd",
"systemd-resolved",
"waagent",
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
ClientAliveInterval: common.ToPtr(180),
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-amdgpu.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("amdgpu"),
},
},
{
Filename: "blacklist-intel-cstate.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("intel_cstate"),
},
},
{
Filename: "blacklist-floppy.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("floppy"),
},
},
{
Filename: "blacklist-nouveau.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("nouveau"),
osbuild.NewModprobeConfigCmdBlacklist("lbm-nouveau"),
},
},
{
Filename: "blacklist-skylake-edac.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("skx_edac"),
},
},
},
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "10-azure-kvp.cfg",
Config: osbuild.CloudInitConfigFile{
Reporting: &osbuild.CloudInitConfigReporting{
Logging: &osbuild.CloudInitConfigReportingHandlers{
Type: "log",
},
Telemetry: &osbuild.CloudInitConfigReportingHandlers{
Type: "hyperv",
},
},
},
},
{
Filename: "91-azure_datasource.cfg",
Config: osbuild.CloudInitConfigFile{
Datasource: &osbuild.CloudInitConfigDatasource{
Azure: &osbuild.CloudInitConfigDatasourceAzure{
ApplyNetworkConfig: false,
},
},
DatasourceList: []string{
"Azure",
},
},
},
},
PwQuality: &osbuild.PwqualityConfStageOptions{
Config: osbuild.PwqualityConfConfig{
Minlen: common.ToPtr(6),
Minclass: common.ToPtr(3),
Dcredit: common.ToPtr(0),
Ucredit: common.ToPtr(0),
Lcredit: common.ToPtr(0),
Ocredit: common.ToPtr(0),
},
},
WAAgentConfig: &osbuild.WAAgentConfStageOptions{
Config: osbuild.WAAgentConfig{
RDFormat: common.ToPtr(false),
RDEnableSwap: common.ToPtr(false),
},
},
Grub2Config: &osbuild.GRUB2Config{
TerminalInput: []string{"serial", "console"},
TerminalOutput: []string{"serial", "console"},
Serial: "serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1",
Timeout: 10,
},
UdevRules: &osbuild.UdevRulesStageOptions{
Filename: "/etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules",
Rules: osbuild.UdevRules{
osbuild.UdevRuleComment{
Comment: []string{
"Accelerated Networking on Azure exposes a new SRIOV interface to the VM.",
"This interface is transparently bonded to the synthetic interface,",
"so NetworkManager should just ignore any SRIOV interfaces.",
},
},
osbuild.NewUdevRule(
[]osbuild.UdevKV{
{K: "SUBSYSTEM", O: "==", V: "net"},
{K: "DRIVERS", O: "==", V: "hv_pci"},
{K: "ACTION", O: "==", V: "add"},
{K: "ENV", A: "NM_UNMANAGED", O: "=", V: "1"},
},
),
},
},
SystemdUnit: []*osbuild.SystemdUnitStageOptions{
{
Unit: "nm-cloud-setup.service",
Dropin: "10-rh-enable-for-azure.conf",
Config: osbuild.SystemdServiceUnitDropin{
Service: &osbuild.SystemdUnitServiceSection{
Environment: "NM_CLOUD_SETUP_AZURE=yes",
},
},
},
},
DefaultTarget: common.ToPtr("multi-user.target"),
}
// Diff of the default image config compared to `defaultAzureImageConfig`
var defaultAzureByosImageConfig = &distro.ImageConfig{
GPGKeyFiles: []string{
"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// Don't disable RHSM redhat.repo management on the Azure BYOS
// image, which does not use RHUI for content.
// Otherwise, subscribing the system manually after booting
// it would result in an empty redhat.repo. Without RHUI, such
// a system would have no way to get Red Hat content except by
// enabling the repo management manually, which would be
// very confusing.
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
// Diff of the default image config compared to `defaultAzureImageConfig`
var defaultAzureRhuiImageConfig = &distro.ImageConfig{
GPGKeyFiles: []string{
"/etc/pki/rpm-gpg/RPM-GPG-KEY-microsoft-azure-release",
"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
DnfPlugins: &osbuild.RHSMStageOptionsDnfPlugins{
SubscriptionManager: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
},
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
Rhsm: &osbuild.SubManConfigRHSMSection{
ManageRepos: common.ToPtr(false),
},
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
const wildflyPath = "/opt/rh/eap7/root/usr/share/wildfly"
var defaultAzureEapImageConfig = &distro.ImageConfig{
// shell env vars for EAP
ShellInit: []shell.InitFile{
{
Filename: "eap_env.sh",
Variables: []shell.EnvironmentVariable{
{
Key: "EAP_HOME",
Value: wildflyPath,
},
{
Key: "JBOSS_HOME",
Value: wildflyPath,
},
},
},
},
}
func defaultVhdImageConfig() *distro.ImageConfig {
imageConfig := &distro.ImageConfig{
EnabledServices: append(defaultAzureImageConfig.EnabledServices, "firewalld"),
}
return imageConfig.InheritFrom(defaultAzureImageConfig)
}
func sapAzureImageConfig(rd distribution) *distro.ImageConfig {
return sapImageConfig(rd).InheritFrom(defaultVhdImageConfig())
}
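// A minimal sketch (not part of the vendored code) of how the Azure configs
// above are stacked for the BYOS "vhd" image type, assuming InheritFrom keeps
// the receiver's set fields and fills the rest from its argument.
func azureLayeringSketch() *distro.ImageConfig {
	// defaultVhdImageConfig() is defaultAzureImageConfig plus "firewalld" in
	// EnabledServices; the BYOS delta then adds GPG keys and RHSM options.
	cfg := defaultAzureByosImageConfig.InheritFrom(defaultVhdImageConfig())
	// Timezone, Keyboard, CloudInit, etc. come from defaultAzureImageConfig;
	// GPGKeyFiles and RHSMConfig come from the BYOS delta.
	return cfg
}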

View file

@ -0,0 +1,302 @@
package rhel8
import (
"fmt"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
func imageInstaller() imageType {
return imageType{
name: "image-installer",
filename: "installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
osPkgsKey: bareMetalPackageSet,
installerPkgsKey: anacondaPackageSet,
},
rpmOstree: false,
bootISO: true,
bootable: true,
image: imageInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "os", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
}
func tarImgType() imageType {
return imageType{
name: "tar",
filename: "root.tar.xz",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: func(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{"policycoreutils", "selinux-policy-targeted"},
Exclude: []string{"rng-tools"},
}
},
},
image: tarImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "archive"},
exports: []string{"archive"},
}
}
func bareMetalPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@core",
"authselect-compat",
"chrony",
"cockpit-system",
"cockpit-ws",
"dhcp-client",
"dnf",
"dnf-utils",
"dosfstools",
"dracut-norescue",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"lvm2",
"net-tools",
"NetworkManager",
"nfs-utils",
"oddjob",
"oddjob-mkhomedir",
"policycoreutils",
"psmisc",
"python3-jsonschema",
"qemu-guest-agent",
"redhat-release",
"redhat-release-eula",
"rsync",
"selinux-policy-targeted",
"tar",
"tcpdump",
"yum",
},
Exclude: nil,
}.Append(distroSpecificPackageSet(t))
// Ensure to not pull in subscription-manager on non-RHEL distro
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"subscription-manager-cockpit",
},
})
}
return ps
}
func installerPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"anaconda-dracut",
"curl",
"dracut-config-generic",
"dracut-network",
"hostname",
"iwl100-firmware",
"iwl1000-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"kernel",
"less",
"nfs-utils",
"openssh-clients",
"ostree",
"plymouth",
"prefixdevname",
"rng-tools",
"rpcbind",
"selinux-policy-targeted",
"systemd",
"tar",
"xfsprogs",
"xz",
},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"biosdevname",
},
})
}
return ps
}
func anacondaPackageSet(t *imageType) rpmmd.PackageSet {
// common installer packages
ps := installerPackageSet(t)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"aajohan-comfortaa-fonts",
"abattis-cantarell-fonts",
"alsa-firmware",
"alsa-tools-firmware",
"anaconda",
"anaconda-install-env-deps",
"anaconda-widgets",
"audit",
"bind-utils",
"bitmap-fangsongti-fonts",
"bzip2",
"cryptsetup",
"dbus-x11",
"dejavu-sans-fonts",
"dejavu-sans-mono-fonts",
"device-mapper-persistent-data",
"dnf",
"dump",
"ethtool",
"fcoe-utils",
"ftp",
"gdb-gdbserver",
"gdisk",
"gfs2-utils",
"glibc-all-langpacks",
"google-noto-sans-cjk-ttc-fonts",
"gsettings-desktop-schemas",
"hdparm",
"hexedit",
"initscripts",
"ipmitool",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"jomolhari-fonts",
"kacst-farsi-fonts",
"kacst-qurn-fonts",
"kbd",
"kbd-misc",
"kdump-anaconda-addon",
"khmeros-base-fonts",
"libblockdev-lvm-dbus",
"libertas-sd8686-firmware",
"libertas-sd8787-firmware",
"libertas-usb8388-firmware",
"libertas-usb8388-olpc-firmware",
"libibverbs",
"libreport-plugin-bugzilla",
"libreport-plugin-reportuploader",
"libreport-rhel-anaconda-bugzilla",
"librsvg2",
"linux-firmware",
"lklug-fonts",
"lldpad",
"lohit-assamese-fonts",
"lohit-bengali-fonts",
"lohit-devanagari-fonts",
"lohit-gujarati-fonts",
"lohit-gurmukhi-fonts",
"lohit-kannada-fonts",
"lohit-odia-fonts",
"lohit-tamil-fonts",
"lohit-telugu-fonts",
"lsof",
"madan-fonts",
"metacity",
"mtr",
"mt-st",
"net-tools",
"nmap-ncat",
"nm-connection-editor",
"nss-tools",
"openssh-server",
"oscap-anaconda-addon",
"pciutils",
"perl-interpreter",
"pigz",
"python3-pyatspi",
"rdma-core",
"redhat-release-eula",
"rpm-ostree",
"rsync",
"rsyslog",
"sg3_utils",
"sil-abyssinica-fonts",
"sil-padauk-fonts",
"sil-scheherazade-fonts",
"smartmontools",
"smc-meera-fonts",
"spice-vdagent",
"strace",
"system-storage-manager",
"thai-scalable-waree-fonts",
"tigervnc-server-minimal",
"tigervnc-server-module",
"udisks2",
"udisks2-iscsi",
"usbutils",
"vim-minimal",
"volume_key",
"wget",
"xfsdump",
"xorg-x11-drivers",
"xorg-x11-fonts-misc",
"xorg-x11-server-utils",
"xorg-x11-server-Xorg",
"xorg-x11-xauth",
},
})
ps = ps.Append(anacondaBootPackageSet(t))
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"biosdevname",
"dmidecode",
"memtest86+",
},
})
case platform.ARCH_AARCH64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"dmidecode",
},
})
default:
panic(fmt.Sprintf("unsupported arch: %s", t.arch.Name()))
}
return ps
}

View file

@ -0,0 +1,506 @@
package rhel8
import (
"errors"
"fmt"
"sort"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/runner"
)
var (
// rhel8: all of the following OpenSCAP profiles are allowed
oscapProfileAllowList = []oscap.Profile{
oscap.AnssiBp28Enhanced,
oscap.AnssiBp28High,
oscap.AnssiBp28Intermediary,
oscap.AnssiBp28Minimal,
oscap.Cis,
oscap.CisServerL1,
oscap.CisWorkstationL1,
oscap.CisWorkstationL2,
oscap.Cui,
oscap.E8,
oscap.Hippa,
oscap.IsmO,
oscap.Ospp,
oscap.PciDss,
oscap.Stig,
oscap.StigGui,
}
)
type distribution struct {
name string
product string
osVersion string
releaseVersion string
modulePlatformID string
vendor string
ostreeRefTmpl string
isolabelTmpl string
runner runner.Runner
arches map[string]distro.Arch
defaultImageConfig *distro.ImageConfig
}
// RHEL-based OS image configuration defaults
var defaultDistroImageConfig = &distro.ImageConfig{
Timezone: common.ToPtr("America/New_York"),
Locale: common.ToPtr("en_US.UTF-8"),
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
},
},
}
func (d *distribution) Name() string {
return d.name
}
func (d *distribution) Releasever() string {
return d.releaseVersion
}
func (d *distribution) ModulePlatformID() string {
return d.modulePlatformID
}
func (d *distribution) OSTreeRef() string {
return d.ostreeRefTmpl
}
func (d *distribution) ListArches() []string {
archNames := make([]string, 0, len(d.arches))
for name := range d.arches {
archNames = append(archNames, name)
}
sort.Strings(archNames)
return archNames
}
func (d *distribution) GetArch(name string) (distro.Arch, error) {
arch, exists := d.arches[name]
if !exists {
return nil, errors.New("invalid architecture: " + name)
}
return arch, nil
}
func (d *distribution) addArches(arches ...architecture) {
if d.arches == nil {
d.arches = map[string]distro.Arch{}
}
// Unlike image types, architectures are not copied here, because an
// architecture definition is never shared between distro definitions.
for idx := range arches {
d.arches[arches[idx].name] = &arches[idx]
}
}
func (d *distribution) isRHEL() bool {
return strings.HasPrefix(d.name, "rhel")
}
func (d *distribution) getDefaultImageConfig() *distro.ImageConfig {
return d.defaultImageConfig
}
// New creates a new distro object, defining the supported architectures and image types
func New() distro.Distro {
// default minor: create the current GA minor version and rename it to the unversioned distro name
d := newDistro("rhel", 7)
d.name = "rhel-8"
return d
}
func NewRHEL84() distro.Distro {
return newDistro("rhel", 4)
}
func NewRHEL85() distro.Distro {
return newDistro("rhel", 5)
}
func NewRHEL86() distro.Distro {
return newDistro("rhel", 6)
}
func NewRHEL87() distro.Distro {
return newDistro("rhel", 7)
}
func NewRHEL88() distro.Distro {
return newDistro("rhel", 8)
}
func NewRHEL89() distro.Distro {
return newDistro("rhel", 9)
}
func NewCentos() distro.Distro {
return newDistro("centos", 0)
}
func newDistro(name string, minor int) *distribution {
var rd distribution
switch name {
case "rhel":
rd = distribution{
name: fmt.Sprintf("rhel-8%d", minor),
product: "Red Hat Enterprise Linux",
osVersion: fmt.Sprintf("8.%d", minor),
releaseVersion: "8",
modulePlatformID: "platform:el8",
vendor: "redhat",
ostreeRefTmpl: "rhel/8/%s/edge",
isolabelTmpl: fmt.Sprintf("RHEL-8-%d-0-BaseOS-%%s", minor),
runner: &runner.RHEL{Major: uint64(8), Minor: uint64(minor)},
defaultImageConfig: defaultDistroImageConfig,
}
case "centos":
rd = distribution{
name: "centos-8",
product: "CentOS Stream",
osVersion: "8-stream",
releaseVersion: "8",
modulePlatformID: "platform:el8",
vendor: "centos",
ostreeRefTmpl: "centos/8/%s/edge",
isolabelTmpl: "CentOS-Stream-8-%s-dvd",
runner: &runner.CentOS{Version: uint64(8)},
defaultImageConfig: defaultDistroImageConfig,
}
default:
panic(fmt.Sprintf("unknown distro name: %s", name))
}
// Architecture definitions
x86_64 := architecture{
name: platform.ARCH_X86_64.String(),
distro: &rd,
}
aarch64 := architecture{
name: platform.ARCH_AARCH64.String(),
distro: &rd,
}
ppc64le := architecture{
distro: &rd,
name: platform.ARCH_PPC64LE.String(),
}
s390x := architecture{
distro: &rd,
name: platform.ARCH_S390X.String(),
}
ociImgType := qcow2ImgType(rd)
ociImgType.name = "oci"
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "0.10",
},
},
qcow2ImgType(rd),
ociImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
},
},
openstackImgType(),
)
ec2X86Platform := &platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
}
x86_64.addImageTypes(
ec2X86Platform,
amiImgTypeX86_64(rd),
)
bareMetalX86Platform := &platform.X86{
BasePlatform: platform.BasePlatform{
FirmwarePackages: []string{
"microcode_ctl", // ??
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6050-firmware",
},
},
BIOS: true,
UEFIVendor: rd.vendor,
}
x86_64.addImageTypes(
bareMetalX86Platform,
edgeOCIImgType(rd),
edgeCommitImgType(rd),
edgeInstallerImgType(rd),
imageInstaller(),
)
gceX86Platform := &platform.X86{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_GCE,
},
}
x86_64.addImageTypes(
gceX86Platform,
gceImgType(rd),
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VMDK,
},
},
vmdkImgType(),
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_OVA,
},
},
ovaImgType(),
)
x86_64.addImageTypes(
&platform.X86{},
tarImgType(),
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "0.10",
},
},
qcow2ImgType(rd),
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
},
},
openstackImgType(),
)
aarch64.addImageTypes(
&platform.Aarch64{},
tarImgType(),
)
bareMetalAarch64Platform := &platform.Aarch64{
BasePlatform: platform.BasePlatform{},
UEFIVendor: rd.vendor,
}
aarch64.addImageTypes(
bareMetalAarch64Platform,
edgeOCIImgType(rd),
edgeCommitImgType(rd),
edgeInstallerImgType(rd),
imageInstaller(),
)
rawAarch64Platform := &platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
}
aarch64.addImageTypes(
rawAarch64Platform,
amiImgTypeAarch64(rd),
)
ppc64le.addImageTypes(
&platform.PPC64LE{
BIOS: true,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "0.10",
},
},
qcow2ImgType(rd),
)
ppc64le.addImageTypes(
&platform.PPC64LE{},
tarImgType(),
)
s390x.addImageTypes(
&platform.S390X{
Zipl: true,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "0.10",
},
},
qcow2ImgType(rd),
)
s390x.addImageTypes(
&platform.S390X{},
tarImgType(),
)
azureX64Platform := &platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
}
azureAarch64Platform := &platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
}
rawUEFIx86Platform := &platform.X86{
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
BIOS: false,
UEFIVendor: rd.vendor,
}
if rd.isRHEL() {
if !common.VersionLessThan(rd.osVersion, "8.6") {
// image types only available on 8.6 and later on RHEL
// These edge image types require FDO, which isn't available on older versions
x86_64.addImageTypes(
bareMetalX86Platform,
edgeRawImgType(),
)
x86_64.addImageTypes(
rawUEFIx86Platform,
edgeSimplifiedInstallerImgType(rd),
)
azureEap := azureEap7RhuiImgType()
x86_64.addImageTypes(azureX64Platform, azureEap)
aarch64.addImageTypes(
rawAarch64Platform,
edgeRawImgType(),
edgeSimplifiedInstallerImgType(rd),
)
// The Azure image types require hyperv-daemons which isn't available on older versions
aarch64.addImageTypes(azureAarch64Platform, azureRhuiImgType(), azureByosImgType())
}
// add azure to RHEL distro only
x86_64.addImageTypes(azureX64Platform, azureRhuiImgType(), azureByosImgType(), azureSapRhuiImgType(rd))
// keep the RHEL EC2 x86_64 images before 8.9 BIOS-only for backward compatibility
if common.VersionLessThan(rd.osVersion, "8.9") {
ec2X86Platform = &platform.X86{
BIOS: true,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
}
}
// add ec2 image types to RHEL distro only
x86_64.addImageTypes(ec2X86Platform, ec2ImgTypeX86_64(rd), ec2HaImgTypeX86_64(rd))
aarch64.addImageTypes(rawAarch64Platform, ec2ImgTypeAarch64(rd))
if rd.osVersion != "8.5" {
// NOTE: RHEL 8.5 is going away and these image types require some
// work to get working, so we just disable them here until the
// whole distro gets deleted
x86_64.addImageTypes(ec2X86Platform, ec2SapImgTypeX86_64(rd))
}
// add GCE RHUI image to RHEL only
x86_64.addImageTypes(gceX86Platform, gceRhuiImgType(rd))
// add s390x to RHEL distro only
rd.addArches(s390x)
} else {
x86_64.addImageTypes(
bareMetalX86Platform,
edgeRawImgType(),
)
x86_64.addImageTypes(
rawUEFIx86Platform,
edgeSimplifiedInstallerImgType(rd),
)
x86_64.addImageTypes(azureX64Platform, azureImgType())
aarch64.addImageTypes(
rawAarch64Platform,
edgeRawImgType(),
edgeSimplifiedInstallerImgType(rd),
)
aarch64.addImageTypes(azureAarch64Platform, azureImgType())
}
rd.addArches(x86_64, aarch64, ppc64le)
return &rd
}
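// A hypothetical usage example (not part of the vendored code) showing how
// the constructors above are typically consumed by callers of this package.
func distroEnumerationSketch() {
	for _, d := range []distro.Distro{New(), NewRHEL86(), NewCentos()} {
		fmt.Println(d.Name(), d.Releasever())
		for _, archName := range d.ListArches() {
			arch, _ := d.GetArch(archName)
			// s390x and the RHUI/EC2 image types only show up for the RHEL
			// variants, since they are registered under rd.isRHEL() above.
			fmt.Println(" ", archName, arch.ListImageTypes())
		}
	}
}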

View file

@ -0,0 +1,381 @@
package rhel8
import (
"fmt"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
func edgeCommitImgType(rd distribution) imageType {
it := imageType{
name: "edge-commit",
nameAliases: []string{"rhel-edge-commit"},
filename: "commit.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: edgeCommitPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices(rd),
},
rpmOstree: true,
image: edgeCommitImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "ostree-commit", "commit-archive"},
exports: []string{"commit-archive"},
}
return it
}
func edgeOCIImgType(rd distribution) imageType {
it := imageType{
name: "edge-container",
nameAliases: []string{"rhel-edge-container"},
filename: "container.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: edgeCommitPackageSet,
containerPkgsKey: func(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{"nginx"},
}
},
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices(rd),
},
rpmOstree: true,
bootISO: false,
image: edgeContainerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "ostree-commit", "container-tree", "container"},
exports: []string{"container"},
}
return it
}
func edgeRawImgType() imageType {
it := imageType{
name: "edge-raw-image",
nameAliases: []string{"rhel-edge-raw-image"},
filename: "image.raw.xz",
compression: "xz",
mimeType: "application/xz",
packageSets: nil,
defaultSize: 10 * common.GibiByte,
rpmOstree: true,
bootable: true,
bootISO: false,
image: edgeRawImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"ostree-deployment", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: edgeBasePartitionTables,
}
return it
}
func edgeInstallerImgType(rd distribution) imageType {
it := imageType{
name: "edge-installer",
nameAliases: []string{"rhel-edge-installer"},
filename: "installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
// TODO: non-arch-specific package set handling for installers
// This image type requires build packages for installers and
// ostree/edge. For now we only have x86-64 installer build
// package sets defined. When we add installer build package sets
// for other architectures, this will need to be moved to the
// architecture and the merging will happen in the PackageSets()
// method like the other sets.
installerPkgsKey: edgeInstallerPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices(rd),
},
rpmOstree: true,
bootISO: true,
image: edgeInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
return it
}
func edgeSimplifiedInstallerImgType(rd distribution) imageType {
it := imageType{
name: "edge-simplified-installer",
nameAliases: []string{"rhel-edge-simplified-installer"},
filename: "simplified-installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
// TODO: non-arch-specific package set handling for installers
// This image type requires build packages for installers and
// ostree/edge. For now we only have x86-64 installer build
// package sets defined. When we add installer build package sets
// for other architectures, this will need to be moved to the
// architecture and the merging will happen in the PackageSets()
// method like the other sets.
installerPkgsKey: edgeSimplifiedInstallerPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices(rd),
},
defaultSize: 10 * common.GibiByte,
rpmOstree: true,
bootable: true,
bootISO: true,
image: edgeSimplifiedInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"ostree-deployment", "image", "xz", "coi-tree", "efiboot-tree", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
basePartitionTables: edgeBasePartitionTables,
}
return it
}
// edge commit OS package set
func edgeCommitPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"attr",
"audit",
"basesystem",
"bash",
"bash-completion",
"chrony",
"clevis",
"clevis-dracut",
"clevis-luks",
"container-selinux",
"coreutils",
"criu",
"cryptsetup",
"curl",
"dnsmasq",
"dosfstools",
"dracut-config-generic",
"dracut-network",
"e2fsprogs",
"firewalld",
"fuse-overlayfs",
"fwupd",
"glibc",
"glibc-minimal-langpack",
"gnupg2",
"greenboot",
"gzip",
"hostname",
"ima-evm-utils",
"iproute",
"iptables",
"iputils",
"keyutils",
"less",
"lvm2",
"NetworkManager",
"NetworkManager-wifi",
"NetworkManager-wwan",
"nss-altfiles",
"openssh-clients",
"openssh-server",
"passwd",
"pinentry",
"platform-python",
"podman",
"policycoreutils",
"policycoreutils-python-utils",
"polkit",
"procps-ng",
"redhat-release",
"rootfiles",
"rpm",
"rpm-ostree",
"rsync",
"selinux-policy-targeted",
"setools-console",
"setup",
"shadow-utils",
"shadow-utils",
"skopeo",
"slirp4netns",
"sudo",
"systemd",
"tar",
"tmux",
"traceroute",
"usbguard",
"util-linux",
"vim-minimal",
"wpa_supplicant",
"xz",
},
Exclude: []string{"rng-tools"},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(x8664EdgeCommitPackageSet(t))
case platform.ARCH_AARCH64.String():
ps = ps.Append(aarch64EdgeCommitPackageSet(t))
}
if t.arch.distro.isRHEL() && common.VersionLessThan(t.arch.distro.osVersion, "8.6") {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"greenboot-grub2",
"greenboot-reboot",
"greenboot-rpm-ostree-grub2",
"greenboot-status",
},
})
} else {
// 8.6+ and CS8
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"fdo-client",
"fdo-owner-cli",
"greenboot-default-health-checks",
"sos",
},
})
}
return ps
}
func x8664EdgeCommitPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"efibootmgr",
"grub2",
"grub2-efi-x64",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"microcode_ctl",
"shim-x64",
},
Exclude: nil,
}
}
func aarch64EdgeCommitPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"efibootmgr",
"grub2-efi-aa64",
"iwl7260-firmware",
"shim-aa64",
},
Exclude: nil,
}
}
func edgeInstallerPackageSet(t *imageType) rpmmd.PackageSet {
return anacondaPackageSet(t)
}
func edgeSimplifiedInstallerPackageSet(t *imageType) rpmmd.PackageSet {
// common installer packages
ps := installerPackageSet(t)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"attr",
"basesystem",
"binutils",
"bsdtar",
"clevis-dracut",
"clevis-luks",
"cloud-utils-growpart",
"coreos-installer",
"coreos-installer-dracut",
"coreutils",
"device-mapper-multipath",
"dnsmasq",
"dosfstools",
"dracut-live",
"e2fsprogs",
"fcoe-utils",
"fdo-init",
"gzip",
"ima-evm-utils",
"iproute",
"iptables",
"iputils",
"iscsi-initiator-utils",
"keyutils",
"lldpad",
"lvm2",
"passwd",
"policycoreutils",
"policycoreutils-python-utils",
"procps-ng",
"redhat-logos",
"rootfiles",
"setools-console",
"sudo",
"traceroute",
"util-linux",
},
Exclude: nil,
})
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(x8664EdgeCommitPackageSet(t))
case platform.ARCH_AARCH64.String():
ps = ps.Append(aarch64EdgeCommitPackageSet(t))
default:
panic(fmt.Sprintf("unsupported arch: %s", t.arch.Name()))
}
return ps
}
func edgeServices(rd distribution) []string {
// Common Services
var edgeServices = []string{"NetworkManager.service", "firewalld.service", "sshd.service"}
if rd.osVersion == "8.4" {
// greenboot services aren't enabled by default in 8.4
edgeServices = append(edgeServices,
"greenboot-grub2-set-counter",
"greenboot-grub2-set-success",
"greenboot-healthcheck",
"greenboot-rpm-ostree-grub2-check-fallback",
"greenboot-status",
"greenboot-task-runner",
"redboot-auto-reboot",
"redboot-task-runner")
}
if !(rd.isRHEL() && common.VersionLessThan(rd.osVersion, "8.6")) {
// enable fdo-client only on RHEL 8.6+ and CS8
// TODO(runcom): move fdo-client-linuxapp.service to presets?
edgeServices = append(edgeServices, "fdo-client-linuxapp.service")
}
return edgeServices
}
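// A hypothetical illustration (not part of the vendored code) of the version
// gating in edgeServices above.
func edgeServicesSketch() {
	// rhel-84: common services plus the greenboot/redboot units, no FDO client.
	fmt.Println(edgeServices(*newDistro("rhel", 4)))
	// rhel-87 and CentOS Stream 8: common services plus "fdo-client-linuxapp.service".
	fmt.Println(edgeServices(*newDistro("rhel", 7)))
	fmt.Println(edgeServices(*newDistro("centos", 0)))
}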

View file

@ -0,0 +1,299 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
const gceKernelOptions = "net.ifnames=0 biosdevname=0 scsi_mod.use_blk_mq=Y crashkernel=auto console=ttyS0,38400n8d"
func gceImgType(rd distribution) imageType {
return imageType{
name: "gce",
filename: "image.tar.gz",
mimeType: "application/gzip",
packageSets: map[string]packageSetFunc{
osPkgsKey: gcePackageSet,
},
defaultImageConfig: defaultGceByosImageConfig(rd),
kernelOptions: gceKernelOptions,
bootable: true,
defaultSize: 20 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "archive"},
exports: []string{"archive"},
// TODO: the base partition table still contains the BIOS boot partition, but the image is UEFI-only
basePartitionTables: defaultBasePartitionTables,
}
}
func gceRhuiImgType(rd distribution) imageType {
return imageType{
name: "gce-rhui",
filename: "image.tar.gz",
mimeType: "application/gzip",
packageSets: map[string]packageSetFunc{
osPkgsKey: gceRhuiPackageSet,
},
defaultImageConfig: defaultGceRhuiImageConfig(rd),
kernelOptions: gceKernelOptions,
bootable: true,
defaultSize: 20 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "archive"},
exports: []string{"archive"},
// TODO: the base partition table still contains the BIOS boot partition, but the image is UEFI-only
basePartitionTables: defaultBasePartitionTables,
}
}
func defaultGceByosImageConfig(rd distribution) *distro.ImageConfig {
ic := &distro.ImageConfig{
Timezone: common.ToPtr("UTC"),
TimeSynchronization: &osbuild.ChronyStageOptions{
Servers: []osbuild.ChronyConfigServer{{Hostname: "metadata.google.internal"}},
},
Firewall: &osbuild.FirewallStageOptions{
DefaultZone: "trusted",
},
EnabledServices: []string{
"sshd",
"rngd",
"dnf-automatic.timer",
},
DisabledServices: []string{
"sshd-keygen@",
"reboot.target",
},
DefaultTarget: common.ToPtr("multi-user.target"),
Locale: common.ToPtr("en_US.UTF-8"),
Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us",
},
DNFConfig: []*osbuild.DNFConfigStageOptions{
{
Config: &osbuild.DNFConfig{
Main: &osbuild.DNFConfigMain{
IPResolve: "4",
},
},
},
},
DNFAutomaticConfig: &osbuild.DNFAutomaticConfigStageOptions{
Config: &osbuild.DNFAutomaticConfig{
Commands: &osbuild.DNFAutomaticConfigCommands{
ApplyUpdates: common.ToPtr(true),
UpgradeType: osbuild.DNFAutomaticUpgradeTypeSecurity,
},
},
},
YUMRepos: []*osbuild.YumReposStageOptions{
{
Filename: "google-cloud.repo",
Repos: []osbuild.YumRepository{
{
Id: "google-compute-engine",
Name: "Google Compute Engine",
BaseURLs: []string{"https://packages.cloud.google.com/yum/repos/google-compute-engine-el8-x86_64-stable"},
Enabled: common.ToPtr(true),
GPGCheck: common.ToPtr(true),
RepoGPGCheck: common.ToPtr(false),
GPGKey: []string{
"https://packages.cloud.google.com/yum/doc/yum-key.gpg",
"https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg",
},
},
},
},
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
PasswordAuthentication: common.ToPtr(false),
ClientAliveInterval: common.ToPtr(420),
PermitRootLogin: osbuild.PermitRootLoginValueNo,
},
},
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
DefaultKernel: "kernel-core",
UpdateDefault: true,
},
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-floppy.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("floppy"),
},
},
},
GCPGuestAgentConfig: &osbuild.GcpGuestAgentConfigOptions{
ConfigScope: osbuild.GcpGuestAgentConfigScopeDistro,
Config: &osbuild.GcpGuestAgentConfig{
InstanceSetup: &osbuild.GcpGuestAgentConfigInstanceSetup{
SetBotoConfig: common.ToPtr(false),
},
},
},
}
if rd.osVersion == "8.4" {
// NOTE(akoutsou): these are enabled in the package preset, but for
// some reason do not get enabled on 8.4.
// the reason is unknown and deeply mysterious
ic.EnabledServices = append(ic.EnabledServices,
"google-oslogin-cache.timer",
"google-guest-agent.service",
"google-shutdown-scripts.service",
"google-startup-scripts.service",
"google-osconfig-agent.service",
)
}
if rd.isRHEL() {
ic.RHSMConfig = map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// Don't disable RHSM redhat.repo management on the GCE
// image, which is BYOS and does not use RHUI for content.
// Otherwise, subscribing the system manually after booting
// it would result in an empty redhat.repo. Without RHUI, such
// a system would have no way to get Red Hat content except by
// enabling the repo management manually, which would be
// very confusing.
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
}
}
return ic
}
func defaultGceRhuiImageConfig(rd distribution) *distro.ImageConfig {
ic := &distro.ImageConfig{
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
Rhsm: &osbuild.SubManConfigRHSMSection{
ManageRepos: common.ToPtr(false),
},
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
ic = ic.InheritFrom(defaultGceByosImageConfig(rd))
return ic
}
// common GCE image
func gceCommonPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"@core",
"langpacks-en", // not in Google's KS
"acpid",
"dhcp-client",
"dnf-automatic",
"net-tools",
//"openssh-server", included in core
"python3",
"rng-tools",
"tar",
"vim",
// GCE guest tools
"google-compute-engine",
"google-osconfig-agent",
"gce-disk-expand",
// Not explicitly included in GCP kickstart, but present on the image
// for time synchronization
"chrony",
"timedatex",
// EFI
"grub2-tools-efi",
},
Exclude: []string{
"alsa-utils",
"b43-fwcutter",
"dmraid",
"eject",
"gpm",
"irqbalance",
"microcode_ctl",
"smartmontools",
"aic94xx-firmware",
"atmel-firmware",
"b43-openfwwf",
"bfa-firmware",
"ipw2100-firmware",
"ipw2200-firmware",
"ivtv-firmware",
"iwl100-firmware",
"iwl1000-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6050-firmware",
"kernel-firmware",
"libertas-usb8388-firmware",
"ql2100-firmware",
"ql2200-firmware",
"ql23xx-firmware",
"ql2400-firmware",
"ql2500-firmware",
"rt61pci-firmware",
"rt73usb-firmware",
"xorg-x11-drv-ati-firmware",
"zd1211-firmware",
// RHBZ#2075815
"qemu-guest-agent",
},
}.Append(distroSpecificPackageSet(t))
}
// GCE BYOS image
func gcePackageSet(t *imageType) rpmmd.PackageSet {
return gceCommonPackageSet(t)
}
// GCE RHUI image
func gceRhuiPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"google-rhui-client-rhel8",
},
}.Append(gceCommonPackageSet(t))
}

View file

@ -0,0 +1,573 @@
package rhel8
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/fdo"
"github.com/osbuild/images/internal/ignition"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/internal/users"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
func osCustomizations(
t *imageType,
osPackageSet rpmmd.PackageSet,
options distro.ImageOptions,
containers []container.SourceSpec,
c *blueprint.Customizations,
) manifest.OSCustomizations {
imageConfig := t.getDefaultImageConfig()
osc := manifest.OSCustomizations{}
if t.bootable || t.rpmOstree {
osc.KernelName = c.GetKernel().Name
var kernelOptions []string
if t.kernelOptions != "" {
kernelOptions = append(kernelOptions, t.kernelOptions)
}
if bpKernel := c.GetKernel(); bpKernel.Append != "" {
kernelOptions = append(kernelOptions, bpKernel.Append)
}
osc.KernelOptionsAppend = kernelOptions
if t.platform.GetArch() != platform.ARCH_S390X {
osc.KernelOptionsBootloader = true
}
}
osc.ExtraBasePackages = osPackageSet.Include
osc.ExcludeBasePackages = osPackageSet.Exclude
osc.ExtraBaseRepos = osPackageSet.Repositories
osc.Containers = containers
osc.GPGKeyFiles = imageConfig.GPGKeyFiles
if imageConfig.ExcludeDocs != nil {
osc.ExcludeDocs = *imageConfig.ExcludeDocs
}
if !t.bootISO {
// don't put users and groups in the payload of an installer
// add them via kickstart instead
osc.Groups = users.GroupsFromBP(c.GetGroups())
osc.Users = users.UsersFromBP(c.GetUsers())
}
osc.EnabledServices = imageConfig.EnabledServices
osc.DisabledServices = imageConfig.DisabledServices
if imageConfig.DefaultTarget != nil {
osc.DefaultTarget = *imageConfig.DefaultTarget
}
osc.Firewall = imageConfig.Firewall
if fw := c.GetFirewall(); fw != nil {
options := osbuild.FirewallStageOptions{
Ports: fw.Ports,
}
if fw.Services != nil {
options.EnabledServices = fw.Services.Enabled
options.DisabledServices = fw.Services.Disabled
}
if fw.Zones != nil {
for _, z := range fw.Zones {
options.Zones = append(options.Zones, osbuild.FirewallZone{
Name: *z.Name,
Sources: z.Sources,
})
}
}
osc.Firewall = &options
}
language, keyboard := c.GetPrimaryLocale()
if language != nil {
osc.Language = *language
} else if imageConfig.Locale != nil {
osc.Language = *imageConfig.Locale
}
if keyboard != nil {
osc.Keyboard = keyboard
} else if imageConfig.Keyboard != nil {
osc.Keyboard = &imageConfig.Keyboard.Keymap
if imageConfig.Keyboard.X11Keymap != nil {
osc.X11KeymapLayouts = imageConfig.Keyboard.X11Keymap.Layouts
}
}
if hostname := c.GetHostname(); hostname != nil {
osc.Hostname = *hostname
}
timezone, ntpServers := c.GetTimezoneSettings()
if timezone != nil {
osc.Timezone = *timezone
} else if imageConfig.Timezone != nil {
osc.Timezone = *imageConfig.Timezone
}
if len(ntpServers) > 0 {
for _, server := range ntpServers {
osc.NTPServers = append(osc.NTPServers, osbuild.ChronyConfigServer{Hostname: server})
}
} else if imageConfig.TimeSynchronization != nil {
osc.NTPServers = imageConfig.TimeSynchronization.Servers
osc.LeapSecTZ = imageConfig.TimeSynchronization.LeapsecTz
}
// Relabel the tree, unless the `NoSElinux` flag is explicitly set to `true`
if imageConfig.NoSElinux == nil || imageConfig.NoSElinux != nil && !*imageConfig.NoSElinux {
osc.SElinux = "targeted"
}
if oscapConfig := c.GetOpenSCAP(); oscapConfig != nil {
if t.rpmOstree {
panic("unexpected oscap options for ostree image type")
}
var datastream = oscapConfig.DataStream
if datastream == "" {
datastream = oscap.DefaultRHEL8Datastream(t.arch.distro.isRHEL())
}
osc.OpenSCAPConfig = osbuild.NewOscapRemediationStageOptions(
osbuild.OscapConfig{
Datastream: datastream,
ProfileID: oscapConfig.ProfileID,
},
)
}
if t.arch.distro.isRHEL() && options.Facts != nil {
osc.FactAPIType = &options.Facts.APIType
}
var err error
osc.Directories, err = blueprint.DirectoryCustomizationsToFsNodeDirectories(c.GetDirectories())
if err != nil {
// In theory this should never happen, because the blueprint directory customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert directory customizations to fs node directories: %v", err))
}
osc.Files, err = blueprint.FileCustomizationsToFsNodeFiles(c.GetFiles())
if err != nil {
// In theory this should never happen, because the blueprint file customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert file customizations to fs node files: %v", err))
}
// start from the YUM repos defined in the image config; custom repos
// from the blueprint are appended below
osc.YUMRepos = imageConfig.YUMRepos
customRepos, err := c.GetRepositories()
if err != nil {
// This shouldn't happen, since the repos
// should have already been validated
panic(fmt.Sprintf("failed to get custom repos: %v", err))
}
// This function returns a map of filename and corresponding yum repos
// and a list of fs node files for the inline gpg keys so we can save
// them to disk. This step also swaps the inline gpg key with the path
// to the file in the os file tree
yumRepos, gpgKeyFiles, err := blueprint.RepoCustomizationsToRepoConfigAndGPGKeyFiles(customRepos)
if err != nil {
panic(fmt.Sprintf("failed to convert inline gpgkeys to fs node files: %v", err))
}
// add the gpg key files to the list of files to be added to the tree
if len(gpgKeyFiles) > 0 {
osc.Files = append(osc.Files, gpgKeyFiles...)
}
for filename, repos := range yumRepos {
osc.YUMRepos = append(osc.YUMRepos, osbuild.NewYumReposStageOptions(filename, repos))
}
osc.ShellInit = imageConfig.ShellInit
osc.Grub2Config = imageConfig.Grub2Config
osc.Sysconfig = imageConfig.Sysconfig
osc.SystemdLogind = imageConfig.SystemdLogind
osc.CloudInit = imageConfig.CloudInit
osc.Modprobe = imageConfig.Modprobe
osc.DracutConf = imageConfig.DracutConf
osc.SystemdUnit = imageConfig.SystemdUnit
osc.Authselect = imageConfig.Authselect
osc.SELinuxConfig = imageConfig.SELinuxConfig
osc.Tuned = imageConfig.Tuned
osc.Tmpfilesd = imageConfig.Tmpfilesd
osc.PamLimitsConf = imageConfig.PamLimitsConf
osc.Sysctld = imageConfig.Sysctld
osc.DNFConfig = imageConfig.DNFConfig
osc.DNFAutomaticConfig = imageConfig.DNFAutomaticConfig
osc.SshdConfig = imageConfig.SshdConfig
osc.AuthConfig = imageConfig.Authconfig
osc.PwQuality = imageConfig.PwQuality
osc.RHSMConfig = imageConfig.RHSMConfig
osc.Subscription = options.Subscription
osc.WAAgentConfig = imageConfig.WAAgentConfig
osc.UdevRules = imageConfig.UdevRules
osc.GCPGuestAgentConfig = imageConfig.GCPGuestAgentConfig
return osc
}
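// exampleLocalePrecedence is a minimal, illustrative sketch (not part of the
// vendored file) of the precedence applied above: explicit blueprint
// customizations win over the image type's default ImageConfig.
func exampleLocalePrecedence(c *blueprint.Customizations, imageConfig *distro.ImageConfig) string {
if language, _ := c.GetPrimaryLocale(); language != nil {
return *language
}
if imageConfig.Locale != nil {
return *imageConfig.Locale
}
return ""
}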
func liveImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewLiveImage()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.Compression = t.compression
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
return img, nil
}
func imageInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewAnacondaTarInstaller()
img.Platform = t.platform
img.Workload = workload
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.ExtraBasePackages = packageSets[installerPkgsKey]
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.AdditionalDracutModules = []string{"prefixdevname", "prefixdevname-tools"}
img.AdditionalAnacondaModules = []string{"org.fedoraproject.Anaconda.Modules.Users"}
img.SquashfsCompression = "xz"
// put the kickstart file in the root of the iso
img.ISORootKickstart = true
d := t.arch.distro
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.OSName = "redhat"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func tarImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewArchive()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.Filename = t.Filename()
return img, nil
}
func edgeCommitImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
parentCommit, commitRef := makeOSTreeParentCommit(options.OSTree, t.OSTreeRef())
img := image.NewOSTreeArchive(commitRef)
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.OSTreeParent = parentCommit
img.OSVersion = t.arch.distro.osVersion
img.Filename = t.Filename()
return img, nil
}
func edgeContainerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
parentCommit, commitRef := makeOSTreeParentCommit(options.OSTree, t.OSTreeRef())
img := image.NewOSTreeContainer(commitRef)
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.ContainerLanguage = img.OSCustomizations.Language
img.Environment = t.environment
img.Workload = workload
img.OSTreeParent = parentCommit
img.OSVersion = t.arch.distro.osVersion
img.ExtraContainerPackages = packageSets[containerPkgsKey]
img.Filename = t.Filename()
return img, nil
}
func edgeInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
d := t.arch.distro
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
img := image.NewAnacondaOSTreeInstaller(commit)
img.Platform = t.platform
img.ExtraBasePackages = packageSets[installerPkgsKey]
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.SquashfsCompression = "xz"
img.AdditionalDracutModules = []string{"prefixdevname", "prefixdevname-tools"}
if len(img.Users)+len(img.Groups) > 0 {
// only enable the users module if needed
img.AdditionalAnacondaModules = []string{"org.fedoraproject.Anaconda.Modules.Users"}
}
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.Variant = "edge"
img.OSName = "rhel"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func edgeRawImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
img := image.NewOSTreeRawImage(commit)
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.KernelOptionsAppend = []string{"modprobe.blacklist=vc4"}
// TODO: move to image config
img.Keyboard = "us"
img.Locale = "C.UTF-8"
img.Platform = t.platform
img.Workload = workload
img.Remote = ostree.Remote{
Name: "rhel-edge",
URL: options.OSTree.URL,
ContentURL: options.OSTree.ContentURL,
}
img.OSName = "redhat"
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
img.Compression = t.compression
return img, nil
}
func edgeSimplifiedInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
rawImg := image.NewOSTreeRawImage(commit)
rawImg.Users = users.UsersFromBP(customizations.GetUsers())
rawImg.Groups = users.GroupsFromBP(customizations.GetGroups())
rawImg.KernelOptionsAppend = []string{"modprobe.blacklist=vc4"}
rawImg.Keyboard = "us"
rawImg.Locale = "C.UTF-8"
rawImg.Platform = t.platform
rawImg.Workload = workload
rawImg.Remote = ostree.Remote{
Name: "rhel-edge",
URL: options.OSTree.URL,
ContentURL: options.OSTree.ContentURL,
}
rawImg.OSName = "redhat"
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
rawImg.PartitionTable = pt
rawImg.Filename = t.Filename()
img := image.NewOSTreeSimplifiedInstaller(rawImg, customizations.InstallationDevice)
img.ExtraBasePackages = packageSets[installerPkgsKey]
// img.Workload = workload
img.Platform = t.platform
img.Filename = t.Filename()
if bpFDO := customizations.GetFDO(); bpFDO != nil {
img.FDO = fdo.FromBP(*bpFDO)
}
// ignition configs from blueprint
if bpIgnition := customizations.GetIgnition(); bpIgnition != nil {
if bpIgnition.FirstBoot != nil {
img.IgnitionFirstBoot = ignition.FirstbootOptionsFromBP(*bpIgnition.FirstBoot)
}
if bpIgnition.Embedded != nil {
var err error
img.IgnitionEmbedded, err = ignition.EmbeddedOptionsFromBP(*bpIgnition.Embedded)
if err != nil {
return nil, err
}
}
}
d := t.arch.distro
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.Variant = "edge"
img.OSName = "redhat"
img.OSVersion = d.osVersion
img.AdditionalDracutModules = []string{"prefixdevname", "prefixdevname-tools"}
return img, nil
}
// makeOSTreeParentCommit creates an ostree SourceSpec that defines the ostree
// parent commit, based on the user options and the default ref for the image
// type. It also returns the ref to use for the new commit.
func makeOSTreeParentCommit(options *ostree.ImageOptions, defaultRef string) (*ostree.SourceSpec, string) {
commitRef := defaultRef
if options == nil {
// nothing to do
return nil, commitRef
}
if options.ImageRef != "" {
// user option overrides default commit ref
commitRef = options.ImageRef
}
var parentCommit *ostree.SourceSpec
if options.URL == "" {
// no parent
return nil, commitRef
}
// ostree URL specified: set source spec for parent commit
parentRef := options.ParentRef
if parentRef == "" {
// parent ref not set: use image ref
parentRef = commitRef
}
parentCommit = &ostree.SourceSpec{
URL: options.URL,
Ref: parentRef,
RHSM: options.RHSM,
}
return parentCommit, commitRef
}
// makeOSTreePayloadCommit creates an ostree SourceSpec that defines the ostree payload, based on the user options and the default ref for the image type.
func makeOSTreePayloadCommit(options *ostree.ImageOptions, defaultRef string) (ostree.SourceSpec, error) {
if options == nil || options.URL == "" {
// this should be caught by checkOptions() in distro, but it's good
// to guard against it here as well
return ostree.SourceSpec{}, fmt.Errorf("ostree commit URL required")
}
commitRef := defaultRef
if options.ImageRef != "" {
// user option overrides default commit ref
commitRef = options.ImageRef
}
return ostree.SourceSpec{
URL: options.URL,
Ref: commitRef,
RHSM: options.RHSM,
}, nil
}
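// exampleOSTreeSpecs is an illustrative sketch (not part of the vendored
// file) of how the two helpers above behave; the repository URL and refs
// are made up.
func exampleOSTreeSpecs() error {
defaultRef := "rhel/8/x86_64/edge"
// without user options there is no parent and the default ref is used
parent, ref := makeOSTreeParentCommit(nil, defaultRef)
_, _ = parent, ref // nil, "rhel/8/x86_64/edge"
// a URL selects a parent commit; the image ref overrides the default ref
opts := &ostree.ImageOptions{
URL: "https://ostree.example.com/repo",
ImageRef: "rhel/8/x86_64/edge-custom",
}
parent, ref = makeOSTreeParentCommit(opts, defaultRef)
_, _ = parent, ref // parent.Ref and ref are both "rhel/8/x86_64/edge-custom"
// a payload commit always requires a URL
_, err := makeOSTreePayloadCommit(nil, defaultRef)
return err // "ostree commit URL required"
}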


@ -0,0 +1,418 @@
package rhel8
import (
"fmt"
"log"
"math/rand"
"strings"
"golang.org/x/exp/slices"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/internal/pathpolicy"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
const (
// package set names
// main/common os image package set name
osPkgsKey = "os"
// container package set name
containerPkgsKey = "container"
// installer package set name
installerPkgsKey = "installer"
// blueprint package set name
blueprintPkgsKey = "blueprint"
)
type imageFunc func(workload workload.Workload, t *imageType, customizations *blueprint.Customizations, options distro.ImageOptions, packageSets map[string]rpmmd.PackageSet, containers []container.SourceSpec, rng *rand.Rand) (image.ImageKind, error)
type packageSetFunc func(t *imageType) rpmmd.PackageSet
type imageType struct {
arch *architecture
platform platform.Platform
environment environment.Environment
workload workload.Workload
name string
nameAliases []string
filename string
compression string // TODO: remove from image definition and make it a transport option
mimeType string
packageSets map[string]packageSetFunc
defaultImageConfig *distro.ImageConfig
kernelOptions string
defaultSize uint64
buildPipelines []string
payloadPipelines []string
exports []string
image imageFunc
// bootISO: installable ISO
bootISO bool
// rpmOstree: edge/ostree
rpmOstree bool
// bootable image
bootable bool
// base partition tables, keyed by the architectures supported by the image type
basePartitionTables distro.BasePartitionTableMap
}
func (t *imageType) Name() string {
return t.name
}
func (t *imageType) Arch() distro.Arch {
return t.arch
}
func (t *imageType) Filename() string {
return t.filename
}
func (t *imageType) MIMEType() string {
return t.mimeType
}
func (t *imageType) OSTreeRef() string {
d := t.arch.distro
if t.rpmOstree {
return fmt.Sprintf(d.ostreeRefTmpl, t.Arch().Name())
}
return ""
}
func (t *imageType) Size(size uint64) uint64 {
// Microsoft Azure requires vhd images to be rounded up to the nearest MB
if t.name == "vhd" && size%common.MebiByte != 0 {
size = (size/common.MebiByte + 1) * common.MebiByte
}
if size == 0 {
size = t.defaultSize
}
return size
}
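// exampleVhdSizeRounding is an illustrative sketch (not part of the vendored
// file) of the Size behaviour above: a zero request falls back to the type's
// default size and "vhd" sizes are rounded up to the next MiB.
func exampleVhdSizeRounding() (uint64, uint64) {
t := &imageType{name: "vhd", defaultSize: 4 * common.GibiByte}
return t.Size(0), t.Size(10*common.MebiByte + 1) // 4 GiB, 11 MiB
}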
func (t *imageType) BuildPipelines() []string {
return t.buildPipelines
}
func (t *imageType) PayloadPipelines() []string {
return t.payloadPipelines
}
func (t *imageType) PayloadPackageSets() []string {
return []string{blueprintPkgsKey}
}
func (t *imageType) PackageSetsChains() map[string][]string {
return nil
}
func (t *imageType) Exports() []string {
if len(t.exports) > 0 {
return t.exports
}
return []string{"assembler"}
}
func (t *imageType) BootMode() distro.BootMode {
if t.platform.GetUEFIVendor() != "" && t.platform.GetBIOSPlatform() != "" {
return distro.BOOT_HYBRID
} else if t.platform.GetUEFIVendor() != "" {
return distro.BOOT_UEFI
} else if t.platform.GetBIOSPlatform() != "" || t.platform.GetZiplSupport() {
return distro.BOOT_LEGACY
}
return distro.BOOT_NONE
}
func (t *imageType) getPartitionTable(
mountpoints []blueprint.FilesystemCustomization,
options distro.ImageOptions,
rng *rand.Rand,
) (*disk.PartitionTable, error) {
archName := t.arch.Name()
basePartitionTable, exists := t.basePartitionTables[archName]
if !exists {
return nil, fmt.Errorf("no partition table defined for architecture %q for image type %q", archName, t.Name())
}
imageSize := t.Size(options.Size)
lvmify := !t.rpmOstree
return disk.NewPartitionTable(&basePartitionTable, mountpoints, imageSize, lvmify, nil, rng)
}
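// examplePartitionTable is an illustrative sketch (not part of the vendored
// file) of how a base partition table is combined with blueprint filesystem
// customizations; the "/var" mountpoint and the 10 GiB size are made up.
func examplePartitionTable(t *imageType, rng *rand.Rand) (*disk.PartitionTable, error) {
mountpoints := []blueprint.FilesystemCustomization{
{Mountpoint: "/var", MinSize: 2 * common.GibiByte},
}
options := distro.ImageOptions{Size: 10 * common.GibiByte}
return t.getPartitionTable(mountpoints, options, rng)
}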
func (t *imageType) getDefaultImageConfig() *distro.ImageConfig {
// ensure that image always returns non-nil default config
imageConfig := t.defaultImageConfig
if imageConfig == nil {
imageConfig = &distro.ImageConfig{}
}
return imageConfig.InheritFrom(t.arch.distro.getDefaultImageConfig())
}
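// exampleInheritFrom is an illustrative sketch (not part of the vendored
// file) of the ImageConfig inheritance used above: values set on the more
// specific config win, anything unset is inherited. The locales are made up.
func exampleInheritFrom() *distro.ImageConfig {
typeConfig := &distro.ImageConfig{Locale: common.ToPtr("en_GB.UTF-8")}
distroConfig := &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
Timezone: common.ToPtr("UTC"),
}
// the result keeps the type's locale and inherits the distro's timezone
return typeConfig.InheritFrom(distroConfig)
}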
func (t *imageType) PartitionType() string {
archName := t.arch.Name()
basePartitionTable, exists := t.basePartitionTables[archName]
if !exists {
return ""
}
return basePartitionTable.Type
}
func (t *imageType) Manifest(bp *blueprint.Blueprint,
options distro.ImageOptions,
repos []rpmmd.RepoConfig,
seed int64) (*manifest.Manifest, []string, error) {
warnings, err := t.checkOptions(bp, options)
if err != nil {
return nil, nil, err
}
// merge package sets that appear in the image type with the package sets
// of the same name from the distro and arch
staticPackageSets := make(map[string]rpmmd.PackageSet)
for name, getter := range t.packageSets {
staticPackageSets[name] = getter(t)
}
// amend with repository information and collect payload repos
payloadRepos := make([]rpmmd.RepoConfig, 0)
for _, repo := range repos {
if len(repo.PackageSets) > 0 {
// only apply the repo to the listed package sets
for _, psName := range repo.PackageSets {
if slices.Contains(t.PayloadPackageSets(), psName) {
payloadRepos = append(payloadRepos, repo)
}
ps := staticPackageSets[psName]
ps.Repositories = append(ps.Repositories, repo)
staticPackageSets[psName] = ps
}
}
}
w := t.workload
if w == nil {
cw := &workload.Custom{
BaseWorkload: workload.BaseWorkload{
Repos: payloadRepos,
},
Packages: bp.GetPackagesEx(false),
}
if services := bp.Customizations.GetServices(); services != nil {
cw.Services = services.Enabled
cw.DisabledServices = services.Disabled
}
w = cw
}
containerSources := make([]container.SourceSpec, len(bp.Containers))
for idx := range bp.Containers {
containerSources[idx] = container.SourceSpec(bp.Containers[idx])
}
source := rand.NewSource(seed)
// math/rand is good enough in this case
/* #nosec G404 */
rng := rand.New(source)
if t.image == nil {
return nil, nil, nil
}
img, err := t.image(w, t, bp.Customizations, options, staticPackageSets, containerSources, rng)
if err != nil {
return nil, nil, err
}
mf := manifest.New()
mf.Distro = manifest.DISTRO_EL8
_, err = img.InstantiateManifest(&mf, repos, t.arch.distro.runner, rng)
if err != nil {
return nil, nil, err
}
return &mf, warnings, err
}
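// exampleManifest is an illustrative sketch (not part of the vendored file)
// of how a caller drives manifest generation for an image type; the empty
// blueprint, the 10 GiB size, and the fixed seed are made-up inputs.
func exampleManifest(t *imageType, repos []rpmmd.RepoConfig) (*manifest.Manifest, []string, error) {
bp := &blueprint.Blueprint{}
options := distro.ImageOptions{Size: 10 * common.GibiByte}
return t.Manifest(bp, options, repos, 42)
}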
// checkOptions checks the validity and compatibility of options and customizations for the image type.
// Returns ([]string, error) where []string, if non-nil, will hold any generated warnings (e.g. deprecation notices).
func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOptions) ([]string, error) {
customizations := bp.Customizations
// holds warnings (e.g. deprecation notices)
var warnings []string
if t.workload != nil {
// For now, if an image type defines its own workload, don't allow any
// user customizations.
// Soon we will have more workflows and each will define its allowed
// set of customizations. The current set of customizations defined in
// the blueprint spec corresponds to the Custom workflow.
if customizations != nil {
return warnings, fmt.Errorf("image type %q does not support customizations", t.name)
}
}
// we do not support embedding containers on ostree-derived images, only on commits themselves
if len(bp.Containers) > 0 && t.rpmOstree && (t.name != "edge-commit" && t.name != "edge-container") {
return warnings, fmt.Errorf("embedding containers is not supported for %s on %s", t.name, t.arch.distro.name)
}
ostreeURL := ""
if options.OSTree != nil {
if options.OSTree.ParentRef != "" && options.OSTree.URL == "" {
// specifying parent ref also requires URL
return nil, ostree.NewParameterComboError("ostree parent ref specified, but no URL to retrieve it")
}
ostreeURL = options.OSTree.URL
}
if t.bootISO && t.rpmOstree {
// ostree-based ISOs require a URL from which to pull a payload commit
if ostreeURL == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying a URL from which to retrieve the OSTree commit", t.name)
}
if t.name == "edge-simplified-installer" {
allowed := []string{"InstallationDevice", "FDO", "User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf("unsupported blueprint customizations found for boot ISO image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
if customizations.GetInstallationDevice() == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying an installation device to install to", t.name)
}
// FDO is optional, so that the simplified installer can be composed without an FDO section in the blueprint
if customizations.GetFDO() != nil {
if customizations.GetFDO().ManufacturingServerURL == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying FDO.ManufacturingServerURL configuration to install to", t.name)
}
var diunSet int
if customizations.GetFDO().DiunPubKeyHash != "" {
diunSet++
}
if customizations.GetFDO().DiunPubKeyInsecure != "" {
diunSet++
}
if customizations.GetFDO().DiunPubKeyRootCerts != "" {
diunSet++
}
if diunSet != 1 {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying one of [FDO.DiunPubKeyHash,FDO.DiunPubKeyInsecure,FDO.DiunPubKeyRootCerts] configuration to install to", t.name)
}
}
} else if t.name == "edge-installer" {
allowed := []string{"User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf("unsupported blueprint customizations found for boot ISO image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
}
}
if t.name == "edge-raw-image" {
// ostree-based bootable images require a URL from which to pull a payload commit
if ostreeURL == "" {
return warnings, fmt.Errorf("edge raw images require specifying a URL from which to retrieve the OSTree commit")
}
allowed := []string{"User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf("unsupported blueprint customizations found for image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
// TODO: consider additional checks, such as those in "edge-simplified-installer"
}
// warn that user & group customizations on edge-commit, edge-container are deprecated
// TODO(edge): directly error if these options are provided when rhel-9.5's time arrives
if t.name == "edge-commit" || t.name == "edge-container" {
if customizations.GetUsers() != nil {
w := fmt.Sprintf("Please note that user customizations on %q image type are deprecated and will be removed in the near future\n", t.name)
log.Print(w)
warnings = append(warnings, w)
}
if customizations.GetGroups() != nil {
w := fmt.Sprintf("Please note that group customizations on %q image type are deprecated and will be removed in the near future\n", t.name)
log.Print(w)
warnings = append(warnings, w)
}
}
if kernelOpts := customizations.GetKernel(); kernelOpts.Append != "" && t.rpmOstree && (!t.bootable || t.bootISO) {
return warnings, fmt.Errorf("kernel boot parameter customizations are not supported for ostree types")
}
mountpoints := customizations.GetFilesystems()
if mountpoints != nil && t.rpmOstree {
return warnings, fmt.Errorf("Custom mountpoints are not supported for ostree types")
}
err := blueprint.CheckMountpointsPolicy(mountpoints, pathpolicy.MountpointPolicies)
if err != nil {
return warnings, err
}
if osc := customizations.GetOpenSCAP(); osc != nil {
// only add support for RHEL 8.7 and above.
if common.VersionLessThan(t.arch.distro.osVersion, "8.7") {
return warnings, fmt.Errorf(fmt.Sprintf("OpenSCAP unsupported os version: %s", t.arch.distro.osVersion))
}
supported := oscap.IsProfileAllowed(osc.ProfileID, oscapProfileAllowList)
if !supported {
return warnings, fmt.Errorf(fmt.Sprintf("OpenSCAP unsupported profile: %s", osc.ProfileID))
}
if t.rpmOstree {
return warnings, fmt.Errorf("OpenSCAP customizations are not supported for ostree types")
}
if osc.ProfileID == "" {
return warnings, fmt.Errorf("OpenSCAP profile cannot be empty")
}
}
// Check Directory/File Customizations are valid
dc := customizations.GetDirectories()
fc := customizations.GetFiles()
err = blueprint.ValidateDirFileCustomizations(dc, fc)
if err != nil {
return warnings, err
}
err = blueprint.CheckDirectoryCustomizationsPolicy(dc, pathpolicy.CustomDirectoriesPolicies)
if err != nil {
return warnings, err
}
err = blueprint.CheckFileCustomizationsPolicy(fc, pathpolicy.CustomFilesPolicies)
if err != nil {
return warnings, err
}
// check if repository customizations are valid
_, err = customizations.GetRepositories()
if err != nil {
return warnings, err
}
return warnings, nil
}


@ -0,0 +1,75 @@
package rhel8
// This file defines package sets that are used by more than one image type.
import (
"fmt"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
// installer boot package sets, needed for booting the installer and
// also on the build host
func anacondaBootPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{}
grubCommon := rpmmd.PackageSet{
Include: []string{
"grub2-tools",
"grub2-tools-extra",
"grub2-tools-minimal",
},
}
efiCommon := rpmmd.PackageSet{
Include: []string{
"efibootmgr",
},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(grubCommon)
ps = ps.Append(efiCommon)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"grub2-efi-ia32-cdboot",
"grub2-efi-x64",
"grub2-efi-x64-cdboot",
"grub2-pc",
"grub2-pc-modules",
"shim-ia32",
"shim-x64",
"syslinux",
"syslinux-nonlinux",
},
})
case platform.ARCH_AARCH64.String():
ps = ps.Append(grubCommon)
ps = ps.Append(efiCommon)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"grub2-efi-aa64-cdboot",
"grub2-efi-aa64",
"shim-aa64",
},
})
default:
panic(fmt.Sprintf("unsupported arch: %s", t.arch.Name()))
}
return ps
}
// packages that are only in some (sub)-distributions
func distroSpecificPackageSet(t *imageType) rpmmd.PackageSet {
if t.arch.distro.isRHEL() {
return rpmmd.PackageSet{
Include: []string{"insights-client"},
}
}
return rpmmd.PackageSet{}
}
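// exampleAppend is an illustrative sketch (not part of the vendored file) of
// the rpmmd.PackageSet.Append pattern used throughout these package sets:
// Append returns the merged set, which is why call sites assign the result.
func exampleAppend() rpmmd.PackageSet {
base := rpmmd.PackageSet{
Include: []string{"@core"},
Exclude: []string{"rng-tools"},
}
base = base.Append(rpmmd.PackageSet{Include: []string{"chrony"}})
return base
}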


@ -0,0 +1,425 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
)
var defaultBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 100 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 2 * common.GibiByte, // 2 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 100 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 2 * common.GibiByte, // 2 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_PPC64LE.String(): disk.PartitionTable{
UUID: "0x14fc63d2",
Type: "dos",
Partitions: []disk.Partition{
{
Size: 4 * common.MebiByte,
Type: "41",
Bootable: true,
},
{
Size: 2 * common.GibiByte, // 2 GiB
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_S390X.String(): disk.PartitionTable{
UUID: "0x14fc63d2",
Type: "dos",
Partitions: []disk.Partition{
{
Size: 2 * common.GibiByte, // 2 GiB
Bootable: true,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
}
var ec2BasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 200 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.XBootLDRPartitionGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 200 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.XBootLDRPartitionGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
}
// ec2LegacyBasePartitionTables is the partition table layout for RHEL EC2
// images prior to 8.9. It is used for backwards compatibility.
var ec2LegacyBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 2 * common.GibiByte, // 2 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 200 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 512 * common.MebiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
}
var edgeBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 127 * common.MebiByte, // 127 MiB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 384 * common.MebiByte, // 384 MiB
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 1,
},
},
{
Size: 2 * common.GibiByte, // 2 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LUKSContainer{
Label: "crypt_root",
Cipher: "cipher_null",
Passphrase: "osbuild",
PBKDF: disk.Argon2id{
Memory: 32,
Iterations: 4,
Parallelism: 1,
},
Clevis: &disk.ClevisBind{
Pin: "null",
Policy: "{}",
RemovePassphrase: true,
},
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 127 * common.MebiByte, // 127 MiB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 384 * common.MebiByte, // 384 MiB
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 1,
},
},
{
Size: 2 * common.GibiByte, // 2 GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LUKSContainer{
Label: "crypt_root",
Cipher: "cipher_null",
Passphrase: "osbuild",
PBKDF: disk.Argon2id{
Memory: 32,
Iterations: 4,
Parallelism: 1,
},
Clevis: &disk.ClevisBind{
Pin: "null",
Policy: "{}",
RemovePassphrase: true,
},
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
}


@ -0,0 +1,168 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
func qcow2ImgType(rd distribution) imageType {
it := imageType{
name: "qcow2",
filename: "disk.qcow2",
mimeType: "application/x-qemu-disk",
kernelOptions: "console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=auto",
packageSets: map[string]packageSetFunc{
osPkgsKey: qcow2CommonPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
DefaultTarget: common.ToPtr("multi-user.target"),
},
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "qcow2"},
exports: []string{"qcow2"},
basePartitionTables: defaultBasePartitionTables,
}
if rd.isRHEL() {
it.defaultImageConfig.RHSMConfig = map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
DnfPlugins: &osbuild.RHSMStageOptionsDnfPlugins{
ProductID: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
SubscriptionManager: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
},
},
}
}
return it
}
func openstackImgType() imageType {
return imageType{
name: "openstack",
filename: "disk.qcow2",
mimeType: "application/x-qemu-disk",
packageSets: map[string]packageSetFunc{
osPkgsKey: openstackCommonPackageSet,
},
kernelOptions: "ro net.ifnames=0",
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "qcow2"},
exports: []string{"qcow2"},
basePartitionTables: defaultBasePartitionTables,
}
}
func qcow2CommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@core",
"authselect-compat",
"chrony",
"cloud-init",
"cloud-utils-growpart",
"cockpit-system",
"cockpit-ws",
"dhcp-client",
"dnf",
"dnf-utils",
"dosfstools",
"dracut-norescue",
"net-tools",
"NetworkManager",
"nfs-utils",
"oddjob",
"oddjob-mkhomedir",
"psmisc",
"python3-jsonschema",
"qemu-guest-agent",
"redhat-release",
"redhat-release-eula",
"rsync",
"tar",
"tcpdump",
"yum",
},
Exclude: []string{
"aic94xx-firmware",
"alsa-firmware",
"alsa-lib",
"alsa-tools-firmware",
"biosdevname",
"dnf-plugin-spacewalk",
"dracut-config-rescue",
"fedora-release",
"fedora-repos",
"firewalld",
"fwupd",
"iprutils",
"ivtv-firmware",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"langpacks-*",
"langpacks-en",
"langpacks-en",
"libertas-sd8686-firmware",
"libertas-sd8787-firmware",
"libertas-usb8388-firmware",
"nss",
"plymouth",
"rng-tools",
"udisks2",
},
}.Append(distroSpecificPackageSet(t))
// Do not pull in subscription-manager on non-RHEL distros
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"subscription-manager-cockpit",
},
})
}
return ps
}
func openstackCommonPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
// Defaults
"@Core", "langpacks-en",
// From the lorax kickstart
"selinux-policy-targeted", "cloud-init", "qemu-guest-agent",
"spice-vdagent",
},
Exclude: []string{
"dracut-config-rescue", "rng-tools",
},
}
}


@ -0,0 +1,174 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
)
// sapImageConfig returns the SAP specific ImageConfig data
func sapImageConfig(rd distribution) *distro.ImageConfig {
return &distro.ImageConfig{
SELinuxConfig: &osbuild.SELinuxConfigStageOptions{
State: osbuild.SELinuxStatePermissive,
},
// RHBZ#1960617
Tuned: osbuild.NewTunedStageOptions("sap-hana"),
// RHBZ#1959979
Tmpfilesd: []*osbuild.TmpfilesdStageOptions{
osbuild.NewTmpfilesdStageOptions("sap.conf",
[]osbuild.TmpfilesdConfigLine{
{
Type: "x",
Path: "/tmp/.sap*",
},
{
Type: "x",
Path: "/tmp/.hdb*lock",
},
{
Type: "x",
Path: "/tmp/.trex*lock",
},
},
),
},
// RHBZ#1959963
PamLimitsConf: []*osbuild.PamLimitsConfStageOptions{
osbuild.NewPamLimitsConfStageOptions("99-sap.conf",
[]osbuild.PamLimitsConfigLine{
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
},
),
},
// RHBZ#1959962
Sysctld: []*osbuild.SysctldStageOptions{
osbuild.NewSysctldStageOptions("sap.conf",
[]osbuild.SysctldConfigLine{
{
Key: "kernel.pid_max",
Value: "4194304",
},
{
Key: "vm.max_map_count",
Value: "2147483647",
},
},
),
},
// E4S/EUS
DNFConfig: []*osbuild.DNFConfigStageOptions{
osbuild.NewDNFConfigStageOptions(
[]osbuild.DNFVariable{
{
Name: "releasever",
Value: rd.osVersion,
},
},
nil,
),
},
}
}
func SapPackageSet(t *imageType) rpmmd.PackageSet {
packageSet := rpmmd.PackageSet{
Include: []string{
// RHBZ#2074107
"@Server",
// SAP System Roles
// https://access.redhat.com/sites/default/files/attachments/rhel_system_roles_for_sap_1.pdf
"rhel-system-roles-sap",
// RHBZ#1959813
"bind-utils",
"compat-sap-c++-9",
"compat-sap-c++-10", // RHBZ#2074114
"nfs-utils",
"tcsh",
// RHBZ#1959955
"uuidd",
// RHBZ#1959923
"cairo",
"expect",
"graphviz",
"gtk2",
"iptraf-ng",
"krb5-workstation",
"libaio",
"libatomic",
"libcanberra-gtk2",
"libicu",
"libpng12",
"libtool-ltdl",
"lm_sensors",
"net-tools",
"numactl",
"PackageKit-gtk3-module",
"xorg-x11-xauth",
// RHBZ#1960617
"tuned-profiles-sap-hana",
// RHBZ#1961168
"libnsl",
},
}
if common.VersionLessThan(t.arch.distro.osVersion, "8.6") {
packageSet = packageSet.Append(rpmmd.PackageSet{
Include: []string{"ansible"},
})
} else {
// 8.6+ and CS8 (image type does not exist on 8.5)
packageSet = packageSet.Append(rpmmd.PackageSet{
Include: []string{"ansible-core"}, // RHBZ#2077356
})
}
return packageSet
}

View file

@ -0,0 +1,64 @@
package rhel8
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/rpmmd"
)
const vmdkKernelOptions = "ro net.ifnames=0"
func vmdkImgType() imageType {
return imageType{
name: "vmdk",
filename: "disk.vmdk",
mimeType: "application/x-vmdk",
packageSets: map[string]packageSetFunc{
osPkgsKey: vmdkCommonPackageSet,
},
kernelOptions: vmdkKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vmdk"},
exports: []string{"vmdk"},
basePartitionTables: defaultBasePartitionTables,
}
}
func ovaImgType() imageType {
return imageType{
name: "ova",
filename: "image.ova",
mimeType: "application/ovf",
packageSets: map[string]packageSetFunc{
osPkgsKey: vmdkCommonPackageSet,
},
kernelOptions: vmdkKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vmdk", "ovf", "archive"},
exports: []string{"archive"},
basePartitionTables: defaultBasePartitionTables,
}
}
func vmdkCommonPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"@core",
"chrony",
"cloud-init",
"firewalld",
"langpacks-en",
"open-vm-tools",
"selinux-policy-targeted",
},
Exclude: []string{
"dracut-config-rescue",
"rng-tools",
},
}
}


@ -0,0 +1,26 @@
package rhel8
import "github.com/osbuild/images/internal/workload"
// rhel8Workload is a RHEL-8-specific implementation of the workload interface
// for internal workload variants.
type rhel8Workload struct {
workload.BaseWorkload
packages []string
}
func (w rhel8Workload) GetPackages() []string {
return w.packages
}
func eapWorkload() workload.Workload {
w := rhel8Workload{}
w.packages = []string{
"java-1.8.0-openjdk",
"java-1.8.0-openjdk-devel",
"eap7-wildfly",
"eap7-artemis-native-wildfly",
}
return &w
}


@ -0,0 +1,475 @@
package rhel9
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
const amiKernelOptions = "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295"
var (
amiImgTypeX86_64 = imageType{
name: "ami",
filename: "image.raw",
mimeType: "application/octet-stream",
packageSets: map[string]packageSetFunc{
osPkgsKey: ec2CommonPackageSet,
},
kernelOptions: amiKernelOptions,
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image"},
exports: []string{"image"},
basePartitionTables: defaultBasePartitionTables,
}
ec2ImgTypeX86_64 = imageType{
name: "ec2",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: rhelEc2PackageSet,
},
kernelOptions: amiKernelOptions,
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: defaultBasePartitionTables,
}
ec2HaImgTypeX86_64 = imageType{
name: "ec2-ha",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
buildPkgsKey: ec2BuildPackageSet,
osPkgsKey: rhelEc2HaPackageSet,
},
kernelOptions: amiKernelOptions,
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: defaultBasePartitionTables,
}
amiImgTypeAarch64 = imageType{
name: "ami",
filename: "image.raw",
mimeType: "application/octet-stream",
packageSets: map[string]packageSetFunc{
buildPkgsKey: ec2BuildPackageSet,
osPkgsKey: ec2CommonPackageSet,
},
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 iommu.strict=0",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image"},
exports: []string{"image"},
basePartitionTables: defaultBasePartitionTables,
}
ec2ImgTypeAarch64 = imageType{
name: "ec2",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
buildPkgsKey: ec2BuildPackageSet,
osPkgsKey: rhelEc2PackageSet,
},
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 iommu.strict=0",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: defaultBasePartitionTables,
}
ec2SapImgTypeX86_64 = imageType{
name: "ec2-sap",
filename: "image.raw.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
buildPkgsKey: ec2BuildPackageSet,
osPkgsKey: rhelEc2SapPackageSet,
},
kernelOptions: "console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 processor.max_cstate=1 intel_idle.max_cstate=1",
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: defaultBasePartitionTables,
}
)
// default EC2 images config (common for all architectures)
func baseEc2ImageConfig() *distro.ImageConfig {
return &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
Timezone: common.ToPtr("UTC"),
TimeSynchronization: &osbuild.ChronyStageOptions{
Servers: []osbuild.ChronyConfigServer{
{
Hostname: "169.254.169.123",
Prefer: common.ToPtr(true),
Iburst: common.ToPtr(true),
Minpoll: common.ToPtr(4),
Maxpoll: common.ToPtr(4),
},
},
// an empty string removes any occurrence of the option from the configuration
LeapsecTz: common.ToPtr(""),
},
Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us",
X11Keymap: &osbuild.X11KeymapOptions{
Layouts: []string{"us"},
},
},
EnabledServices: []string{
"sshd",
"NetworkManager",
"nm-cloud-setup.service",
"nm-cloud-setup.timer",
"cloud-init",
"cloud-init-local",
"cloud-config",
"cloud-final",
"reboot.target",
"tuned",
},
DefaultTarget: common.ToPtr("multi-user.target"),
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
NetworkScripts: &osbuild.NetworkScriptsOptions{
IfcfgFiles: map[string]osbuild.IfcfgFile{
"eth0": {
Device: "eth0",
Bootproto: osbuild.IfcfgBootprotoDHCP,
OnBoot: common.ToPtr(true),
Type: osbuild.IfcfgTypeEthernet,
UserCtl: common.ToPtr(true),
PeerDNS: common.ToPtr(true),
IPv6Init: common.ToPtr(false),
},
},
},
},
},
SystemdLogind: []*osbuild.SystemdLogindStageOptions{
{
Filename: "00-getty-fixes.conf",
Config: osbuild.SystemdLogindConfigDropin{
Login: osbuild.SystemdLogindConfigLoginSection{
NAutoVTs: common.ToPtr(0),
},
},
},
},
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "00-rhel-default-user.cfg",
Config: osbuild.CloudInitConfigFile{
SystemInfo: &osbuild.CloudInitConfigSystemInfo{
DefaultUser: &osbuild.CloudInitConfigDefaultUser{
Name: "ec2-user",
},
},
},
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-nouveau.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("nouveau"),
},
},
{
Filename: "blacklist-amdgpu.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("amdgpu"),
},
},
},
// COMPOSER-1807
DracutConf: []*osbuild.DracutConfStageOptions{
{
Filename: "sgdisk.conf",
Config: osbuild.DracutConfigFile{
Install: []string{"sgdisk"},
},
},
},
SystemdUnit: []*osbuild.SystemdUnitStageOptions{
// RHBZ#1822863
{
Unit: "nm-cloud-setup.service",
Dropin: "10-rh-enable-for-ec2.conf",
Config: osbuild.SystemdServiceUnitDropin{
Service: &osbuild.SystemdUnitServiceSection{
Environment: "NM_CLOUD_SETUP_EC2=yes",
},
},
},
},
Authselect: &osbuild.AuthselectStageOptions{
Profile: "sssd",
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
PasswordAuthentication: common.ToPtr(false),
},
},
}
}
func defaultEc2ImageConfig(osVersion string, rhsm bool) *distro.ImageConfig {
ic := baseEc2ImageConfig()
if rhsm && common.VersionLessThan(osVersion, "9.1") {
ic = appendRHSM(ic)
// Disable RHSM redhat.repo management
rhsmConf := ic.RHSMConfig[subscription.RHSMConfigNoSubscription]
rhsmConf.SubMan.Rhsm = &osbuild.SubManConfigRHSMSection{ManageRepos: common.ToPtr(false)}
ic.RHSMConfig[subscription.RHSMConfigNoSubscription] = rhsmConf
}
return ic
}
// default AMI (EC2 BYOS) images config
func defaultAMIImageConfig(osVersion string, rhsm bool) *distro.ImageConfig {
ic := defaultEc2ImageConfig(osVersion, rhsm)
if rhsm {
// defaultEc2ImageConfig() adds the rhsm options only for RHEL < 9.1;
// add them unconditionally for AMI
ic = appendRHSM(ic)
}
return ic
}
func defaultEc2ImageConfigX86_64(osVersion string, rhsm bool) *distro.ImageConfig {
ic := defaultEc2ImageConfig(osVersion, rhsm)
return appendEC2DracutX86_64(ic)
}
func defaultAMIImageConfigX86_64(osVersion string, rhsm bool) *distro.ImageConfig {
ic := defaultAMIImageConfig(osVersion, rhsm).InheritFrom(defaultEc2ImageConfigX86_64(osVersion, rhsm))
return appendEC2DracutX86_64(ic)
}
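// exampleEc2Configs is an illustrative sketch (not part of the vendored file)
// of how the EC2/AMI configs above are layered; "9.0" and the rhsm flag are
// made-up inputs for a hypothetical x86_64 build.
func exampleEc2Configs() (*distro.ImageConfig, *distro.ImageConfig) {
// RHUI EC2 image: RHSM repo management tweaks apply only below 9.1
ec2 := defaultEc2ImageConfigX86_64("9.0", true)
// BYOS AMI: RHSM auto-registration is added unconditionally
ami := defaultAMIImageConfigX86_64("9.0", true)
return ec2, ami
}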
// common ec2 image build package set
func ec2BuildPackageSet(t *imageType) rpmmd.PackageSet {
return distroBuildPackageSet(t).Append(
rpmmd.PackageSet{
Include: []string{
"python3-pyyaml",
},
})
}
func ec2CommonPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"authselect-compat",
"chrony",
"cloud-init",
"cloud-utils-growpart",
"dhcp-client",
"yum-utils",
"dracut-config-generic",
"gdisk",
"grub2",
"langpacks-en",
"NetworkManager-cloud-setup",
"redhat-release",
"redhat-release-eula",
"rsync",
"tar",
},
Exclude: []string{
"aic94xx-firmware",
"alsa-firmware",
"alsa-tools-firmware",
"biosdevname",
"iprutils",
"ivtv-firmware",
"libertas-sd8787-firmware",
"plymouth",
// RHBZ#2064087
"dracut-config-rescue",
// RHBZ#2075815
"qemu-guest-agent",
},
}.Append(coreOsCommonPackageSet(t)).Append(distroSpecificPackageSet(t))
}
// common rhel ec2 RHUI image package set
func rhelEc2CommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := ec2CommonPackageSet(t)
// Include "redhat-cloud-client-configuration" on 9.1+ (COMPOSER-1805)
if !common.VersionLessThan(t.arch.distro.osVersion, "9.1") {
ps.Include = append(ps.Include, "redhat-cloud-client-configuration")
}
return ps
}
// rhel-ec2 image package set
func rhelEc2PackageSet(t *imageType) rpmmd.PackageSet {
ec2PackageSet := rhelEc2CommonPackageSet(t)
ec2PackageSet = ec2PackageSet.Append(rpmmd.PackageSet{
Include: []string{
"rh-amazon-rhui-client",
},
Exclude: []string{
"alsa-lib",
},
})
return ec2PackageSet
}
// rhel-ha-ec2 image package set
func rhelEc2HaPackageSet(t *imageType) rpmmd.PackageSet {
ec2HaPackageSet := rhelEc2CommonPackageSet(t)
ec2HaPackageSet = ec2HaPackageSet.Append(rpmmd.PackageSet{
Include: []string{
"fence-agents-all",
"pacemaker",
"pcs",
"rh-amazon-rhui-client-ha",
},
Exclude: []string{
"alsa-lib",
},
})
return ec2HaPackageSet
}
// rhel-sap-ec2 image package set
// Includes the common ec2 package set, the common SAP packages, and
// the amazon rhui sap package
func rhelEc2SapPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"rh-amazon-rhui-client-sap-bundle-e4s",
},
}.Append(rhelEc2CommonPackageSet(t)).Append(SapPackageSet(t))
}
func mkEc2ImgTypeX86_64(osVersion string, rhsm bool) imageType {
it := ec2ImgTypeX86_64
ic := defaultEc2ImageConfigX86_64(osVersion, rhsm)
it.defaultImageConfig = ic
return it
}
func mkAMIImgTypeX86_64(osVersion string, rhsm bool) imageType {
it := amiImgTypeX86_64
ic := defaultAMIImageConfigX86_64(osVersion, rhsm)
it.defaultImageConfig = ic
return it
}
func mkEC2SapImgTypeX86_64(osVersion string, rhsm bool) imageType {
it := ec2SapImgTypeX86_64
it.defaultImageConfig = sapImageConfig(osVersion).InheritFrom(defaultEc2ImageConfigX86_64(osVersion, rhsm))
return it
}
func mkEc2HaImgTypeX86_64(osVersion string, rhsm bool) imageType {
it := ec2HaImgTypeX86_64
ic := defaultEc2ImageConfigX86_64(osVersion, rhsm)
it.defaultImageConfig = ic
return it
}
func mkAMIImgTypeAarch64(osVersion string, rhsm bool) imageType {
it := amiImgTypeAarch64
ic := defaultAMIImageConfig(osVersion, rhsm)
it.defaultImageConfig = ic
return it
}
func mkEC2ImgTypeAarch64(osVersion string, rhsm bool) imageType {
it := ec2ImgTypeAarch64
ic := defaultEc2ImageConfig(osVersion, rhsm)
it.defaultImageConfig = ic
return it
}
// Add RHSM config options to ImageConfig.
// Used for RHEL distros.
func appendRHSM(ic *distro.ImageConfig) *distro.ImageConfig {
rhsm := &distro.ImageConfig{
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
// RHBZ#1932802
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// Don't disable RHSM redhat.repo management on the AMI
// image, which is BYOS and does not use RHUI for content.
// Otherwise, subscribing the system manually after booting
// it would result in an empty redhat.repo. Without RHUI,
// such a system would have no way to get Red Hat content
// other than enabling the repo management manually, which
// would be very confusing.
},
},
subscription.RHSMConfigWithSubscription: {
// RHBZ#1932802
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable the redhat.repo management if the user
// explicitly request the system to be subscribed
},
},
},
}
return rhsm.InheritFrom(ic)
}
func appendEC2DracutX86_64(ic *distro.ImageConfig) *distro.ImageConfig {
ic.DracutConf = append(ic.DracutConf,
&osbuild.DracutConfStageOptions{
Filename: "ec2.conf",
Config: osbuild.DracutConfigFile{
AddDrivers: []string{
"nvme",
"xen-blkfront",
},
},
})
return ic
}


@ -0,0 +1,70 @@
package rhel9
import (
"errors"
"fmt"
"sort"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
)
type architecture struct {
distro *distribution
name string
imageTypes map[string]distro.ImageType
imageTypeAliases map[string]string
}
func (a *architecture) Name() string {
return a.name
}
func (a *architecture) ListImageTypes() []string {
itNames := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
itNames = append(itNames, name)
}
sort.Strings(itNames)
return itNames
}
func (a *architecture) GetImageType(name string) (distro.ImageType, error) {
t, exists := a.imageTypes[name]
if !exists {
aliasForName, exists := a.imageTypeAliases[name]
if !exists {
return nil, errors.New("invalid image type: " + name)
}
t, exists = a.imageTypes[aliasForName]
if !exists {
panic(fmt.Sprintf("image type '%s' is an alias to a non-existing image type '%s'", name, aliasForName))
}
}
return t, nil
}
func (a *architecture) addImageTypes(platform platform.Platform, imageTypes ...imageType) {
if a.imageTypes == nil {
a.imageTypes = map[string]distro.ImageType{}
}
for idx := range imageTypes {
it := imageTypes[idx]
it.arch = a
it.platform = platform
a.imageTypes[it.name] = &it
for _, alias := range it.nameAliases {
if a.imageTypeAliases == nil {
a.imageTypeAliases = map[string]string{}
}
if existingAliasFor, exists := a.imageTypeAliases[alias]; exists {
panic(fmt.Sprintf("image type alias '%s' for '%s' is already defined for another image type '%s'", alias, it.name, existingAliasFor))
}
a.imageTypeAliases[alias] = it.name
}
}
}
func (a *architecture) Distro() distro.Distro {
return a.distro
}


@ -0,0 +1,596 @@
package rhel9
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
var (
// Azure non-RHEL image type
azureImgType = imageType{
name: "vhd",
filename: "disk.vhd",
mimeType: "application/x-vhd",
packageSets: map[string]packageSetFunc{
osPkgsKey: azurePackageSet,
},
defaultImageConfig: defaultAzureImageConfig,
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc"},
exports: []string{"vpc"},
basePartitionTables: defaultBasePartitionTables,
}
// Azure BYOS image type
azureByosImgType = imageType{
name: "vhd",
filename: "disk.vhd",
mimeType: "application/x-vhd",
packageSets: map[string]packageSetFunc{
osPkgsKey: azurePackageSet,
},
defaultImageConfig: defaultAzureByosImageConfig.InheritFrom(defaultAzureImageConfig),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc"},
exports: []string{"vpc"},
basePartitionTables: defaultBasePartitionTables,
}
// Azure RHUI image type
azureRhuiImgType = imageType{
name: "azure-rhui",
filename: "disk.vhd.xz",
mimeType: "application/xz",
compression: "xz",
packageSets: map[string]packageSetFunc{
osPkgsKey: azureRhuiPackageSet,
},
defaultImageConfig: defaultAzureRhuiImageConfig.InheritFrom(defaultAzureImageConfig),
kernelOptions: defaultAzureKernelOptions,
bootable: true,
defaultSize: 64 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vpc", "xz"},
exports: []string{"xz"},
basePartitionTables: azureRhuiBasePartitionTables,
}
)
// PACKAGE SETS
// Common Azure image package set
func azureCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"@Server",
"bzip2",
"cloud-init",
"cloud-utils-growpart",
"dracut-config-generic",
"efibootmgr",
"gdisk",
"hyperv-daemons",
"kernel-core",
"kernel-modules",
"kernel",
"langpacks-en",
"lvm2",
"NetworkManager",
"NetworkManager-cloud-setup",
"nvme-cli",
"patch",
"rng-tools",
"selinux-policy-targeted",
"uuid",
"WALinuxAgent",
"yum-utils",
},
Exclude: []string{
"aic94xx-firmware",
"alsa-firmware",
"alsa-lib",
"alsa-sof-firmware",
"alsa-tools-firmware",
"biosdevname",
"bolt",
"buildah",
"cockpit-podman",
"containernetworking-plugins",
"dnf-plugin-spacewalk",
"dracut-config-rescue",
"glibc-all-langpacks",
"iprutils",
"ivtv-firmware",
"iwl100-firmware",
"iwl1000-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"libertas-sd8686-firmware",
"libertas-sd8787-firmware",
"libertas-usb8388-firmware",
"NetworkManager-config-server",
"plymouth",
"podman",
"python3-dnf-plugin-spacewalk",
"python3-hwdata",
"python3-rhnlib",
"rhn-check",
"rhn-client-tools",
"rhn-setup",
"rhnlib",
"rhnsd",
"usb_modeswitch",
},
}.Append(distroSpecificPackageSet(t))
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"rhc",
},
})
}
return ps
}
// Azure BYOS image package set
func azurePackageSet(t *imageType) rpmmd.PackageSet {
return azureCommonPackageSet(t)
}
// Azure RHUI image package set
func azureRhuiPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"rhui-azure-rhel9",
},
}.Append(azureCommonPackageSet(t))
}
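The package-set helpers above rely on rpmmd.PackageSet.Append returning a merged copy rather than mutating its receiver, which is why its result is reassigned (or returned directly). A minimal sketch of that assumption, written as if it sat in the same rhel9 package; the helper name is illustrative only:

func examplePackageSetAppend() rpmmd.PackageSet {
	base := rpmmd.PackageSet{
		Include: []string{"kernel"},
		Exclude: []string{"rng-tools"},
	}
	// Append is assumed to be non-mutating; discarding its return value
	// would silently drop the extra package.
	base = base.Append(rpmmd.PackageSet{Include: []string{"rhui-azure-rhel9"}})
	return base
}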
// PARTITION TABLES
var azureRhuiBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Size: 64 * common.GibiByte,
Partitions: []disk.Partition{
{
Size: 500 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.MebiByte,
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Type: disk.LVMPartitionGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 1 * common.GibiByte,
Name: "homelv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "home",
Mountpoint: "/home",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "tmplv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "tmp",
Mountpoint: "/tmp",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "usrlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "usr",
Mountpoint: "/usr",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "varlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "var",
Mountpoint: "/var",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Size: 64 * common.GibiByte,
Partitions: []disk.Partition{
{
Size: 500 * common.MebiByte,
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte,
Type: disk.FilesystemDataGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Type: disk.LVMPartitionGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 1 * common.GibiByte,
Name: "homelv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "home",
Mountpoint: "/home",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte,
Name: "tmplv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "tmp",
Mountpoint: "/tmp",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "usrlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "usr",
Mountpoint: "/usr",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 10 * common.GibiByte,
Name: "varlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "var",
Mountpoint: "/var",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
}
var defaultAzureKernelOptions = "ro console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"
var defaultAzureImageConfig = &distro.ImageConfig{
Timezone: common.ToPtr("Etc/UTC"),
Locale: common.ToPtr("en_US.UTF-8"),
Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us",
X11Keymap: &osbuild.X11KeymapOptions{
Layouts: []string{"us"},
},
},
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel-core",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
},
},
EnabledServices: []string{
"firewalld",
"nm-cloud-setup.service",
"nm-cloud-setup.timer",
"sshd",
"waagent",
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
ClientAliveInterval: common.ToPtr(180),
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-amdgpu.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("amdgpu"),
},
},
{
Filename: "blacklist-floppy.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("floppy"),
},
},
{
Filename: "blacklist-nouveau.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("nouveau"),
osbuild.NewModprobeConfigCmdBlacklist("lbm-nouveau"),
},
},
},
CloudInit: []*osbuild.CloudInitStageOptions{
{
Filename: "10-azure-kvp.cfg",
Config: osbuild.CloudInitConfigFile{
Reporting: &osbuild.CloudInitConfigReporting{
Logging: &osbuild.CloudInitConfigReportingHandlers{
Type: "log",
},
Telemetry: &osbuild.CloudInitConfigReportingHandlers{
Type: "hyperv",
},
},
},
},
{
Filename: "91-azure_datasource.cfg",
Config: osbuild.CloudInitConfigFile{
Datasource: &osbuild.CloudInitConfigDatasource{
Azure: &osbuild.CloudInitConfigDatasourceAzure{
ApplyNetworkConfig: false,
},
},
DatasourceList: []string{
"Azure",
},
},
},
},
PwQuality: &osbuild.PwqualityConfStageOptions{
Config: osbuild.PwqualityConfConfig{
Minlen: common.ToPtr(6),
Minclass: common.ToPtr(3),
Dcredit: common.ToPtr(0),
Ucredit: common.ToPtr(0),
Lcredit: common.ToPtr(0),
Ocredit: common.ToPtr(0),
},
},
WAAgentConfig: &osbuild.WAAgentConfStageOptions{
Config: osbuild.WAAgentConfig{
RDFormat: common.ToPtr(false),
RDEnableSwap: common.ToPtr(false),
},
},
Grub2Config: &osbuild.GRUB2Config{
TerminalInput: []string{"serial", "console"},
TerminalOutput: []string{"serial", "console"},
Serial: "serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1",
Timeout: 10,
},
UdevRules: &osbuild.UdevRulesStageOptions{
Filename: "/etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules",
Rules: osbuild.UdevRules{
osbuild.UdevRuleComment{
Comment: []string{
"Accelerated Networking on Azure exposes a new SRIOV interface to the VM.",
"This interface is transparently bonded to the synthetic interface,",
"so NetworkManager should just ignore any SRIOV interfaces.",
},
},
osbuild.NewUdevRule(
[]osbuild.UdevKV{
{K: "SUBSYSTEM", O: "==", V: "net"},
{K: "DRIVERS", O: "==", V: "hv_pci"},
{K: "ACTION", O: "==", V: "add"},
{K: "ENV", A: "NM_UNMANAGED", O: "=", V: "1"},
},
),
},
},
SystemdUnit: []*osbuild.SystemdUnitStageOptions{
{
Unit: "nm-cloud-setup.service",
Dropin: "10-rh-enable-for-azure.conf",
Config: osbuild.SystemdServiceUnitDropin{
Service: &osbuild.SystemdUnitServiceSection{
Environment: "NM_CLOUD_SETUP_AZURE=yes",
},
},
},
},
DefaultTarget: common.ToPtr("multi-user.target"),
}
// Diff of the default image config compared to `defaultAzureImageConfig`
var defaultAzureByosImageConfig = &distro.ImageConfig{
GPGKeyFiles: []string{
"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// Don't disable RHSM redhat.repo management on the Azure
// BYOS image, which does not use RHUI for content.
// Otherwise, subscribing the system manually after booting
// it would result in an empty redhat.repo. Without RHUI, such
// a system would have no way to get Red Hat content unless
// the repo management were re-enabled manually, which would
// be very confusing.
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
// Diff of the default image config compared to `defaultAzureImageConfig`
var defaultAzureRhuiImageConfig = &distro.ImageConfig{
GPGKeyFiles: []string{
"/etc/pki/rpm-gpg/RPM-GPG-KEY-microsoft-azure-release",
"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
},
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
DnfPlugins: &osbuild.RHSMStageOptionsDnfPlugins{
SubscriptionManager: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
},
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
Rhsm: &osbuild.SubManConfigRHSMSection{
ManageRepos: common.ToPtr(false),
},
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
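A short, assumption-flagged sketch of how the two override configs above are expected to layer onto the shared Azure defaults, written as if it lived in the same rhel9 package; it assumes InheritFrom only fills in fields left unset on the receiver, as its uses in this file suggest:

func exampleAzureConfigLayering() *distro.ImageConfig {
	// The RHUI overrides win where both configs set a field; everything else
	// is inherited from defaultAzureImageConfig.
	cfg := defaultAzureRhuiImageConfig.InheritFrom(defaultAzureImageConfig)
	// Expected: cfg.Timezone is "Etc/UTC" (inherited from the base config),
	// while cfg.GPGKeyFiles keeps the RHUI-specific key list defined above.
	return cfg
}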


@ -0,0 +1,317 @@
package rhel9
import (
"fmt"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
var (
tarImgType = imageType{
name: "tar",
filename: "root.tar.xz",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: func(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{"policycoreutils", "selinux-policy-targeted"},
Exclude: []string{"rng-tools"},
}
},
},
image: tarImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "archive"},
exports: []string{"archive"},
}
imageInstaller = imageType{
name: "image-installer",
filename: "installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
osPkgsKey: bareMetalPackageSet,
installerPkgsKey: anacondaPackageSet,
},
rpmOstree: false,
bootISO: true,
bootable: true,
image: imageInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "os", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
)
func bareMetalPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"authselect-compat",
"chrony",
"cockpit-system",
"cockpit-ws",
"dhcp-client",
"dnf-utils",
"dosfstools",
"firewalld",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"lvm2",
"net-tools",
"nfs-utils",
"oddjob",
"oddjob-mkhomedir",
"policycoreutils",
"psmisc",
"python3-jsonschema",
"qemu-guest-agent",
"redhat-release",
"redhat-release-eula",
"rsync",
"tar",
"tcpdump",
},
}.Append(coreOsCommonPackageSet(t)).Append(distroBuildPackageSet(t))
// Ensure that subscription-manager is not pulled in on non-RHEL distros
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"subscription-manager-cockpit",
},
})
}
return ps
}
func installerPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"anaconda-dracut",
"curl",
"dracut-config-generic",
"dracut-network",
"hostname",
"iwl100-firmware",
"iwl1000-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"kernel",
"less",
"nfs-utils",
"openssh-clients",
"ostree",
"plymouth",
"prefixdevname",
"rng-tools",
"rpcbind",
"selinux-policy-targeted",
"systemd",
"tar",
"xfsprogs",
"xz",
},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"biosdevname",
},
})
}
return ps
}
func anacondaPackageSet(t *imageType) rpmmd.PackageSet {
// common installer packages
ps := installerPackageSet(t)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"aajohan-comfortaa-fonts",
"abattis-cantarell-fonts",
"alsa-firmware",
"alsa-tools-firmware",
"anaconda",
"anaconda-dracut",
"anaconda-install-env-deps",
"anaconda-widgets",
"audit",
"bind-utils",
"bitmap-fangsongti-fonts",
"bzip2",
"cryptsetup",
"curl",
"dbus-x11",
"dejavu-sans-fonts",
"dejavu-sans-mono-fonts",
"device-mapper-persistent-data",
"dmidecode",
"dnf",
"dracut-config-generic",
"dracut-network",
"efibootmgr",
"ethtool",
"fcoe-utils",
"ftp",
"gdb-gdbserver",
"gdisk",
"glibc-all-langpacks",
"gnome-kiosk",
"google-noto-sans-cjk-ttc-fonts",
"grub2-tools",
"grub2-tools-extra",
"grub2-tools-minimal",
"grubby",
"gsettings-desktop-schemas",
"hdparm",
"hexedit",
"hostname",
"initscripts",
"ipmitool",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
"jomolhari-fonts",
"kacst-farsi-fonts",
"kacst-qurn-fonts",
"kbd",
"kbd-misc",
"kdump-anaconda-addon",
"kernel",
"khmeros-base-fonts",
"less",
"libblockdev-lvm-dbus",
"libibverbs",
"libreport-plugin-bugzilla",
"libreport-plugin-reportuploader",
"librsvg2",
"linux-firmware",
"lklug-fonts",
"lldpad",
"lohit-assamese-fonts",
"lohit-bengali-fonts",
"lohit-devanagari-fonts",
"lohit-gujarati-fonts",
"lohit-gurmukhi-fonts",
"lohit-kannada-fonts",
"lohit-odia-fonts",
"lohit-tamil-fonts",
"lohit-telugu-fonts",
"lsof",
"madan-fonts",
"mtr",
"mt-st",
"net-tools",
"nfs-utils",
"nmap-ncat",
"nm-connection-editor",
"nss-tools",
"openssh-clients",
"openssh-server",
"oscap-anaconda-addon",
"ostree",
"pciutils",
"perl-interpreter",
"pigz",
"plymouth",
"prefixdevname",
"python3-pyatspi",
"rdma-core",
"redhat-release-eula",
"rng-tools",
"rpcbind",
"rpm-ostree",
"rsync",
"rsyslog",
"selinux-policy-targeted",
"sg3_utils",
"sil-abyssinica-fonts",
"sil-padauk-fonts",
"sil-scheherazade-fonts",
"smartmontools",
"smc-meera-fonts",
"spice-vdagent",
"strace",
"systemd",
"tar",
"thai-scalable-waree-fonts",
"tigervnc-server-minimal",
"tigervnc-server-module",
"udisks2",
"udisks2-iscsi",
"usbutils",
"vim-minimal",
"volume_key",
"wget",
"xfsdump",
"xfsprogs",
"xorg-x11-drivers",
"xorg-x11-fonts-misc",
"xorg-x11-server-utils",
"xorg-x11-server-Xorg",
"xorg-x11-xauth",
"xz",
},
})
ps = ps.Append(anacondaBootPackageSet(t))
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"biosdevname",
"dmidecode",
"grub2-tools-efi",
"memtest86+",
},
})
case platform.ARCH_AARCH64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"dmidecode",
},
})
default:
panic(fmt.Sprintf("unsupported arch: %s", t.arch.Name()))
}
return ps
}


@ -0,0 +1,463 @@
package rhel9
import (
"errors"
"fmt"
"sort"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/runner"
)
var (
// rhel9 & cs9 share the same list
// of allowed profiles so a single
// allow list can be used
oscapProfileAllowList = []oscap.Profile{
oscap.AnssiBp28Enhanced,
oscap.AnssiBp28High,
oscap.AnssiBp28Intermediary,
oscap.AnssiBp28Minimal,
oscap.Cis,
oscap.CisServerL1,
oscap.CisWorkstationL1,
oscap.CisWorkstationL2,
oscap.Cui,
oscap.E8,
oscap.Hippa,
oscap.IsmO,
oscap.Ospp,
oscap.PciDss,
oscap.Stig,
oscap.StigGui,
}
)
type distribution struct {
name string
product string
osVersion string
releaseVersion string
modulePlatformID string
vendor string
ostreeRefTmpl string
isolabelTmpl string
runner runner.Runner
arches map[string]distro.Arch
defaultImageConfig *distro.ImageConfig
}
// CentOS- and RHEL-based OS image configuration defaults
var defaultDistroImageConfig = &distro.ImageConfig{
Timezone: common.ToPtr("America/New_York"),
Locale: common.ToPtr("C.UTF-8"),
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
UpdateDefault: true,
DefaultKernel: "kernel",
},
Network: &osbuild.SysconfigNetworkOptions{
Networking: true,
NoZeroConf: true,
},
},
},
}
func (d *distribution) Name() string {
return d.name
}
func (d *distribution) Releasever() string {
return d.releaseVersion
}
func (d *distribution) ModulePlatformID() string {
return d.modulePlatformID
}
func (d *distribution) OSTreeRef() string {
return d.ostreeRefTmpl
}
func (d *distribution) ListArches() []string {
archNames := make([]string, 0, len(d.arches))
for name := range d.arches {
archNames = append(archNames, name)
}
sort.Strings(archNames)
return archNames
}
func (d *distribution) GetArch(name string) (distro.Arch, error) {
arch, exists := d.arches[name]
if !exists {
return nil, errors.New("invalid architecture: " + name)
}
return arch, nil
}
func (d *distribution) addArches(arches ...architecture) {
if d.arches == nil {
d.arches = map[string]distro.Arch{}
}
// Do not make copies of architectures, as opposed to image types,
// because architecture definitions are not used by more than a single
// distro definition.
for idx := range arches {
d.arches[arches[idx].name] = &arches[idx]
}
}
func (d *distribution) isRHEL() bool {
return strings.HasPrefix(d.name, "rhel")
}
func (d *distribution) getDefaultImageConfig() *distro.ImageConfig {
return d.defaultImageConfig
}
func New() distro.Distro {
// default minor: create default minor version (current GA) and rename it
d := newDistro("rhel", 1)
d.name = "rhel-9"
return d
}
func NewCentOS9() distro.Distro {
return newDistro("centos", 0)
}
func NewRHEL90() distro.Distro {
return newDistro("rhel", 0)
}
func NewRHEL91() distro.Distro {
return newDistro("rhel", 1)
}
func NewRHEL92() distro.Distro {
return newDistro("rhel", 2)
}
func NewRHEL93() distro.Distro {
return newDistro("rhel", 3)
}
func newDistro(name string, minor int) *distribution {
var rd distribution
switch name {
case "rhel":
rd = distribution{
name: fmt.Sprintf("rhel-9%d", minor),
product: "Red Hat Enterprise Linux",
osVersion: fmt.Sprintf("9.%d", minor),
releaseVersion: "9",
modulePlatformID: "platform:el9",
vendor: "redhat",
ostreeRefTmpl: "rhel/9/%s/edge",
isolabelTmpl: fmt.Sprintf("RHEL-9-%d-0-BaseOS-%%s", minor),
runner: &runner.RHEL{Major: uint64(9), Minor: uint64(minor)},
defaultImageConfig: defaultDistroImageConfig,
}
case "centos":
rd = distribution{
name: "centos-9",
product: "CentOS Stream",
osVersion: "9-stream",
releaseVersion: "9",
modulePlatformID: "platform:el9",
vendor: "centos",
ostreeRefTmpl: "centos/9/%s/edge",
isolabelTmpl: "CentOS-Stream-9-BaseOS-%s",
runner: &runner.CentOS{Version: uint64(9)},
defaultImageConfig: defaultDistroImageConfig,
}
default:
panic(fmt.Sprintf("unknown distro name: %s", name))
}
// Architecture definitions
x86_64 := architecture{
name: platform.ARCH_X86_64.String(),
distro: &rd,
}
aarch64 := architecture{
name: platform.ARCH_AARCH64.String(),
distro: &rd,
}
ppc64le := architecture{
distro: &rd,
name: platform.ARCH_PPC64LE.String(),
}
s390x := architecture{
distro: &rd,
name: platform.ARCH_S390X.String(),
}
qcow2ImgType := mkQcow2ImgType(rd)
ociImgType := qcow2ImgType
ociImgType.name = "oci"
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "1.1",
},
},
qcow2ImgType,
ociImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
},
},
openstackImgType,
)
azureX64Platform := &platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
}
azureAarch64Platform := &platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VHD,
},
}
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_VMDK,
},
},
vmdkImgType,
)
x86_64.addImageTypes(
&platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_OVA,
},
},
ovaImgType,
)
ec2X86Platform := &platform.X86{
BIOS: true,
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
}
x86_64.addImageTypes(
ec2X86Platform,
mkAMIImgTypeX86_64(rd.osVersion, rd.isRHEL()),
)
gceX86Platform := &platform.X86{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_GCE,
},
}
x86_64.addImageTypes(
gceX86Platform,
mkGCEImageType(rd.isRHEL()),
)
x86_64.addImageTypes(
&platform.X86{
BasePlatform: platform.BasePlatform{
FirmwarePackages: []string{
"microcode_ctl", // ??
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6050-firmware",
},
},
BIOS: true,
UEFIVendor: rd.vendor,
},
edgeOCIImgType,
edgeCommitImgType,
edgeInstallerImgType,
edgeRawImgType,
imageInstaller,
edgeAMIImgType,
)
x86_64.addImageTypes(
&platform.X86{
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
BIOS: false,
UEFIVendor: rd.vendor,
},
edgeSimplifiedInstallerImgType,
)
x86_64.addImageTypes(
&platform.X86{},
tarImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
},
},
openstackImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{},
tarImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
BasePlatform: platform.BasePlatform{},
UEFIVendor: rd.vendor,
},
edgeCommitImgType,
edgeOCIImgType,
edgeInstallerImgType,
edgeSimplifiedInstallerImgType,
imageInstaller,
edgeAMIImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
UEFIVendor: rd.vendor,
},
edgeRawImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "1.1",
},
},
qcow2ImgType,
)
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
},
mkAMIImgTypeAarch64(rd.osVersion, rd.isRHEL()),
)
ppc64le.addImageTypes(
&platform.PPC64LE{
BIOS: true,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "1.1",
},
},
qcow2ImgType,
)
ppc64le.addImageTypes(
&platform.PPC64LE{},
tarImgType,
)
s390x.addImageTypes(
&platform.S390X{
Zipl: true,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_QCOW2,
QCOW2Compat: "1.1",
},
},
qcow2ImgType,
)
s390x.addImageTypes(
&platform.S390X{},
tarImgType,
)
if rd.isRHEL() {
// add azure to RHEL distro only
x86_64.addImageTypes(azureX64Platform, azureRhuiImgType, azureByosImgType)
aarch64.addImageTypes(azureAarch64Platform, azureRhuiImgType, azureByosImgType)
// keep the RHEL EC2 x86_64 images before 9.3 BIOS-only for backward compatibility
if common.VersionLessThan(rd.osVersion, "9.3") {
ec2X86Platform = &platform.X86{
BIOS: true,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
}
}
// add ec2 image types to RHEL distro only
x86_64.addImageTypes(ec2X86Platform, mkEc2ImgTypeX86_64(rd.osVersion, rd.isRHEL()), mkEc2HaImgTypeX86_64(rd.osVersion, rd.isRHEL()), mkEC2SapImgTypeX86_64(rd.osVersion, rd.isRHEL()))
aarch64.addImageTypes(
&platform.Aarch64{
UEFIVendor: rd.vendor,
BasePlatform: platform.BasePlatform{
ImageFormat: platform.FORMAT_RAW,
},
},
mkEC2ImgTypeAarch64(rd.osVersion, rd.isRHEL()),
)
// add GCE RHUI image to RHEL only
x86_64.addImageTypes(gceX86Platform, mkGCERHUIImageType(rd.isRHEL()))
} else {
x86_64.addImageTypes(azureX64Platform, azureImgType)
aarch64.addImageTypes(azureAarch64Platform, azureImgType)
}
rd.addArches(x86_64, aarch64, ppc64le, s390x)
return &rd
}
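A hedged sketch of the behaviour implied by the isRHEL() branch above: the RHUI and EC2 variants are only registered for the RHEL flavours, so the same lookup should fail on CentOS Stream. The helper is purely illustrative and assumes it is placed in the rhel9 package next to the constructors above:

func exampleRhuiOnlyOnRHEL() error {
	rhelArch, err := New().GetArch(platform.ARCH_X86_64.String())
	if err != nil {
		return err
	}
	if _, err := rhelArch.GetImageType("azure-rhui"); err != nil {
		return fmt.Errorf("azure-rhui expected on rhel-9: %w", err)
	}
	centosArch, err := NewCentOS9().GetArch(platform.ARCH_X86_64.String())
	if err != nil {
		return err
	}
	if _, err := centosArch.GetImageType("azure-rhui"); err == nil {
		return fmt.Errorf("azure-rhui should not be registered on centos-9")
	}
	return nil
}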


@ -0,0 +1,511 @@
package rhel9
import (
"fmt"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
var (
// Image Definitions
edgeCommitImgType = imageType{
name: "edge-commit",
nameAliases: []string{"rhel-edge-commit"},
filename: "commit.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: edgeCommitPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices,
},
rpmOstree: true,
image: edgeCommitImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "ostree-commit", "commit-archive"},
exports: []string{"commit-archive"},
}
edgeOCIImgType = imageType{
name: "edge-container",
nameAliases: []string{"rhel-edge-container"},
filename: "container.tar",
mimeType: "application/x-tar",
packageSets: map[string]packageSetFunc{
osPkgsKey: edgeCommitPackageSet,
containerPkgsKey: func(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{"nginx"}, // FIXME: this has no effect
}
},
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices,
},
rpmOstree: true,
bootISO: false,
image: edgeContainerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "ostree-commit", "container-tree", "container"},
exports: []string{"container"},
}
edgeRawImgType = imageType{
name: "edge-raw-image",
nameAliases: []string{"rhel-edge-raw-image"},
filename: "image.raw.xz",
compression: "xz",
mimeType: "application/xz",
packageSets: nil,
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
},
defaultSize: 10 * common.GibiByte,
rpmOstree: true,
bootable: true,
bootISO: false,
image: edgeRawImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"ostree-deployment", "image", "xz"},
exports: []string{"xz"},
basePartitionTables: edgeBasePartitionTables,
}
edgeInstallerImgType = imageType{
name: "edge-installer",
nameAliases: []string{"rhel-edge-installer"},
filename: "installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
// TODO: non-arch-specific package set handling for installers
// This image type requires build packages for installers and
// ostree/edge. For now we only have x86-64 installer build
// package sets defined. When we add installer build package sets
// for other architectures, this will need to be moved to the
// architecture and the merging will happen in the PackageSets()
// method like the other sets.
installerPkgsKey: edgeInstallerPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
EnabledServices: edgeServices,
},
rpmOstree: true,
bootISO: true,
image: edgeInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"anaconda-tree", "rootfs-image", "efiboot-tree", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
}
edgeSimplifiedInstallerImgType = imageType{
name: "edge-simplified-installer",
nameAliases: []string{"rhel-edge-simplified-installer"},
filename: "simplified-installer.iso",
mimeType: "application/x-iso9660-image",
packageSets: map[string]packageSetFunc{
// TODO: non-arch-specific package set handling for installers
// This image type requires build packages for installers and
// ostree/edge. For now we only have x86-64 installer build
// package sets defined. When we add installer build package sets
// for other architectures, this will need to be moved to the
// architecture and the merging will happen in the PackageSets()
// method like the other sets.
installerPkgsKey: edgeSimplifiedInstallerPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
EnabledServices: edgeServices,
},
defaultSize: 10 * common.GibiByte,
rpmOstree: true,
bootable: true,
bootISO: true,
image: edgeSimplifiedInstallerImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"ostree-deployment", "image", "xz", "coi-tree", "efiboot-tree", "bootiso-tree", "bootiso"},
exports: []string{"bootiso"},
basePartitionTables: edgeBasePartitionTables,
}
edgeAMIImgType = imageType{
name: "edge-ami",
filename: "image.raw",
mimeType: "application/octet-stream",
packageSets: nil,
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
},
defaultSize: 10 * common.GibiByte,
rpmOstree: true,
bootable: true,
bootISO: false,
image: edgeRawImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"ostree-deployment", "image"},
exports: []string{"image"},
basePartitionTables: edgeBasePartitionTables,
environment: &environment.EC2{},
}
// Shared Services
edgeServices = []string{
// TODO(runcom): move fdo-client-linuxapp.service to presets?
"NetworkManager.service", "firewalld.service", "sshd.service", "fdo-client-linuxapp.service",
}
// Partition tables
edgeBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte, // 1MB
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 127 * common.MebiByte, // 127 MB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 384 * common.MebiByte, // 384 MB
Type: disk.XBootLDRPartitionGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 1,
},
},
{
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LUKSContainer{
Label: "crypt_root",
Cipher: "cipher_null",
Passphrase: "osbuild",
PBKDF: disk.Argon2id{
Memory: 32,
Iterations: 4,
Parallelism: 1,
},
Clevis: &disk.ClevisBind{
Pin: "null",
Policy: "{}",
RemovePassphrase: true,
},
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 9 * 1024 * 1024 * 1024, // 9 GB
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 127 * common.MebiByte, // 127 MB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 384 * common.MebiByte, // 384 MB
Type: disk.XBootLDRPartitionGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 1,
FSTabPassNo: 1,
},
},
{
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.LUKSContainer{
Label: "crypt_root",
Cipher: "cipher_null",
Passphrase: "osbuild",
PBKDF: disk.Argon2id{
Memory: 32,
Iterations: 4,
Parallelism: 1,
},
Clevis: &disk.ClevisBind{
Pin: "null",
Policy: "{}",
RemovePassphrase: true,
},
Payload: &disk.LVMVolumeGroup{
Name: "rootvg",
Description: "built with lvm2 and osbuild",
LogicalVolumes: []disk.LVMLogicalVolume{
{
Size: 9 * 1024 * 1024 * 1024, // 9 GB
Name: "rootlv",
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
},
},
},
},
}
)
// Package Sets
// edge commit OS package set
func edgeCommitPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"redhat-release",
"glibc",
"glibc-minimal-langpack",
"nss-altfiles",
"dracut-config-generic",
"dracut-network",
"basesystem",
"bash",
"platform-python",
"shadow-utils",
"chrony",
"setup",
"shadow-utils",
"sudo",
"systemd",
"coreutils",
"util-linux",
"curl",
"vim-minimal",
"rpm",
"rpm-ostree",
"polkit",
"lvm2",
"cryptsetup",
"pinentry",
"e2fsprogs",
"dosfstools",
"keyutils",
"gnupg2",
"attr",
"xz",
"gzip",
"firewalld",
"iptables",
"NetworkManager",
"NetworkManager-wifi",
"NetworkManager-wwan",
"wpa_supplicant",
"dnsmasq",
"traceroute",
"hostname",
"iproute",
"iputils",
"openssh-clients",
"procps-ng",
"rootfiles",
"openssh-server",
"passwd",
"policycoreutils",
"policycoreutils-python-utils",
"selinux-policy-targeted",
"setools-console",
"less",
"tar",
"rsync",
"usbguard",
"bash-completion",
"tmux",
"ima-evm-utils",
"audit",
"podman",
"containernetworking-plugins", // required for cni networks but not a hard dependency of podman >= 4.2.0 (rhbz#2123210)
"container-selinux",
"skopeo",
"criu",
"slirp4netns",
"fuse-overlayfs",
"clevis",
"clevis-dracut",
"clevis-luks",
"greenboot",
"greenboot-default-health-checks",
"fdo-client",
"fdo-owner-cli",
"sos",
},
Exclude: []string{
"rng-tools",
},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(x8664EdgeCommitPackageSet(t))
case platform.ARCH_AARCH64.String():
ps = ps.Append(aarch64EdgeCommitPackageSet(t))
}
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || !common.VersionLessThan(t.arch.distro.osVersion, "9-stream") {
ps.Include = append(ps.Include, "ignition", "ignition-edge", "ssh-key-dir")
}
return ps
}
func x8664EdgeCommitPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"grub2",
"grub2-efi-x64",
"efibootmgr",
"shim-x64",
"microcode_ctl",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
},
}
}
func aarch64EdgeCommitPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"grub2-efi-aa64",
"efibootmgr",
"shim-aa64",
"iwl7260-firmware",
},
}
}
func edgeInstallerPackageSet(t *imageType) rpmmd.PackageSet {
return anacondaPackageSet(t)
}
func edgeSimplifiedInstallerPackageSet(t *imageType) rpmmd.PackageSet {
// common installer packages
ps := installerPackageSet(t)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"attr",
"basesystem",
"binutils",
"bsdtar",
"clevis-dracut",
"clevis-luks",
"cloud-utils-growpart",
"coreos-installer",
"coreos-installer-dracut",
"coreutils",
"device-mapper-multipath",
"dnsmasq",
"dosfstools",
"dracut-live",
"e2fsprogs",
"fcoe-utils",
"fdo-init",
"gzip",
"ima-evm-utils",
"iproute",
"iptables",
"iputils",
"iscsi-initiator-utils",
"keyutils",
"lldpad",
"lvm2",
"passwd",
"policycoreutils",
"policycoreutils-python-utils",
"procps-ng",
"redhat-logos",
"rootfiles",
"setools-console",
"sudo",
"traceroute",
"util-linux",
},
})
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(x8664EdgeCommitPackageSet(t))
case platform.ARCH_AARCH64.String():
ps = ps.Append(aarch64EdgeCommitPackageSet(t))
default:
panic(fmt.Sprintf("unsupported arch: %s", t.arch.Name()))
}
return ps
}


@ -0,0 +1,300 @@
package rhel9
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
const gceKernelOptions = "net.ifnames=0 biosdevname=0 scsi_mod.use_blk_mq=Y console=ttyS0,38400n8d"
var (
gceImgType = imageType{
name: "gce",
filename: "image.tar.gz",
mimeType: "application/gzip",
packageSets: map[string]packageSetFunc{
osPkgsKey: gcePackageSet,
},
kernelOptions: gceKernelOptions,
bootable: true,
defaultSize: 20 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "archive"},
exports: []string{"archive"},
// TODO: the base partition table still contains the BIOS boot partition, but the image is UEFI-only
basePartitionTables: defaultBasePartitionTables,
}
gceRhuiImgType = imageType{
name: "gce-rhui",
filename: "image.tar.gz",
mimeType: "application/gzip",
packageSets: map[string]packageSetFunc{
osPkgsKey: gceRhuiPackageSet,
},
kernelOptions: gceKernelOptions,
bootable: true,
defaultSize: 20 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "archive"},
exports: []string{"archive"},
// TODO: the base partition table still contains the BIOS boot partition, but the image is UEFI-only
basePartitionTables: defaultBasePartitionTables,
}
)
func mkGCEImageType(rhsm bool) imageType {
it := gceImgType
it.defaultImageConfig = baseGCEImageConfig(rhsm)
return it
}
func mkGCERHUIImageType(rhsm bool) imageType {
it := gceRhuiImgType
it.defaultImageConfig = defaultGceRhuiImageConfig(rhsm)
return it
}
func baseGCEImageConfig(rhsm bool) *distro.ImageConfig {
ic := &distro.ImageConfig{
Timezone: common.ToPtr("UTC"),
TimeSynchronization: &osbuild.ChronyStageOptions{
Servers: []osbuild.ChronyConfigServer{{Hostname: "metadata.google.internal"}},
},
Firewall: &osbuild.FirewallStageOptions{
DefaultZone: "trusted",
},
EnabledServices: []string{
"sshd",
"rngd",
"dnf-automatic.timer",
},
DisabledServices: []string{
"sshd-keygen@",
"reboot.target",
},
DefaultTarget: common.ToPtr("multi-user.target"),
Locale: common.ToPtr("en_US.UTF-8"),
Keyboard: &osbuild.KeymapStageOptions{
Keymap: "us",
},
DNFConfig: []*osbuild.DNFConfigStageOptions{
{
Config: &osbuild.DNFConfig{
Main: &osbuild.DNFConfigMain{
IPResolve: "4",
},
},
},
},
DNFAutomaticConfig: &osbuild.DNFAutomaticConfigStageOptions{
Config: &osbuild.DNFAutomaticConfig{
Commands: &osbuild.DNFAutomaticConfigCommands{
ApplyUpdates: common.ToPtr(true),
UpgradeType: osbuild.DNFAutomaticUpgradeTypeSecurity,
},
},
},
YUMRepos: []*osbuild.YumReposStageOptions{
{
Filename: "google-cloud.repo",
Repos: []osbuild.YumRepository{
{
Id: "google-compute-engine",
Name: "Google Compute Engine",
BaseURLs: []string{"https://packages.cloud.google.com/yum/repos/google-compute-engine-el9-x86_64-stable"},
Enabled: common.ToPtr(true),
// TODO: enable GPG check once Google stops using SHA-1 in their keys
// https://issuetracker.google.com/issues/223626963
GPGCheck: common.ToPtr(false),
RepoGPGCheck: common.ToPtr(false),
GPGKey: []string{
"https://packages.cloud.google.com/yum/doc/yum-key.gpg",
"https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg",
},
},
},
},
},
SshdConfig: &osbuild.SshdConfigStageOptions{
Config: osbuild.SshdConfigConfig{
PasswordAuthentication: common.ToPtr(false),
ClientAliveInterval: common.ToPtr(420),
PermitRootLogin: osbuild.PermitRootLoginValueNo,
},
},
Sysconfig: []*osbuild.SysconfigStageOptions{
{
Kernel: &osbuild.SysconfigKernelOptions{
DefaultKernel: "kernel-core",
UpdateDefault: true,
},
},
},
Modprobe: []*osbuild.ModprobeStageOptions{
{
Filename: "blacklist-floppy.conf",
Commands: osbuild.ModprobeConfigCmdList{
osbuild.NewModprobeConfigCmdBlacklist("floppy"),
},
},
},
GCPGuestAgentConfig: &osbuild.GcpGuestAgentConfigOptions{
ConfigScope: osbuild.GcpGuestAgentConfigScopeDistro,
Config: &osbuild.GcpGuestAgentConfig{
InstanceSetup: &osbuild.GcpGuestAgentConfigInstanceSetup{
SetBotoConfig: common.ToPtr(false),
},
},
},
}
if rhsm {
ic.RHSMConfig = map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// Don't disable RHSM redhat.repo management on the GCE
// image, which is BYOS and does not use RHUI for content.
// Otherwise, subscribing the system manually after booting
// it would result in an empty redhat.repo. Without RHUI, such
// a system would have no way to get Red Hat content unless
// the repo management were re-enabled manually, which would
// be very confusing.
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
}
}
return ic
}
func defaultGceRhuiImageConfig(rhsm bool) *distro.ImageConfig {
ic := &distro.ImageConfig{
RHSMConfig: map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
Rhsm: &osbuild.SubManConfigRHSMSection{
ManageRepos: common.ToPtr(false),
},
},
},
subscription.RHSMConfigWithSubscription: {
SubMan: &osbuild.RHSMStageOptionsSubMan{
Rhsmcertd: &osbuild.SubManConfigRHSMCERTDSection{
AutoRegistration: common.ToPtr(true),
},
// do not disable redhat.repo management if the user
// explicitly requests the system to be subscribed
},
},
},
}
return ic.InheritFrom(baseGCEImageConfig(rhsm))
}
func gceCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"langpacks-en", // not in Google's KS
"acpid",
"dhcp-client",
"dnf-automatic",
"net-tools",
//"openssh-server", included in core
"python3",
"rng-tools",
"tar",
"vim",
// GCE guest tools
"google-compute-engine",
"google-osconfig-agent",
"gce-disk-expand",
// Not explicitly included in GCP kickstart, but present on the image
// for time synchronization
"chrony",
"timedatex",
// EFI
"grub2-tools-efi",
"firewalld", // not pulled in any more as on RHEL-8
},
Exclude: []string{
"alsa-utils",
"b43-fwcutter",
"dmraid",
"eject",
"gpm",
"irqbalance",
"microcode_ctl",
"smartmontools",
"aic94xx-firmware",
"atmel-firmware",
"b43-openfwwf",
"bfa-firmware",
"ipw2100-firmware",
"ipw2200-firmware",
"ivtv-firmware",
"iwl100-firmware",
"iwl1000-firmware",
"iwl3945-firmware",
"iwl4965-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000-firmware",
"iwl6000g2a-firmware",
"iwl6050-firmware",
"kernel-firmware",
"libertas-usb8388-firmware",
"ql2100-firmware",
"ql2200-firmware",
"ql23xx-firmware",
"ql2400-firmware",
"ql2500-firmware",
"rt61pci-firmware",
"rt73usb-firmware",
"xorg-x11-drv-ati-firmware",
"zd1211-firmware",
// RHBZ#2075815
"qemu-guest-agent",
},
}.Append(coreOsCommonPackageSet(t)).Append(distroSpecificPackageSet(t))
// Some excluded packages are part of the @core group package set returned
// by coreOsCommonPackageSet(). Ensure that the conflicting packages are
// removed from the list of `Include` packages.
return ps.ResolveConflictsExclude()
}
// GCE BYOS image
func gcePackageSet(t *imageType) rpmmd.PackageSet {
return gceCommonPackageSet(t)
}
// GCE RHUI image
func gceRhuiPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"google-rhui-client-rhel9",
},
}.Append(gceCommonPackageSet(t))
}
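An illustrative sketch (not part of the commit) of the conflict resolution relied on by gceCommonPackageSet above; it assumes ResolveConflictsExclude drops any package listed in both Include and Exclude from the Include list, i.e. the exclusion wins:

func exampleResolveConflicts() rpmmd.PackageSet {
	ps := rpmmd.PackageSet{
		Include: []string{"chrony", "irqbalance"},
		Exclude: []string{"irqbalance"},
	}
	// Expected result under that assumption: Include == ["chrony"],
	// Exclude left unchanged.
	return ps.ResolveConflictsExclude()
}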


@ -0,0 +1,627 @@
package rhel9
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/fdo"
"github.com/osbuild/images/internal/ignition"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/internal/users"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/rpmmd"
)
func osCustomizations(
t *imageType,
osPackageSet rpmmd.PackageSet,
options distro.ImageOptions,
containers []container.SourceSpec,
c *blueprint.Customizations,
) manifest.OSCustomizations {
imageConfig := t.getDefaultImageConfig()
osc := manifest.OSCustomizations{}
if t.bootable || t.rpmOstree {
osc.KernelName = c.GetKernel().Name
var kernelOptions []string
if t.kernelOptions != "" {
kernelOptions = append(kernelOptions, t.kernelOptions)
}
if bpKernel := c.GetKernel(); bpKernel.Append != "" {
kernelOptions = append(kernelOptions, bpKernel.Append)
}
osc.KernelOptionsAppend = kernelOptions
}
osc.ExtraBasePackages = osPackageSet.Include
osc.ExcludeBasePackages = osPackageSet.Exclude
osc.ExtraBaseRepos = osPackageSet.Repositories
osc.Containers = containers
osc.GPGKeyFiles = imageConfig.GPGKeyFiles
if imageConfig.ExcludeDocs != nil {
osc.ExcludeDocs = *imageConfig.ExcludeDocs
}
if !t.bootISO {
// don't put users and groups in the payload of an installer
// add them via kickstart instead
osc.Groups = users.GroupsFromBP(c.GetGroups())
osc.Users = users.UsersFromBP(c.GetUsers())
}
osc.EnabledServices = imageConfig.EnabledServices
osc.DisabledServices = imageConfig.DisabledServices
if imageConfig.DefaultTarget != nil {
osc.DefaultTarget = *imageConfig.DefaultTarget
}
osc.Firewall = imageConfig.Firewall
if fw := c.GetFirewall(); fw != nil {
options := osbuild.FirewallStageOptions{
Ports: fw.Ports,
}
if fw.Services != nil {
options.EnabledServices = fw.Services.Enabled
options.DisabledServices = fw.Services.Disabled
}
if fw.Zones != nil {
for _, z := range fw.Zones {
options.Zones = append(options.Zones, osbuild.FirewallZone{
Name: *z.Name,
Sources: z.Sources,
})
}
}
osc.Firewall = &options
}
language, keyboard := c.GetPrimaryLocale()
if language != nil {
osc.Language = *language
} else if imageConfig.Locale != nil {
osc.Language = *imageConfig.Locale
}
if keyboard != nil {
osc.Keyboard = keyboard
} else if imageConfig.Keyboard != nil {
osc.Keyboard = &imageConfig.Keyboard.Keymap
if imageConfig.Keyboard.X11Keymap != nil {
osc.X11KeymapLayouts = imageConfig.Keyboard.X11Keymap.Layouts
}
}
if hostname := c.GetHostname(); hostname != nil {
osc.Hostname = *hostname
}
timezone, ntpServers := c.GetTimezoneSettings()
if timezone != nil {
osc.Timezone = *timezone
} else if imageConfig.Timezone != nil {
osc.Timezone = *imageConfig.Timezone
}
if len(ntpServers) > 0 {
for _, server := range ntpServers {
osc.NTPServers = append(osc.NTPServers, osbuild.ChronyConfigServer{Hostname: server})
}
} else if imageConfig.TimeSynchronization != nil {
osc.NTPServers = imageConfig.TimeSynchronization.Servers
osc.LeapSecTZ = imageConfig.TimeSynchronization.LeapsecTz
}
// Relabel the tree, unless the `NoSElinux` flag is explicitly set to `true`
if imageConfig.NoSElinux == nil || imageConfig.NoSElinux != nil && !*imageConfig.NoSElinux {
osc.SElinux = "targeted"
}
if oscapConfig := c.GetOpenSCAP(); oscapConfig != nil {
if t.rpmOstree {
panic("unexpected oscap options for ostree image type")
}
var datastream = oscapConfig.DataStream
if datastream == "" {
datastream = oscap.DefaultRHEL9Datastream(t.arch.distro.isRHEL())
}
osc.OpenSCAPConfig = osbuild.NewOscapRemediationStageOptions(
osbuild.OscapConfig{
Datastream: datastream,
ProfileID: oscapConfig.ProfileID,
},
)
}
if t.arch.distro.isRHEL() && options.Facts != nil {
osc.FactAPIType = &options.Facts.APIType
}
var err error
osc.Directories, err = blueprint.DirectoryCustomizationsToFsNodeDirectories(c.GetDirectories())
if err != nil {
// In theory this should never happen, because the blueprint directory customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert directory customizations to fs node directories: %v", err))
}
osc.Files, err = blueprint.FileCustomizationsToFsNodeFiles(c.GetFiles())
if err != nil {
// In theory this should never happen, because the blueprint file customizations
// should have been validated before this point.
panic(fmt.Sprintf("failed to convert file customizations to fs node files: %v", err))
}
// set the image config yum repos first; the blueprint's custom repos are
// appended below so they don't get overridden by imageConfig.YUMRepos
osc.YUMRepos = imageConfig.YUMRepos
customRepos, err := c.GetRepositories()
if err != nil {
// This shouldn't happen, since the repos
// should have already been validated
panic(fmt.Sprintf("failed to get custom repos: %v", err))
}
// This function returns a map of filenames to the corresponding yum repos
// and a list of fs node files for the inline GPG keys so we can save
// them to disk. This step also swaps each inline GPG key with the path
// to its file in the os file tree.
yumRepos, gpgKeyFiles, err := blueprint.RepoCustomizationsToRepoConfigAndGPGKeyFiles(customRepos)
if err != nil {
panic(fmt.Sprintf("failed to convert inline gpgkeys to fs node files: %v", err))
}
// add the gpg key files to the list of files to be added to the tree
if len(gpgKeyFiles) > 0 {
osc.Files = append(osc.Files, gpgKeyFiles...)
}
for filename, repos := range yumRepos {
osc.YUMRepos = append(osc.YUMRepos, osbuild.NewYumReposStageOptions(filename, repos))
}
osc.ShellInit = imageConfig.ShellInit
osc.Grub2Config = imageConfig.Grub2Config
osc.Sysconfig = imageConfig.Sysconfig
osc.SystemdLogind = imageConfig.SystemdLogind
osc.CloudInit = imageConfig.CloudInit
osc.Modprobe = imageConfig.Modprobe
osc.DracutConf = imageConfig.DracutConf
osc.SystemdUnit = imageConfig.SystemdUnit
osc.Authselect = imageConfig.Authselect
osc.SELinuxConfig = imageConfig.SELinuxConfig
osc.Tuned = imageConfig.Tuned
osc.Tmpfilesd = imageConfig.Tmpfilesd
osc.PamLimitsConf = imageConfig.PamLimitsConf
osc.Sysctld = imageConfig.Sysctld
osc.DNFConfig = imageConfig.DNFConfig
osc.DNFAutomaticConfig = imageConfig.DNFAutomaticConfig
osc.SshdConfig = imageConfig.SshdConfig
osc.AuthConfig = imageConfig.Authconfig
osc.PwQuality = imageConfig.PwQuality
osc.RHSMConfig = imageConfig.RHSMConfig
osc.Subscription = options.Subscription
osc.WAAgentConfig = imageConfig.WAAgentConfig
osc.UdevRules = imageConfig.UdevRules
osc.GCPGuestAgentConfig = imageConfig.GCPGuestAgentConfig
return osc
}
func liveImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewLiveImage()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.Compression = t.compression
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
return img, nil
}
func edgeCommitImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
parentCommit, commitRef := makeOSTreeParentCommit(options.OSTree, t.OSTreeRef())
img := image.NewOSTreeArchive(commitRef)
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.OSTreeParent = parentCommit
img.OSVersion = t.arch.distro.osVersion
img.Filename = t.Filename()
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || t.arch.distro.osVersion == "9-stream" {
img.OSCustomizations.EnabledServices = append(img.OSCustomizations.EnabledServices, "ignition-firstboot-complete.service", "coreos-ignition-write-issues.service")
}
return img, nil
}
func edgeContainerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
parentCommit, commitRef := makeOSTreeParentCommit(options.OSTree, t.OSTreeRef())
img := image.NewOSTreeContainer(commitRef)
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.ContainerLanguage = img.OSCustomizations.Language
img.Environment = t.environment
img.Workload = workload
img.OSTreeParent = parentCommit
img.OSVersion = t.arch.distro.osVersion
img.ExtraContainerPackages = packageSets[containerPkgsKey]
img.Filename = t.Filename()
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || t.arch.distro.osVersion == "9-stream" {
img.OSCustomizations.EnabledServices = append(img.OSCustomizations.EnabledServices, "ignition-firstboot-complete.service", "coreos-ignition-write-issues.service")
}
return img, nil
}
func edgeInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
d := t.arch.distro
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
img := image.NewAnacondaOSTreeInstaller(commit)
img.Platform = t.platform
img.ExtraBasePackages = packageSets[installerPkgsKey]
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.SquashfsCompression = "xz"
img.AdditionalDracutModules = []string{
"nvdimm", // non-volatile DIMM firmware (provides nfit, cuse, and nd_e820)
"prefixdevname",
"prefixdevname-tools",
}
img.AdditionalDrivers = []string{"cuse", "ipmi_devintf", "ipmi_msghandler"}
if len(img.Users)+len(img.Groups) > 0 {
// only enable the users module if needed
img.AdditionalAnacondaModules = []string{"org.fedoraproject.Anaconda.Modules.Users"}
}
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.Variant = "edge"
img.OSName = "rhel"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func edgeRawImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
img := image.NewOSTreeRawImage(commit)
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || t.arch.distro.osVersion == "9-stream" {
img.Ignition = true
}
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
// "rw" kernel option is required when /sysroot is mounted read-only to
// keep stateful parts of the filesystem writeable (/var/ and /etc)
img.KernelOptionsAppend = []string{"modprobe.blacklist=vc4"}
img.Keyboard = "us"
img.Locale = "C.UTF-8"
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || t.arch.distro.osVersion == "9-stream" {
img.SysrootReadOnly = true
img.KernelOptionsAppend = append(img.KernelOptionsAppend, "rw")
}
img.Platform = t.platform
img.Workload = workload
img.Remote = ostree.Remote{
Name: "rhel-edge",
URL: options.OSTree.URL,
ContentURL: options.OSTree.ContentURL,
}
img.OSName = "redhat"
if bpIgnition := customizations.GetIgnition(); bpIgnition != nil && bpIgnition.FirstBoot != nil && bpIgnition.FirstBoot.ProvisioningURL != "" {
img.KernelOptionsAppend = append(img.KernelOptionsAppend, "ignition.config.url="+bpIgnition.FirstBoot.ProvisioningURL)
}
// 9.2+ only
if kopts := customizations.GetKernel(); kopts != nil && kopts.Append != "" {
img.KernelOptionsAppend = append(img.KernelOptionsAppend, kopts.Append)
}
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
img.PartitionTable = pt
img.Filename = t.Filename()
img.Compression = t.compression
return img, nil
}
func edgeSimplifiedInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
commit, err := makeOSTreePayloadCommit(options.OSTree, t.OSTreeRef())
if err != nil {
return nil, fmt.Errorf("%s: %s", t.Name(), err.Error())
}
rawImg := image.NewOSTreeRawImage(commit)
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || t.arch.distro.osVersion == "9-stream" {
rawImg.Ignition = true
}
rawImg.Users = users.UsersFromBP(customizations.GetUsers())
rawImg.Groups = users.GroupsFromBP(customizations.GetGroups())
// "rw" kernel option is required when /sysroot is mounted read-only to
// keep stateful parts of the filesystem writeable (/var/ and /etc)
rawImg.KernelOptionsAppend = []string{"modprobe.blacklist=vc4"}
rawImg.Keyboard = "us"
rawImg.Locale = "C.UTF-8"
if !common.VersionLessThan(t.arch.distro.osVersion, "9.2") || t.arch.distro.osVersion == "9-stream" {
rawImg.SysrootReadOnly = true
rawImg.KernelOptionsAppend = append(rawImg.KernelOptionsAppend, "rw")
}
rawImg.Platform = t.platform
rawImg.Workload = workload
rawImg.Remote = ostree.Remote{
Name: "rhel-edge",
URL: options.OSTree.URL,
ContentURL: options.OSTree.ContentURL,
}
rawImg.OSName = "redhat"
// TODO: move generation into LiveImage
pt, err := t.getPartitionTable(customizations.GetFilesystems(), options, rng)
if err != nil {
return nil, err
}
rawImg.PartitionTable = pt
rawImg.Filename = t.Filename()
if bpIgnition := customizations.GetIgnition(); bpIgnition != nil && bpIgnition.FirstBoot != nil && bpIgnition.FirstBoot.ProvisioningURL != "" {
rawImg.KernelOptionsAppend = append(rawImg.KernelOptionsAppend, "ignition.config.url="+bpIgnition.FirstBoot.ProvisioningURL)
}
// 9.2+ only
if kopts := customizations.GetKernel(); kopts != nil && kopts.Append != "" {
rawImg.KernelOptionsAppend = append(rawImg.KernelOptionsAppend, kopts.Append)
}
img := image.NewOSTreeSimplifiedInstaller(rawImg, customizations.InstallationDevice)
img.ExtraBasePackages = packageSets[installerPkgsKey]
// img.Workload = workload
img.Platform = t.platform
img.Filename = t.Filename()
if bpFDO := customizations.GetFDO(); bpFDO != nil {
img.FDO = fdo.FromBP(*bpFDO)
}
// ignition configs from blueprint
if bpIgnition := customizations.GetIgnition(); bpIgnition != nil {
if bpIgnition.Embedded != nil {
var err error
img.IgnitionEmbedded, err = ignition.EmbeddedOptionsFromBP(*bpIgnition.Embedded)
if err != nil {
return nil, err
}
}
}
d := t.arch.distro
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.Variant = "edge"
img.OSName = "redhat"
img.OSVersion = d.osVersion
img.AdditionalDracutModules = []string{"prefixdevname", "prefixdevname-tools"}
return img, nil
}
func imageInstallerImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewAnacondaTarInstaller()
img.Platform = t.platform
img.Workload = workload
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.ExtraBasePackages = packageSets[installerPkgsKey]
img.Users = users.UsersFromBP(customizations.GetUsers())
img.Groups = users.GroupsFromBP(customizations.GetGroups())
img.AdditionalDracutModules = []string{
"nvdimm", // non-volatile DIMM firmware (provides nfit, cuse, and nd_e820)
"prefixdevname",
"prefixdevname-tools",
}
img.AdditionalDrivers = []string{"cuse", "ipmi_devintf", "ipmi_msghandler"}
img.AdditionalAnacondaModules = []string{"org.fedoraproject.Anaconda.Modules.Users"}
img.SquashfsCompression = "xz"
// put the kickstart file in the root of the iso
img.ISORootKickstart = true
d := t.arch.distro
img.ISOLabelTempl = d.isolabelTmpl
img.Product = d.product
img.OSName = "redhat"
img.OSVersion = d.osVersion
img.Release = fmt.Sprintf("%s %s", d.product, d.osVersion)
img.Filename = t.Filename()
return img, nil
}
func tarImage(workload workload.Workload,
t *imageType,
customizations *blueprint.Customizations,
options distro.ImageOptions,
packageSets map[string]rpmmd.PackageSet,
containers []container.SourceSpec,
rng *rand.Rand) (image.ImageKind, error) {
img := image.NewArchive()
img.Platform = t.platform
img.OSCustomizations = osCustomizations(t, packageSets[osPkgsKey], options, containers, customizations)
img.Environment = t.environment
img.Workload = workload
img.Filename = t.Filename()
return img, nil
}
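// Illustrative sketch, not part of the vendored source: every constructor above
// implements the imageFunc signature declared elsewhere in this package and is
// wired up through the image field of an imageType value. All field values
// below are placeholders chosen for the example.
var exampleTarImgType = imageType{
	name:     "example-tar",
	filename: "root.tar.xz",
	mimeType: "application/x-tar",
	packageSets: map[string]packageSetFunc{
		osPkgsKey: coreOsCommonPackageSet,
	},
	image:            tarImage,
	buildPipelines:   []string{"build"},
	payloadPipelines: []string{"os", "archive"},
	exports:          []string{"archive"},
}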
// Create an ostree SourceSpec to define an ostree parent commit using the user
// options and the default ref for the image type. Additionally returns the
// ref to be used for the new commit to be created.
func makeOSTreeParentCommit(options *ostree.ImageOptions, defaultRef string) (*ostree.SourceSpec, string) {
commitRef := defaultRef
if options == nil {
// nothing to do
return nil, commitRef
}
if options.ImageRef != "" {
// user option overrides default commit ref
commitRef = options.ImageRef
}
var parentCommit *ostree.SourceSpec
if options.URL == "" {
// no parent
return nil, commitRef
}
// ostree URL specified: set source spec for parent commit
parentRef := options.ParentRef
if parentRef == "" {
// parent ref not set: use image ref
parentRef = commitRef
}
parentCommit = &ostree.SourceSpec{
URL: options.URL,
Ref: parentRef,
RHSM: options.RHSM,
}
return parentCommit, commitRef
}
// Create an ostree SourceSpec to define an ostree payload using the user options and the default ref for the image type.
func makeOSTreePayloadCommit(options *ostree.ImageOptions, defaultRef string) (ostree.SourceSpec, error) {
if options == nil || options.URL == "" {
// this should be caught by checkOptions() in distro, but it's good
// to guard against it here as well
return ostree.SourceSpec{}, fmt.Errorf("ostree commit URL required")
}
commitRef := defaultRef
if options.ImageRef != "" {
// user option overrides default commit ref
commitRef = options.ImageRef
}
return ostree.SourceSpec{
URL: options.URL,
Ref: commitRef,
RHSM: options.RHSM,
}, nil
}
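// Illustrative sketch, not part of the vendored source: how the two helpers
// above resolve refs for a hypothetical set of user options. The URL and refs
// are placeholders.
func exampleOSTreeRefResolution() error {
	opts := &ostree.ImageOptions{
		URL:      "https://ostree.example.com/repo",
		ImageRef: "custom/ref",
		// ParentRef is empty, so the parent falls back to the image ref
	}
	parent, commitRef := makeOSTreeParentCommit(opts, "rhel/9/x86_64/edge")
	// commitRef == "custom/ref"; parent points at the same ref on the given URL
	fmt.Println(commitRef, parent.URL, parent.Ref)

	payload, err := makeOSTreePayloadCommit(opts, "rhel/9/x86_64/edge")
	if err != nil {
		return err
	}
	fmt.Println(payload.URL, payload.Ref) // also resolves to "custom/ref"
	return nil
}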


@ -0,0 +1,429 @@
package rhel9
import (
"fmt"
"log"
"math/rand"
"strings"
"golang.org/x/exp/slices"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/internal/oscap"
"github.com/osbuild/images/internal/pathpolicy"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/image"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
const (
// package set names
// build package set name
buildPkgsKey = "build"
// main/common os image package set name
osPkgsKey = "os"
// container package set name
containerPkgsKey = "container"
// installer package set name
installerPkgsKey = "installer"
// blueprint package set name
blueprintPkgsKey = "blueprint"
)
type imageFunc func(workload workload.Workload, t *imageType, customizations *blueprint.Customizations, options distro.ImageOptions, packageSets map[string]rpmmd.PackageSet, containers []container.SourceSpec, rng *rand.Rand) (image.ImageKind, error)
type packageSetFunc func(t *imageType) rpmmd.PackageSet
type imageType struct {
arch *architecture
platform platform.Platform
environment environment.Environment
workload workload.Workload
name string
nameAliases []string
filename string
compression string // TODO: remove from image definition and make it a transport option
mimeType string
packageSets map[string]packageSetFunc
defaultImageConfig *distro.ImageConfig
kernelOptions string
defaultSize uint64
buildPipelines []string
payloadPipelines []string
exports []string
image imageFunc
// bootISO: installable ISO
bootISO bool
// rpmOstree: edge/ostree
rpmOstree bool
// bootable image
bootable bool
// basePartitionTables: the base partition layout for each supported architecture
basePartitionTables distro.BasePartitionTableMap
}
func (t *imageType) Name() string {
return t.name
}
func (t *imageType) Arch() distro.Arch {
return t.arch
}
func (t *imageType) Filename() string {
return t.filename
}
func (t *imageType) MIMEType() string {
return t.mimeType
}
func (t *imageType) OSTreeRef() string {
d := t.arch.distro
if t.rpmOstree {
return fmt.Sprintf(d.ostreeRefTmpl, t.Arch().Name())
}
return ""
}
func (t *imageType) Size(size uint64) uint64 {
// Microsoft Azure requires vhd image sizes to be aligned to a whole MiB; round up if needed
if t.name == "vhd" && size%common.MebiByte != 0 {
size = (size/common.MebiByte + 1) * common.MebiByte
}
if size == 0 {
size = t.defaultSize
}
return size
}
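// Illustrative sketch, not part of the vendored source: the VHD rounding above
// in action. A size request that is not MiB-aligned is rounded up to the next
// whole MiB for the "vhd" type; other types return the requested size as-is.
func exampleVhdSize(t *imageType) uint64 {
	requested := uint64(4*common.GibiByte + 1)
	return t.Size(requested) // 4 GiB + 1 MiB when t.name == "vhd"
}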
func (t *imageType) BuildPipelines() []string {
return t.buildPipelines
}
func (t *imageType) PayloadPipelines() []string {
return t.payloadPipelines
}
func (t *imageType) PayloadPackageSets() []string {
return []string{blueprintPkgsKey}
}
func (t *imageType) PackageSetsChains() map[string][]string {
return nil
}
func (t *imageType) Exports() []string {
if len(t.exports) > 0 {
return t.exports
}
return []string{"assembler"}
}
func (t *imageType) BootMode() distro.BootMode {
if t.platform.GetUEFIVendor() != "" && t.platform.GetBIOSPlatform() != "" {
return distro.BOOT_HYBRID
} else if t.platform.GetUEFIVendor() != "" {
return distro.BOOT_UEFI
} else if t.platform.GetBIOSPlatform() != "" || t.platform.GetZiplSupport() {
return distro.BOOT_LEGACY
}
return distro.BOOT_NONE
}
func (t *imageType) getPartitionTable(
mountpoints []blueprint.FilesystemCustomization,
options distro.ImageOptions,
rng *rand.Rand,
) (*disk.PartitionTable, error) {
archName := t.arch.Name()
basePartitionTable, exists := t.basePartitionTables[archName]
if !exists {
return nil, fmt.Errorf("no partition table defined for architecture %q for image type %q", archName, t.Name())
}
imageSize := t.Size(options.Size)
lvmify := !t.rpmOstree
return disk.NewPartitionTable(&basePartitionTable, mountpoints, imageSize, lvmify, nil, rng)
}
func (t *imageType) getDefaultImageConfig() *distro.ImageConfig {
// ensure that image always returns non-nil default config
imageConfig := t.defaultImageConfig
if imageConfig == nil {
imageConfig = &distro.ImageConfig{}
}
return imageConfig.InheritFrom(t.arch.distro.getDefaultImageConfig())
}
func (t *imageType) PartitionType() string {
archName := t.arch.Name()
basePartitionTable, exists := t.basePartitionTables[archName]
if !exists {
return ""
}
return basePartitionTable.Type
}
func (t *imageType) Manifest(bp *blueprint.Blueprint,
options distro.ImageOptions,
repos []rpmmd.RepoConfig,
seed int64) (*manifest.Manifest, []string, error) {
warnings, err := t.checkOptions(bp, options)
if err != nil {
return nil, nil, err
}
// merge package sets that appear in the image type with the package sets
// of the same name from the distro and arch
staticPackageSets := make(map[string]rpmmd.PackageSet)
for name, getter := range t.packageSets {
staticPackageSets[name] = getter(t)
}
// amend with repository information and collect payload repos
payloadRepos := make([]rpmmd.RepoConfig, 0)
for _, repo := range repos {
if len(repo.PackageSets) > 0 {
// only apply the repo to the listed package sets
for _, psName := range repo.PackageSets {
if slices.Contains(t.PayloadPackageSets(), psName) {
payloadRepos = append(payloadRepos, repo)
}
ps := staticPackageSets[psName]
ps.Repositories = append(ps.Repositories, repo)
staticPackageSets[psName] = ps
}
}
}
w := t.workload
if w == nil {
cw := &workload.Custom{
BaseWorkload: workload.BaseWorkload{
Repos: payloadRepos,
},
Packages: bp.GetPackagesEx(false),
}
if services := bp.Customizations.GetServices(); services != nil {
cw.Services = services.Enabled
cw.DisabledServices = services.Disabled
}
w = cw
}
containerSources := make([]container.SourceSpec, len(bp.Containers))
for idx := range bp.Containers {
containerSources[idx] = container.SourceSpec(bp.Containers[idx])
}
source := rand.NewSource(seed)
// math/rand is good enough in this case
/* #nosec G404 */
rng := rand.New(source)
img, err := t.image(w, t, bp.Customizations, options, staticPackageSets, containerSources, rng)
if err != nil {
return nil, nil, err
}
mf := manifest.New()
mf.Distro = manifest.DISTRO_EL9
_, err = img.InstantiateManifest(&mf, repos, t.arch.distro.runner, rng)
if err != nil {
return nil, nil, err
}
return &mf, warnings, err
}
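// Illustrative sketch, not part of the vendored source: driving the Manifest
// method above through the public distro interface. The blueprint, repository
// list, size, and seed are placeholders.
func exampleManifestCall(t *imageType, repos []rpmmd.RepoConfig) (*manifest.Manifest, []string, error) {
	bp := &blueprint.Blueprint{}
	options := distro.ImageOptions{Size: 10 * common.GibiByte}
	return t.Manifest(bp, options, repos, 0)
}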
// checkOptions checks the validity and compatibility of options and customizations for the image type.
// Returns ([]string, error) where []string, if non-nil, will hold any generated warnings (e.g. deprecation notices).
func (t *imageType) checkOptions(bp *blueprint.Blueprint, options distro.ImageOptions) ([]string, error) {
customizations := bp.Customizations
// holds warnings (e.g. deprecation notices)
var warnings []string
if t.workload != nil {
// For now, if an image type defines its own workload, don't allow any
// user customizations.
// Soon we will have more workflows and each will define its allowed
// set of customizations. The current set of customizations defined in
// the blueprint spec corresponds to the Custom workflow.
if customizations != nil {
return warnings, fmt.Errorf("image type %q does not support customizations", t.name)
}
}
// we do not support embedding containers on ostree-derived images, only on commits themselves
if len(bp.Containers) > 0 && t.rpmOstree && (t.name != "edge-commit" && t.name != "edge-container") {
return warnings, fmt.Errorf("embedding containers is not supported for %s on %s", t.name, t.arch.distro.name)
}
ostreeURL := ""
if options.OSTree != nil {
if options.OSTree.ParentRef != "" && options.OSTree.URL == "" {
// specifying parent ref also requires URL
return nil, ostree.NewParameterComboError("ostree parent ref specified, but no URL to retrieve it")
}
ostreeURL = options.OSTree.URL
}
if t.bootISO && t.rpmOstree {
// ostree-based ISOs require a URL from which to pull a payload commit
if ostreeURL == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying a URL from which to retrieve the OSTree commit", t.name)
}
if t.name == "edge-simplified-installer" {
allowed := []string{"InstallationDevice", "FDO", "Ignition", "Kernel", "User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf("unsupported blueprint customizations found for boot ISO image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
if customizations.GetInstallationDevice() == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying an installation device to install to", t.name)
}
// FDO is optional, but when specified has some restrictions
if customizations.GetFDO() != nil {
if customizations.GetFDO().ManufacturingServerURL == "" {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying FDO.ManufacturingServerURL configuration to install to when using FDO", t.name)
}
var diunSet int
if customizations.GetFDO().DiunPubKeyHash != "" {
diunSet++
}
if customizations.GetFDO().DiunPubKeyInsecure != "" {
diunSet++
}
if customizations.GetFDO().DiunPubKeyRootCerts != "" {
diunSet++
}
if diunSet != 1 {
return warnings, fmt.Errorf("boot ISO image type %q requires specifying one of [FDO.DiunPubKeyHash,FDO.DiunPubKeyInsecure,FDO.DiunPubKeyRootCerts] configuration to install to when using FDO", t.name)
}
}
// ignition is optional, we might be using FDO
if customizations.GetIgnition() != nil {
if customizations.GetIgnition().Embedded != nil && customizations.GetIgnition().FirstBoot != nil {
return warnings, fmt.Errorf("both ignition embedded and firstboot configurations found")
}
if customizations.GetIgnition().FirstBoot != nil && customizations.GetIgnition().FirstBoot.ProvisioningURL == "" {
return warnings, fmt.Errorf("ignition.firstboot requires a provisioning url")
}
}
} else if t.name == "edge-installer" {
allowed := []string{"User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf("unsupported blueprint customizations found for boot ISO image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
}
}
if t.name == "edge-raw-image" || t.name == "edge-ami" {
// ostree-based bootable images require a URL from which to pull a payload commit
if ostreeURL == "" {
return warnings, fmt.Errorf("%q images require specifying a URL from which to retrieve the OSTree commit", t.name)
}
allowed := []string{"Ignition", "Kernel", "User", "Group"}
if err := customizations.CheckAllowed(allowed...); err != nil {
return warnings, fmt.Errorf("unsupported blueprint customizations found for image type %q: (allowed: %s)", t.name, strings.Join(allowed, ", "))
}
// TODO: consider additional checks, such as those in "edge-simplified-installer"
}
// warn that user & group customizations on edge-commit, edge-container are deprecated
// TODO(edge): directly error if these options are provided when rhel-9.5's time arrives
if t.name == "edge-commit" || t.name == "edge-container" {
if customizations.GetUsers() != nil {
w := fmt.Sprintf("Please note that user customizations on %q image type are deprecated and will be removed in the near future\n", t.name)
log.Print(w)
warnings = append(warnings, w)
}
if customizations.GetGroups() != nil {
w := fmt.Sprintf("Please note that group customizations on %q image type are deprecated and will be removed in the near future\n", t.name)
log.Print(w)
warnings = append(warnings, w)
}
}
if kernelOpts := customizations.GetKernel(); kernelOpts.Append != "" && t.rpmOstree && t.name != "edge-raw-image" && t.name != "edge-simplified-installer" {
return warnings, fmt.Errorf("kernel boot parameter customizations are not supported for ostree types")
}
mountpoints := customizations.GetFilesystems()
if mountpoints != nil && t.rpmOstree {
return warnings, fmt.Errorf("Custom mountpoints are not supported for ostree types")
}
err := blueprint.CheckMountpointsPolicy(mountpoints, pathpolicy.MountpointPolicies)
if err != nil {
return warnings, err
}
if osc := customizations.GetOpenSCAP(); osc != nil {
if t.arch.distro.osVersion == "9.0" {
return warnings, fmt.Errorf("OpenSCAP unsupported os version: %s", t.arch.distro.osVersion)
}
if !oscap.IsProfileAllowed(osc.ProfileID, oscapProfileAllowList) {
return warnings, fmt.Errorf("OpenSCAP unsupported profile: %s", osc.ProfileID)
}
if t.rpmOstree {
return warnings, fmt.Errorf("OpenSCAP customizations are not supported for ostree types")
}
if osc.ProfileID == "" {
return warnings, fmt.Errorf("OpenSCAP profile cannot be empty")
}
}
// Check Directory/File Customizations are valid
dc := customizations.GetDirectories()
fc := customizations.GetFiles()
err = blueprint.ValidateDirFileCustomizations(dc, fc)
if err != nil {
return warnings, err
}
err = blueprint.CheckDirectoryCustomizationsPolicy(dc, pathpolicy.CustomDirectoriesPolicies)
if err != nil {
return warnings, err
}
err = blueprint.CheckFileCustomizationsPolicy(fc, pathpolicy.CustomFilesPolicies)
if err != nil {
return warnings, err
}
// check if repository customizations are valid
_, err = customizations.GetRepositories()
if err != nil {
return warnings, err
}
return warnings, nil
}


@ -0,0 +1,247 @@
package rhel9
// This file defines package sets that are used by more than one image type.
import (
"fmt"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
// BUILD PACKAGE SETS
// distro-wide build package set
func distroBuildPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"dnf",
"dosfstools",
"e2fsprogs",
"glibc",
"lorax-templates-generic",
"lorax-templates-rhel",
"lvm2",
"policycoreutils",
"python3-iniparse",
"qemu-img",
"selinux-policy-targeted",
"systemd",
"tar",
"xfsprogs",
"xz",
},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(x8664BuildPackageSet(t))
case platform.ARCH_PPC64LE.String():
ps = ps.Append(ppc64leBuildPackageSet(t))
}
return ps
}
// x86_64 build package set
func x8664BuildPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"grub2-pc",
},
}
}
// ppc64le build package set
func ppc64leBuildPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
"grub2-ppc64le",
"grub2-ppc64le-modules",
},
}
}
// Installer boot package set: packages needed for booting the installer
// and also on the build host
func anacondaBootPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{}
grubCommon := rpmmd.PackageSet{
Include: []string{
"grub2-tools",
"grub2-tools-extra",
"grub2-tools-minimal",
},
}
efiCommon := rpmmd.PackageSet{
Include: []string{
"efibootmgr",
},
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(grubCommon)
ps = ps.Append(efiCommon)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"grub2-efi-x64",
"grub2-efi-x64-cdboot",
"grub2-pc",
"grub2-pc-modules",
"shim-x64",
"syslinux",
"syslinux-nonlinux",
},
})
case platform.ARCH_AARCH64.String():
ps = ps.Append(grubCommon)
ps = ps.Append(efiCommon)
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"grub2-efi-aa64-cdboot",
"grub2-efi-aa64",
"shim-aa64",
},
})
default:
panic(fmt.Sprintf("unsupported arch: %s", t.arch.Name()))
}
return ps
}
// OS package sets
// Replacement of the previously used @core package group
func coreOsCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"audit",
"basesystem",
"bash",
"coreutils",
"cronie",
"crypto-policies",
"crypto-policies-scripts",
"curl",
"dnf",
"yum",
"e2fsprogs",
"filesystem",
"glibc",
"grubby",
"hostname",
"iproute",
"iproute-tc",
"iputils",
"kbd",
"kexec-tools",
"less",
"logrotate",
"man-db",
"ncurses",
"openssh-clients",
"openssh-server",
"p11-kit",
"parted",
"passwd",
"policycoreutils",
"procps-ng",
"rootfiles",
"rpm",
"rpm-plugin-audit",
"rsyslog",
"selinux-policy-targeted",
"setup",
"shadow-utils",
"sssd-common",
"sssd-kcm",
"sudo",
"systemd",
"tuned",
"util-linux",
"vim-minimal",
"xfsprogs",
"authselect",
"prefixdevname",
"dnf-plugins-core",
"NetworkManager",
"NetworkManager-team",
"NetworkManager-tui",
"libsysfs",
"linux-firmware",
"lshw",
"lsscsi",
"kernel-tools",
"sg3_utils",
"sg3_utils-libs",
"python3-libselinux",
},
}
// Do not move this into distroSpecificPackageSet for now, because that
// set also carries 'insights-client', which is not installed by default
// on all RHEL images (although it would probably make sense).
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"subscription-manager",
},
})
}
switch t.arch.Name() {
case platform.ARCH_X86_64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"irqbalance",
"microcode_ctl",
},
})
case platform.ARCH_AARCH64.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"irqbalance",
},
})
case platform.ARCH_PPC64LE.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"irqbalance",
"opal-prd",
"ppc64-diag-rtas",
"powerpc-utils-core",
"lsvpd",
},
})
case platform.ARCH_S390X.String():
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"s390utils-core",
},
})
}
return ps
}
// packages that are only in some (sub)-distributions
func distroSpecificPackageSet(t *imageType) rpmmd.PackageSet {
if t.arch.distro.isRHEL() {
return rpmmd.PackageSet{
Include: []string{
"insights-client",
},
}
}
return rpmmd.PackageSet{}
}
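// Illustrative sketch, not part of the vendored source: package sets compose by
// appending, which is how image-type sets layer extra packages on top of the
// common core set defined above. The extra package name is a placeholder.
func exampleComposedPackageSet(t *imageType) rpmmd.PackageSet {
	extra := rpmmd.PackageSet{Include: []string{"vim-enhanced"}}
	return coreOsCommonPackageSet(t).Append(extra).Append(distroSpecificPackageSet(t))
}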


@ -0,0 +1,169 @@
package rhel9
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
)
var defaultBasePartitionTables = distro.BasePartitionTableMap{
platform.ARCH_X86_64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 1 * common.MebiByte, // 1MB
Bootable: true,
Type: disk.BIOSBootPartitionGUID,
UUID: disk.BIOSBootPartitionUUID,
},
{
Size: 200 * common.MebiByte, // 200 MB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte, // 500 MB
Type: disk.XBootLDRPartitionGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_AARCH64.String(): disk.PartitionTable{
UUID: "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
Type: "gpt",
Partitions: []disk.Partition{
{
Size: 200 * common.MebiByte, // 200 MB
Type: disk.EFISystemPartitionGUID,
UUID: disk.EFISystemPartitionUUID,
Payload: &disk.Filesystem{
Type: "vfat",
UUID: disk.EFIFilesystemUUID,
Mountpoint: "/boot/efi",
Label: "EFI-SYSTEM",
FSTabOptions: "defaults,uid=0,gid=0,umask=077,shortname=winnt",
FSTabFreq: 0,
FSTabPassNo: 2,
},
},
{
Size: 500 * common.MebiByte, // 500 MB
Type: disk.XBootLDRPartitionGUID,
UUID: disk.FilesystemDataUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Type: disk.FilesystemDataGUID,
UUID: disk.RootPartitionUUID,
Payload: &disk.Filesystem{
Type: "xfs",
Label: "root",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_PPC64LE.String(): disk.PartitionTable{
UUID: "0x14fc63d2",
Type: "dos",
Partitions: []disk.Partition{
{
Size: 4 * common.MebiByte,
Type: "41",
Bootable: true,
},
{
Size: 500 * common.MebiByte, // 500 MB
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
platform.ARCH_S390X.String(): disk.PartitionTable{
UUID: "0x14fc63d2",
Type: "dos",
Partitions: []disk.Partition{
{
Size: 500 * common.MebiByte, // 500 MB
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/boot",
Label: "boot",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
{
Size: 2 * common.GibiByte, // 2GiB
Bootable: true,
Payload: &disk.Filesystem{
Type: "xfs",
Mountpoint: "/",
FSTabOptions: "defaults",
FSTabFreq: 0,
FSTabPassNo: 0,
},
},
},
},
}
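// Illustrative sketch, not part of the vendored source: the table above is the
// starting point that imageType.getPartitionTable (defined elsewhere in this
// package) looks up by architecture and hands to disk.NewPartitionTable,
// together with any blueprint filesystem customizations. This assumes math/rand
// and the distro package are imported where the sketch lives.
func examplePartitionTable(t *imageType, rng *rand.Rand) (*disk.PartitionTable, error) {
	options := distro.ImageOptions{Size: 10 * common.GibiByte}
	return t.getPartitionTable(nil, options, rng)
}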


@ -0,0 +1,173 @@
package rhel9
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/subscription"
)
var (
openstackImgType = imageType{
name: "openstack",
filename: "disk.qcow2",
mimeType: "application/x-qemu-disk",
packageSets: map[string]packageSetFunc{
osPkgsKey: openstackCommonPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
},
kernelOptions: "ro net.ifnames=0",
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "qcow2"},
exports: []string{"qcow2"},
basePartitionTables: defaultBasePartitionTables,
}
)
func qcow2CommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"authselect-compat",
"chrony",
"cloud-init",
"cloud-utils-growpart",
"cockpit-system",
"cockpit-ws",
"dnf-utils",
"dosfstools",
"nfs-utils",
"oddjob",
"oddjob-mkhomedir",
"psmisc",
"python3-jsonschema",
"qemu-guest-agent",
"redhat-release",
"redhat-release-eula",
"rsync",
"tar",
"tcpdump",
},
Exclude: []string{
"aic94xx-firmware",
"alsa-firmware",
"alsa-lib",
"alsa-tools-firmware",
"biosdevname",
"dnf-plugin-spacewalk",
"fedora-release",
"fedora-repos",
"iprutils",
"ivtv-firmware",
"langpacks-*",
"langpacks-en",
"libertas-sd8787-firmware",
"nss",
"plymouth",
"rng-tools",
"udisks2",
},
}.Append(coreOsCommonPackageSet(t)).Append(distroSpecificPackageSet(t))
// Only pull in subscription-manager-cockpit on RHEL distributions
if t.arch.distro.isRHEL() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
"subscription-manager-cockpit",
},
})
}
return ps
}
func openstackCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
// Defaults
"langpacks-en",
"firewalld",
// From the lorax kickstart
"cloud-init",
"qemu-guest-agent",
"spice-vdagent",
},
Exclude: []string{
"rng-tools",
},
}.Append(coreOsCommonPackageSet(t))
if t.arch.Name() == platform.ARCH_X86_64.String() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
// The packages below used to come from the @core group and were not
// excluded. They may not be needed at all, but keeping them here avoids
// having to exclude them in all other image types instead.
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl1000-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000g2a-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
},
})
}
return ps
}
func qcowImageConfig(d distribution) *distro.ImageConfig {
ic := &distro.ImageConfig{
DefaultTarget: common.ToPtr("multi-user.target"),
}
if d.isRHEL() {
ic.RHSMConfig = map[subscription.RHSMStatus]*osbuild.RHSMStageOptions{
subscription.RHSMConfigNoSubscription: {
DnfPlugins: &osbuild.RHSMStageOptionsDnfPlugins{
ProductID: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
SubscriptionManager: &osbuild.RHSMStageOptionsDnfPlugin{
Enabled: false,
},
},
},
}
}
return ic
}
func mkQcow2ImgType(d distribution) imageType {
it := imageType{
name: "qcow2",
filename: "disk.qcow2",
mimeType: "application/x-qemu-disk",
kernelOptions: "console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0",
packageSets: map[string]packageSetFunc{
osPkgsKey: qcow2CommonPackageSet,
},
bootable: true,
defaultSize: 10 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "qcow2"},
exports: []string{"qcow2"},
basePartitionTables: defaultBasePartitionTables,
}
it.defaultImageConfig = qcowImageConfig(d)
return it
}


@ -0,0 +1,176 @@
package rhel9
import (
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/osbuild"
"github.com/osbuild/images/pkg/rpmmd"
)
// sapImageConfig returns the SAP specific ImageConfig data
func sapImageConfig(osVersion string) *distro.ImageConfig {
return &distro.ImageConfig{
SELinuxConfig: &osbuild.SELinuxConfigStageOptions{
State: osbuild.SELinuxStatePermissive,
},
// RHBZ#1960617
Tuned: osbuild.NewTunedStageOptions("sap-hana"),
// RHBZ#1959979
Tmpfilesd: []*osbuild.TmpfilesdStageOptions{
osbuild.NewTmpfilesdStageOptions("sap.conf",
[]osbuild.TmpfilesdConfigLine{
{
Type: "x",
Path: "/tmp/.sap*",
},
{
Type: "x",
Path: "/tmp/.hdb*lock",
},
{
Type: "x",
Path: "/tmp/.trex*lock",
},
},
),
},
// RHBZ#1959963
PamLimitsConf: []*osbuild.PamLimitsConfStageOptions{
osbuild.NewPamLimitsConfStageOptions("99-sap.conf",
[]osbuild.PamLimitsConfigLine{
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNofile,
Value: osbuild.PamLimitsValueInt(1048576),
},
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
{
Domain: "@sapsys",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeHard,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
{
Domain: "@dba",
Type: osbuild.PamLimitsTypeSoft,
Item: osbuild.PamLimitsItemNproc,
Value: osbuild.PamLimitsValueUnlimited,
},
},
),
},
// RHBZ#1959962
Sysctld: []*osbuild.SysctldStageOptions{
osbuild.NewSysctldStageOptions("sap.conf",
[]osbuild.SysctldConfigLine{
{
Key: "kernel.pid_max",
Value: "4194304",
},
{
Key: "vm.max_map_count",
Value: "2147483647",
},
},
),
},
// E4S/EUS
DNFConfig: []*osbuild.DNFConfigStageOptions{
osbuild.NewDNFConfigStageOptions(
[]osbuild.DNFVariable{
{
Name: "releasever",
Value: osVersion,
},
},
nil,
),
},
}
}
func SapPackageSet(t *imageType) rpmmd.PackageSet {
return rpmmd.PackageSet{
Include: []string{
// RHBZ#2076763
"@Server",
// SAP System Roles
// https://access.redhat.com/sites/default/files/attachments/rhel_system_roles_for_sap_1.pdf
"ansible-core",
"rhel-system-roles-sap",
// RHBZ#1959813
"bind-utils",
"nfs-utils",
"tcsh",
// RHBZ#1959955
"uuidd",
// RHBZ#1959923
"cairo",
"expect",
"graphviz",
"gtk2",
"iptraf-ng",
"krb5-workstation",
"libaio",
"libatomic",
"libcanberra-gtk2",
"libicu",
"libtool-ltdl",
"lm_sensors",
"net-tools",
"numactl",
"PackageKit-gtk3-module",
"xorg-x11-xauth",
// RHBZ#1960617
"tuned-profiles-sap-hana",
// RHBZ#1961168
"libnsl",
},
Exclude: []string{
// COMPOSER-1829
"firewalld",
"iwl1000-firmware",
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000g2a-firmware",
"iwl6000g2b-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
},
}
}


@ -0,0 +1,89 @@
package rhel9
import (
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
)
const vmdkKernelOptions = "ro net.ifnames=0"
var vmdkImgType = imageType{
name: "vmdk",
filename: "disk.vmdk",
mimeType: "application/x-vmdk",
packageSets: map[string]packageSetFunc{
osPkgsKey: vmdkCommonPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
},
kernelOptions: vmdkKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vmdk"},
exports: []string{"vmdk"},
basePartitionTables: defaultBasePartitionTables,
}
var ovaImgType = imageType{
name: "ova",
filename: "image.ova",
mimeType: "application/ovf",
packageSets: map[string]packageSetFunc{
osPkgsKey: vmdkCommonPackageSet,
},
defaultImageConfig: &distro.ImageConfig{
Locale: common.ToPtr("en_US.UTF-8"),
},
kernelOptions: vmdkKernelOptions,
bootable: true,
defaultSize: 4 * common.GibiByte,
image: liveImage,
buildPipelines: []string{"build"},
payloadPipelines: []string{"os", "image", "vmdk", "ovf", "archive"},
exports: []string{"archive"},
basePartitionTables: defaultBasePartitionTables,
}
func vmdkCommonPackageSet(t *imageType) rpmmd.PackageSet {
ps := rpmmd.PackageSet{
Include: []string{
"chrony",
"cloud-init",
"firewalld",
"langpacks-en",
"open-vm-tools",
},
Exclude: []string{
"rng-tools",
},
}.Append(coreOsCommonPackageSet(t))
if t.arch.Name() == platform.ARCH_X86_64.String() {
ps = ps.Append(rpmmd.PackageSet{
Include: []string{
// The packages below used to come from the @core group and were not
// excluded. They may not be needed at all, but keeping them here avoids
// having to exclude them in all other image types instead.
"iwl100-firmware",
"iwl105-firmware",
"iwl135-firmware",
"iwl1000-firmware",
"iwl2000-firmware",
"iwl2030-firmware",
"iwl3160-firmware",
"iwl5000-firmware",
"iwl5150-firmware",
"iwl6000g2a-firmware",
"iwl6050-firmware",
"iwl7260-firmware",
},
})
}
return ps
}


@ -0,0 +1,435 @@
package test_distro
import (
"crypto/sha256"
"errors"
"fmt"
"sort"
dnfjson_mock "github.com/osbuild/images/internal/mocks/dnfjson"
"github.com/osbuild/images/pkg/blueprint"
"github.com/osbuild/images/pkg/container"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distroregistry"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/rpmmd"
)
const (
// package set names
// build package set name
buildPkgsKey = "build"
// main/common os image package set name
osPkgsKey = "os"
// blueprint package set name
blueprintPkgsKey = "blueprint"
)
type TestDistro struct {
name string
releasever string
modulePlatformID string
ostreeRef string
arches map[string]distro.Arch
}
type TestArch struct {
distribution *TestDistro
name string
imageTypes map[string]distro.ImageType
}
type TestImageType struct {
architecture *TestArch
name string
}
const (
TestDistroName = "test-distro"
TestDistro2Name = "test-distro-2"
TestDistroReleasever = "1"
TestDistro2Releasever = "2"
TestDistroModulePlatformID = "platform:test"
TestDistro2ModulePlatformID = "platform:test-2"
TestArchName = "test_arch"
TestArch2Name = "test_arch2"
TestArch3Name = "test_arch3"
TestImageTypeName = "test_type"
TestImageType2Name = "test_type2"
TestImageTypeOSTree = "test_ostree_type"
// added for cloudapi tests
TestImageTypeAmi = "ami"
TestImageTypeGce = "gce"
TestImageTypeVhd = "vhd"
TestImageTypeEdgeCommit = "rhel-edge-commit"
TestImageTypeEdgeInstaller = "rhel-edge-installer"
TestImageTypeImageInstaller = "image-installer"
TestImageTypeQcow2 = "qcow2"
TestImageTypeVmdk = "vmdk"
)
// TestDistro
func (d *TestDistro) Name() string {
return d.name
}
func (d *TestDistro) Releasever() string {
return d.releasever
}
func (d *TestDistro) ModulePlatformID() string {
return d.modulePlatformID
}
func (d *TestDistro) OSTreeRef() string {
return d.ostreeRef
}
func (d *TestDistro) ListArches() []string {
archs := make([]string, 0, len(d.arches))
for name := range d.arches {
archs = append(archs, name)
}
sort.Strings(archs)
return archs
}
func (d *TestDistro) GetArch(arch string) (distro.Arch, error) {
a, exists := d.arches[arch]
if !exists {
return nil, errors.New("invalid arch: " + arch)
}
return a, nil
}
func (d *TestDistro) addArches(arches ...*TestArch) {
if d.arches == nil {
d.arches = map[string]distro.Arch{}
}
for _, a := range arches {
a.distribution = d
d.arches[a.Name()] = a
}
}
// TestArch
func (a *TestArch) Name() string {
return a.name
}
func (a *TestArch) Distro() distro.Distro {
return a.distribution
}
func (a *TestArch) ListImageTypes() []string {
formats := make([]string, 0, len(a.imageTypes))
for name := range a.imageTypes {
formats = append(formats, name)
}
sort.Strings(formats)
return formats
}
func (a *TestArch) GetImageType(imageType string) (distro.ImageType, error) {
t, exists := a.imageTypes[imageType]
if !exists {
return nil, errors.New("invalid image type: " + imageType)
}
return t, nil
}
func (a *TestArch) addImageTypes(imageTypes ...TestImageType) {
if a.imageTypes == nil {
a.imageTypes = map[string]distro.ImageType{}
}
for idx := range imageTypes {
it := imageTypes[idx]
it.architecture = a
a.imageTypes[it.Name()] = &it
}
}
// TestImageType
func (t *TestImageType) Name() string {
return t.name
}
func (t *TestImageType) Arch() distro.Arch {
return t.architecture
}
func (t *TestImageType) Filename() string {
return "test.img"
}
func (t *TestImageType) MIMEType() string {
return "application/x-test"
}
func (t *TestImageType) OSTreeRef() string {
if t.name == TestImageTypeEdgeCommit || t.name == TestImageTypeEdgeInstaller || t.name == TestImageTypeOSTree {
return t.architecture.distribution.OSTreeRef()
}
return ""
}
func (t *TestImageType) Size(size uint64) uint64 {
return 0
}
func (t *TestImageType) PartitionType() string {
return ""
}
func (t *TestImageType) BootMode() distro.BootMode {
return distro.BOOT_HYBRID
}
func (t *TestImageType) BuildPipelines() []string {
return distro.BuildPipelinesFallback()
}
func (t *TestImageType) PayloadPipelines() []string {
return distro.PayloadPipelinesFallback()
}
func (t *TestImageType) PayloadPackageSets() []string {
return []string{blueprintPkgsKey}
}
func (t *TestImageType) PackageSetsChains() map[string][]string {
return map[string][]string{
osPkgsKey: {osPkgsKey, blueprintPkgsKey},
}
}
func (t *TestImageType) Exports() []string {
return distro.ExportsFallback()
}
func (t *TestImageType) Manifest(b *blueprint.Blueprint, options distro.ImageOptions, repos []rpmmd.RepoConfig, seed int64) (*manifest.Manifest, []string, error) {
var bpPkgs []string
if b != nil {
mountpoints := b.Customizations.GetFilesystems()
invalidMountpoints := []string{}
for _, m := range mountpoints {
if m.Mountpoint != "/" {
invalidMountpoints = append(invalidMountpoints, m.Mountpoint)
}
}
if len(invalidMountpoints) > 0 {
return nil, nil, fmt.Errorf("The following custom mountpoints are not supported %+q", invalidMountpoints)
}
bpPkgs = b.GetPackages()
}
var ostreeSources []ostree.SourceSpec
if defaultRef := t.OSTreeRef(); defaultRef != "" {
// ostree image type
ostreeSource := ostree.SourceSpec{ // init with default
Ref: defaultRef,
}
if ostreeOptions := options.OSTree; ostreeOptions != nil {
// handle the parameter combo error like we do in distros
if ostreeOptions.ParentRef != "" && ostreeOptions.URL == "" {
// specifying parent ref also requires URL
return nil, nil, ostree.NewParameterComboError("ostree parent ref specified, but no URL to retrieve it")
}
if ostreeOptions.ImageRef != "" { // override with ref from image options
ostreeSource.Ref = ostreeOptions.ImageRef
}
if ostreeOptions.ParentRef != "" { // override with parent ref
ostreeSource.Ref = ostreeOptions.ParentRef
}
// copy any other options that might be specified
ostreeSource.URL = options.OSTree.URL
ostreeSource.RHSM = options.OSTree.RHSM
}
ostreeSources = []ostree.SourceSpec{ostreeSource}
}
buildPackages := []rpmmd.PackageSet{{
Include: []string{
"dep-package1",
"dep-package2",
"dep-package3",
},
Repositories: repos,
}}
osPackages := []rpmmd.PackageSet{
{
Include: bpPkgs,
Repositories: repos,
},
{
Include: []string{
"dep-package1",
"dep-package2",
"dep-package3",
},
Repositories: repos,
},
}
m := &manifest.Manifest{}
manifest.NewContentTest(m, buildPkgsKey, buildPackages, nil, nil)
manifest.NewContentTest(m, osPkgsKey, osPackages, nil, ostreeSources)
return m, nil, nil
}
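// Illustrative sketch, not part of the vendored source: exercising the mock
// Manifest implementation above the way a unit test might, using the New()
// constructor defined further down in this file.
func exampleTestDistroManifest() error {
	d := New()
	a, err := d.GetArch(TestArchName)
	if err != nil {
		return err
	}
	it, err := a.GetImageType(TestImageTypeName)
	if err != nil {
		return err
	}
	_, _, err = it.Manifest(&blueprint.Blueprint{}, distro.ImageOptions{}, nil, 0)
	return err
}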
// newTestDistro returns a new instance of TestDistro with the
// given name and modulePlatformID.
//
// It contains three architectures:
// "test_arch" has image types "test_type" and "test_ostree_type",
// "test_arch2" has image types "test_type" and "test_type2",
// "test_arch3" has the cloud API image types (ami, gce, vhd, qcow2, vmdk,
// rhel-edge-commit, rhel-edge-installer and image-installer).
func newTestDistro(name, modulePlatformID, releasever string) *TestDistro {
td := TestDistro{
name: name,
releasever: releasever,
modulePlatformID: modulePlatformID,
ostreeRef: "test/13/x86_64/edge",
}
ta1 := TestArch{
name: TestArchName,
}
ta2 := TestArch{
name: TestArch2Name,
}
ta3 := TestArch{
name: TestArch3Name,
}
it1 := TestImageType{
name: TestImageTypeName,
}
it2 := TestImageType{
name: TestImageType2Name,
}
it3 := TestImageType{
name: TestImageTypeAmi,
}
it4 := TestImageType{
name: TestImageTypeVhd,
}
it5 := TestImageType{
name: TestImageTypeEdgeCommit,
}
it6 := TestImageType{
name: TestImageTypeEdgeInstaller,
}
it7 := TestImageType{
name: TestImageTypeImageInstaller,
}
it8 := TestImageType{
name: TestImageTypeQcow2,
}
it9 := TestImageType{
name: TestImageTypeVmdk,
}
it10 := TestImageType{
name: TestImageTypeGce,
}
it11 := TestImageType{
name: TestImageTypeOSTree,
}
ta1.addImageTypes(it1, it11)
ta2.addImageTypes(it1, it2)
ta3.addImageTypes(it3, it4, it5, it6, it7, it8, it9, it10)
td.addArches(&ta1, &ta2, &ta3)
return &td
}
// New returns new instance of TestDistro named "test-distro".
func New() *TestDistro {
return newTestDistro(TestDistroName, TestDistroModulePlatformID, TestDistroReleasever)
}
func NewRegistry() *distroregistry.Registry {
td := New()
registry, err := distroregistry.New(td, td)
if err != nil {
panic(err)
}
// Override the host's architecture name with the test's name
registry.SetHostArchName(TestArchName)
return registry
}
// New2 returns new instance of TestDistro named "test-distro-2".
func New2() *TestDistro {
return newTestDistro(TestDistro2Name, TestDistro2ModulePlatformID, TestDistro2Releasever)
}
// ResolveContent transforms content source specs into resolved specs for serialization.
// For packages, it uses dnfjson_mock.BaseDeps() every time, but retains
// the map keys from the input.
// For ostree commits it hashes the URL+Ref to create a checksum.
func ResolveContent(pkgs map[string][]rpmmd.PackageSet, containers map[string][]container.SourceSpec, commits map[string][]ostree.SourceSpec) (map[string][]rpmmd.PackageSpec, map[string][]container.Spec, map[string][]ostree.CommitSpec) {
pkgSpecs := make(map[string][]rpmmd.PackageSpec, len(pkgs))
for name := range pkgs {
pkgSpecs[name] = dnfjson_mock.BaseDeps()
}
containerSpecs := make(map[string][]container.Spec, len(containers))
for name := range containers {
containerSpecs[name] = make([]container.Spec, len(containers[name]))
for idx := range containers[name] {
containerSpecs[name][idx] = container.Spec{
Source: containers[name][idx].Source,
TLSVerify: containers[name][idx].TLSVerify,
LocalName: containers[name][idx].Name,
}
}
}
commitSpecs := make(map[string][]ostree.CommitSpec, len(commits))
for name := range commits {
commitSpecs[name] = make([]ostree.CommitSpec, len(commits[name]))
for idx := range commits[name] {
commitSpecs[name][idx] = ostree.CommitSpec{
Ref: commits[name][idx].Ref,
URL: commits[name][idx].URL,
Checksum: fmt.Sprintf("%x", sha256.Sum256([]byte(commits[name][idx].URL+commits[name][idx].Ref))),
}
fmt.Printf("Test distro spec: %+v\n", commitSpecs[name][idx])
}
}
return pkgSpecs, containerSpecs, commitSpecs
}
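// Illustrative sketch, not part of the vendored source: resolving mocked content
// specs with the helper above. The URL and ref are placeholders.
func exampleResolveContent() {
	pkgs := map[string][]rpmmd.PackageSet{
		osPkgsKey: {{Include: []string{"bash"}}},
	}
	commits := map[string][]ostree.SourceSpec{
		osPkgsKey: {{URL: "https://ostree.example.com/repo", Ref: "test/13/x86_64/edge"}},
	}
	pkgSpecs, containerSpecs, commitSpecs := ResolveContent(pkgs, nil, commits)
	fmt.Println(len(pkgSpecs), len(containerSpecs), len(commitSpecs))
}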


@ -0,0 +1,143 @@
package distroregistry
import (
"fmt"
"sort"
"strings"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/pkg/distro"
"github.com/osbuild/images/pkg/distro/fedora"
"github.com/osbuild/images/pkg/distro/rhel7"
"github.com/osbuild/images/pkg/distro/rhel8"
"github.com/osbuild/images/pkg/distro/rhel9"
)
// When adding support for a new distribution, add it here.
// Treat this list as a constant: do not modify it at runtime.
var supportedDistros = []func() distro.Distro{
fedora.NewF37,
fedora.NewF38,
fedora.NewF39,
rhel7.New,
rhel8.New,
rhel8.NewRHEL84,
rhel8.NewRHEL85,
rhel8.NewRHEL86,
rhel8.NewRHEL87,
rhel8.NewRHEL88,
rhel8.NewRHEL89,
rhel8.NewCentos,
rhel9.New,
rhel9.NewRHEL90,
rhel9.NewRHEL91,
rhel9.NewRHEL92,
rhel9.NewRHEL93,
rhel9.NewCentOS9,
}
type Registry struct {
distros map[string]distro.Distro
hostDistro distro.Distro
hostArchName string
}
func New(hostDistro distro.Distro, distros ...distro.Distro) (*Registry, error) {
reg := &Registry{
distros: make(map[string]distro.Distro),
hostDistro: hostDistro,
hostArchName: common.CurrentArch(),
}
for _, d := range distros {
name := d.Name()
if _, exists := reg.distros[name]; exists {
return nil, fmt.Errorf("New: passed two distros with the same name: %s", d.Name())
}
reg.distros[name] = d
}
return reg, nil
}
// NewDefault creates a Registry with all distributions supported by
// osbuild-composer. If you need to add a distribution here, see the
// supportedDistros variable.
func NewDefault() *Registry {
var distros []distro.Distro
var hostDistro distro.Distro
// First determine the name of the Host Distro
// If there was an error, then the hostDistroName will be an empty string
// and as a result, the hostDistro will have a nil value when calling New().
// Getting the host distro later using FromHost() will return nil as well.
hostDistroName, _, _, _ := common.GetHostDistroName()
for _, supportedDistro := range supportedDistros {
distro := supportedDistro()
if distro.Name() == hostDistroName {
hostDistro = supportedDistro()
}
distros = append(distros, distro)
}
registry, err := New(hostDistro, distros...)
if err != nil {
panic(fmt.Sprintf("two supported distros have the same name, this is a programming error: %v", err))
}
return registry
}
func (r *Registry) GetDistro(name string) distro.Distro {
d, ok := r.distros[name]
if !ok {
return nil
}
return d
}
// List returns the names of all distros in a Registry, sorted alphabetically.
func (r *Registry) List() []string {
list := []string{}
for _, d := range r.distros {
list = append(list, d.Name())
}
sort.Strings(list)
return list
}
func mangleHostDistroName(name string, isBeta, isStream bool) string {
hostDistroName := name
if strings.HasPrefix(hostDistroName, "rhel-8") {
hostDistroName = "rhel-8"
}
if isBeta {
hostDistroName += "-beta"
}
// override repository for centos stream, remove when CentOS 8 is EOL
if isStream && hostDistroName == "centos-8" {
hostDistroName = "centos-stream-8"
}
return hostDistroName
}
// FromHost returns a distro instance, that is specific to the host.
// Its name may differ from other supported distros, if the host version
// is e.g. a Beta or a Stream.
func (r *Registry) FromHost() distro.Distro {
return r.hostDistro
}
// HostArchName returns the host's arch name
func (r *Registry) HostArchName() string {
return r.hostArchName
}
// SetHostArchName can be used to override the host's arch name for testing
func (r *Registry) SetHostArchName(name string) {
r.hostArchName = name
}
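// Illustrative sketch, not part of the vendored source: a typical consumer
// lookup against the registry defined above. The distro name is supplied by the
// caller; valid names come from Registry.List().
func exampleRegistryLookup(name string) (distro.Distro, error) {
	reg := NewDefault()
	d := reg.GetDistro(name)
	if d == nil {
		return nil, fmt.Errorf("unknown distro %q, known distros: %s", name, strings.Join(reg.List(), ", "))
	}
	return d, nil
}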


@ -0,0 +1,127 @@
package image
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/environment"
"github.com/osbuild/images/internal/workload"
"github.com/osbuild/images/pkg/artifact"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/runner"
)
type AnacondaLiveInstaller struct {
Base
Platform platform.Platform
Environment environment.Environment
Workload workload.Workload
ExtraBasePackages rpmmd.PackageSet
ISOLabelTempl string
Product string
Variant string
OSName string
OSVersion string
Release string
Filename string
AdditionalKernelOpts []string
}
func NewAnacondaLiveInstaller() *AnacondaLiveInstaller {
return &AnacondaLiveInstaller{
Base: NewBase("live-installer"),
}
}
func (img *AnacondaLiveInstaller) InstantiateManifest(m *manifest.Manifest,
repos []rpmmd.RepoConfig,
runner runner.Runner,
rng *rand.Rand) (*artifact.Artifact, error) {
buildPipeline := manifest.NewBuild(m, runner, repos)
buildPipeline.Checkpoint()
livePipeline := manifest.NewAnacondaInstaller(m,
manifest.AnacondaInstallerTypeLive,
buildPipeline,
img.Platform,
repos,
"kernel",
img.Product,
img.OSVersion)
livePipeline.ExtraPackages = img.ExtraBasePackages.Include
livePipeline.Variant = img.Variant
livePipeline.Biosdevname = (img.Platform.GetArch() == platform.ARCH_X86_64)
livePipeline.Checkpoint()
rootfsPartitionTable := &disk.PartitionTable{
Size: 20 * common.MebiByte,
Partitions: []disk.Partition{
{
Start: 0,
Size: 20 * common.MebiByte,
Payload: &disk.Filesystem{
Type: "vfat",
Mountpoint: "/",
UUID: disk.NewVolIDFromRand(rng),
},
},
},
}
// TODO: replace isoLabelTmpl with more high-level properties
isoLabel := fmt.Sprintf(img.ISOLabelTempl, img.Platform.GetArch())
rootfsImagePipeline := manifest.NewISORootfsImg(m, buildPipeline, livePipeline)
rootfsImagePipeline.Size = 8 * common.GibiByte
bootTreePipeline := manifest.NewEFIBootTree(m, buildPipeline, img.Product, img.OSVersion)
bootTreePipeline.Platform = img.Platform
bootTreePipeline.UEFIVendor = img.Platform.GetUEFIVendor()
bootTreePipeline.ISOLabel = isoLabel
kernelOpts := []string{
fmt.Sprintf("root=live:CDLABEL=%s", isoLabel),
"rd.live.image",
"quiet",
"rhgb",
}
kernelOpts = append(kernelOpts, img.AdditionalKernelOpts...)
bootTreePipeline.KernelOpts = kernelOpts
// enable ISOLinux on x86_64 only
isoLinuxEnabled := img.Platform.GetArch() == platform.ARCH_X86_64
isoTreePipeline := manifest.NewAnacondaInstallerISOTree(m,
buildPipeline,
livePipeline,
rootfsImagePipeline,
bootTreePipeline,
isoLabel)
isoTreePipeline.PartitionTable = rootfsPartitionTable
isoTreePipeline.Release = img.Release
isoTreePipeline.OSName = img.OSName
isoTreePipeline.KernelOpts = kernelOpts
isoTreePipeline.ISOLinux = isoLinuxEnabled
isoPipeline := manifest.NewISO(m, buildPipeline, isoTreePipeline, isoLabel)
isoPipeline.Filename = img.Filename
isoPipeline.ISOLinux = isoLinuxEnabled
artifact := isoPipeline.Export()
return artifact, nil
}
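// Illustrative sketch, not part of the vendored source: minimal setup of the
// live installer image kind defined above. All field values are placeholders;
// the ISO label template is expanded with the platform architecture inside
// InstantiateManifest.
func exampleLiveInstaller(p platform.Platform) *AnacondaLiveInstaller {
	img := NewAnacondaLiveInstaller()
	img.Platform = p
	img.Product = "Example Linux"
	img.OSVersion = "9"
	img.Release = "Example Linux 9"
	img.ISOLabelTempl = "Example-9-%s"
	img.Filename = "live-installer.iso"
	return img
}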


@ -0,0 +1,133 @@
package image
import (
"fmt"
"math/rand"
"github.com/osbuild/images/internal/common"
"github.com/osbuild/images/internal/users"
"github.com/osbuild/images/pkg/artifact"
"github.com/osbuild/images/pkg/disk"
"github.com/osbuild/images/pkg/manifest"
"github.com/osbuild/images/pkg/ostree"
"github.com/osbuild/images/pkg/platform"
"github.com/osbuild/images/pkg/rpmmd"
"github.com/osbuild/images/pkg/runner"
)
type AnacondaOSTreeInstaller struct {
Base
Platform platform.Platform
ExtraBasePackages rpmmd.PackageSet
Users []users.User
Groups []users.Group
SquashfsCompression string
ISOLabelTempl string
Product string
Variant string
OSName string
OSVersion string
Release string
Commit ostree.SourceSpec
Filename string
AdditionalDracutModules []string
AdditionalAnacondaModules []string
AdditionalDrivers []string
}
func NewAnacondaOSTreeInstaller(commit ostree.SourceSpec) *AnacondaOSTreeInstaller {
return &AnacondaOSTreeInstaller{
Base: NewBase("ostree-installer"),
Commit: commit,
}
}
func (img *AnacondaOSTreeInstaller) InstantiateManifest(m *manifest.Manifest,
repos []rpmmd.RepoConfig,
runner runner.Runner,
rng *rand.Rand) (*artifact.Artifact, error) {
buildPipeline := manifest.NewBuild(m, runner, repos)
buildPipeline.Checkpoint()
anacondaPipeline := manifest.NewAnacondaInstaller(m,
manifest.AnacondaInstallerTypePayload,
buildPipeline,
img.Platform,
repos,
"kernel",
img.Product,
img.OSVersion)
anacondaPipeline.ExtraPackages = img.ExtraBasePackages.Include
anacondaPipeline.ExtraRepos = img.ExtraBasePackages.Repositories
anacondaPipeline.Users = img.Users
anacondaPipeline.Groups = img.Groups
anacondaPipeline.Variant = img.Variant
anacondaPipeline.Biosdevname = (img.Platform.GetArch() == platform.ARCH_X86_64)
anacondaPipeline.Checkpoint()
anacondaPipeline.AdditionalDracutModules = img.AdditionalDracutModules
anacondaPipeline.AdditionalAnacondaModules = img.AdditionalAnacondaModules
anacondaPipeline.AdditionalDrivers = img.AdditionalDrivers
rootfsPartitionTable := &disk.PartitionTable{
Size: 20 * common.MebiByte,
Partitions: []disk.Partition{
{
Start: 0,
Size: 20 * common.MebiByte,
Payload: &disk.Filesystem{
Type: "vfat",
Mountpoint: "/",
UUID: disk.NewVolIDFromRand(rng),
},
},
},
}
// TODO: replace isoLabelTmpl with more high-level properties
isoLabel := fmt.Sprintf(img.ISOLabelTempl, img.Platform.GetArch())
rootfsImagePipeline := manifest.NewISORootfsImg(m, buildPipeline, anacondaPipeline)
rootfsImagePipeline.Size = 4 * common.GibiByte
bootTreePipeline := manifest.NewEFIBootTree(m, buildPipeline, img.Product, img.OSVersion)
bootTreePipeline.Platform = img.Platform
bootTreePipeline.UEFIVendor = img.Platform.GetUEFIVendor()
bootTreePipeline.ISOLabel = isoLabel
bootTreePipeline.KernelOpts = []string{fmt.Sprintf("inst.stage2=hd:LABEL=%s", isoLabel), fmt.Sprintf("inst.ks=hd:LABEL=%s:%s", isoLabel, kspath)}
// enable ISOLinux on x86_64 only
isoLinuxEnabled := img.Platform.GetArch() == platform.ARCH_X86_64
isoTreePipeline := manifest.NewAnacondaInstallerISOTree(m,
buildPipeline,
anacondaPipeline,
rootfsImagePipeline,
bootTreePipeline,
isoLabel)
isoTreePipeline.PartitionTable = rootfsPartitionTable
isoTreePipeline.Release = img.Release
isoTreePipeline.OSName = img.OSName
isoTreePipeline.Users = img.Users
isoTreePipeline.Groups = img.Groups
isoTreePipeline.SquashfsCompression = img.SquashfsCompression
// For ostree installers, always put the kickstart file in the root of the ISO
isoTreePipeline.KSPath = kspath
isoTreePipeline.PayloadPath = "/ostree/repo"
isoTreePipeline.OSTreeCommitSource = &img.Commit
isoTreePipeline.ISOLinux = isoLinuxEnabled
isoPipeline := manifest.NewISO(m, buildPipeline, isoTreePipeline, isoLabel)
isoPipeline.Filename = img.Filename
isoPipeline.ISOLinux = isoLinuxEnabled
artifact := isoPipeline.Export()
return artifact, nil
}

Some files were not shown because too many files have changed in this diff.