go.mod: update github.com/containers/image/v5

Version 5.22 introduced a new option in /etc/containers/policy.json called
keyPaths, see

https://github.com/containers/image/pull/1609

EL9 immediately took advantage of this new feature and started using it, see
04645c4a84
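
For illustration, a signedBy requirement using the new key might look
roughly like this (a sketch based on the pull request above, not the exact
EL9 policy; the registry name and key paths are made up):

{
    "transports": {
        "docker": {
            "registry.example.com": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPaths": ["/etc/pki/first-key.gpg", "/etc/pki/second-key.gpg"]
                }
            ]
        }
    }
}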

This quickly became an issue in our code: the Go library (containers/image)
parses the configuration file very strictly and refuses to create a client
when a policy.json with an unknown key is present on the filesystem. As we
used 5.21.1, which doesn't know the new key, our unit tests started failing
whenever a new enough containers-common was present.
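
The failure is easy to trigger in isolation; a minimal sketch (using the
default policy path, error handling trimmed):

package main

import (
	"fmt"

	"github.com/containers/image/v5/signature"
)

func main() {
	// With containers/image v5.21.1 vendored, this fails as soon as
	// /etc/containers/policy.json contains the (then unknown) keyPaths key,
	// reporting: Unknown key "keyPaths".
	if _, err := signature.NewPolicyFromFile("/etc/containers/policy.json"); err != nil {
		fmt.Println("invalid policy:", err)
	}
}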

Reproducer:
podman run --pull=always --rm -it centos:stream9
dnf install -y dnf-plugins-core
dnf config-manager --set-enabled crb
dnf install -y gpgme-devel libassuan-devel krb5-devel golang git-core
git clone https://github.com/osbuild/osbuild-composer
cd osbuild-composer

# install the new containers-common and run the test
dnf install -y https://kojihub.stream.centos.org/kojifiles/packages/containers-common/1/44.el9/x86_64/containers-common-1-44.el9.x86_64.rpm
go test -count 1 ./...

# this returns:
--- FAIL: TestClientResolve (0.00s)
    client_test.go:31:
        	Error Trace:	client_test.go:31
        	Error:      	Received unexpected error:
        	            	Unknown key "keyPaths"
        	            	invalid policy in "/etc/containers/policy.json"
        	            	github.com/containers/image/v5/signature.NewPolicyFromFile
        	            		/osbuild-composer/vendor/github.com/containers/image/v5/signature/policy_config.go:88
        	            	github.com/osbuild/osbuild-composer/internal/container.NewClient
        	            		/osbuild-composer/internal/container/client.go:123
        	            	github.com/osbuild/osbuild-composer/internal/container_test.TestClientResolve
        	            		/osbuild-composer/internal/container/client_test.go:29
        	            	testing.tRunner
        	            		/usr/lib/golang/src/testing/testing.go:1439
        	            	runtime.goexit
        	            		/usr/lib/golang/src/runtime/asm_amd64.s:1571
        	Test:       	TestClientResolve
    client_test.go:32:
        	Error Trace:	client_test.go:32
        	Error:      	Expected value not to be nil.
        	Test:       	TestClientResolve

When run with an older containers-common, it succeeds:
dnf install -y https://kojihub.stream.centos.org/kojifiles/packages/containers-common/1/40.el9/x86_64/containers-common-1-40.el9.x86_64.rpm
go test -count 1 ./...
PASS

To sum it up, I had to upgrade github.com/containers/image/v5 to v5.22.0.
Unfortunately, this wasn't so simple:

go get github.com/containers/image/v5@latest
go: github.com/containers/image/v5@v5.22.0 requires
	github.com/letsencrypt/boulder@v0.0.0-20220331220046-b23ab962616e requires
	github.com/honeycombio/beeline-go@v1.1.1 requires
	github.com/gobuffalo/pop/v5@v5.3.1 requires
	github.com/mattn/go-sqlite3@v2.0.3+incompatible: reading github.com/mattn/go-sqlite3/go.mod at revision v2.0.3: unknown revision v2.0.3

It turns out that github.com/mattn/go-sqlite3@v2.0.3+incompatible was
recently retracted (https://github.com/mattn/go-sqlite3/pull/998), which
broke a ton of packages depending on it. I was able to fix it by adding

exclude github.com/mattn/go-sqlite3 v2.0.3+incompatible

to our go.mod, see
https://github.com/mattn/go-sqlite3/issues/975#issuecomment-955661657
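
With the exclude in place, the relevant part of go.mod ends up looking
roughly like this (a sketch; the go directive and the require list are
illustrative, only the exclude line is taken verbatim from the fix):

module github.com/osbuild/osbuild-composer

go 1.17

require (
	github.com/containers/image/v5 v5.22.0
	// ...
)

// v2.0.3+incompatible was retracted upstream; excluding it lets module
// resolution pick a go-sqlite3 version that actually exists.
exclude github.com/mattn/go-sqlite3 v2.0.3+incompatible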

After adding it,
go get github.com/containers/image/v5@latest
succeeded and tools/prepare-source.sh took care of the rest.

Signed-off-by: Ondřej Budai <ondrej@budai.cz>

vendor/github.com/containers/image/v5/copy/blob.go generated vendored Normal file

@@ -0,0 +1,171 @@
package copy

import (
	"context"
	"errors"
	"fmt"
	"io"

	"github.com/containers/image/v5/internal/private"
	compressiontypes "github.com/containers/image/v5/pkg/compression/types"
	"github.com/containers/image/v5/types"
	"github.com/sirupsen/logrus"
)

// copyBlobFromStream copies a blob with srcInfo (with known Digest and Annotations and possibly known Size) from srcReader to dest,
// perhaps sending a copy to an io.Writer if getOriginalLayerCopyWriter != nil,
// perhaps (de/re/)compressing it if canModifyBlob,
// and returns a complete blobInfo of the copied blob.
func (ic *imageCopier) copyBlobFromStream(ctx context.Context, srcReader io.Reader, srcInfo types.BlobInfo,
	getOriginalLayerCopyWriter func(decompressor compressiontypes.DecompressorFunc) io.Writer,
	isConfig bool, toEncrypt bool, bar *progressBar, layerIndex int, emptyLayer bool) (types.BlobInfo, error) {
	// The copying happens through a pipeline of connected io.Readers;
	// that pipeline is built by updating stream.
	// === Input: srcReader
	stream := sourceStream{
		reader: srcReader,
		info:   srcInfo,
	}

	// === Process input through digestingReader to validate against the expected digest.
	// Be paranoid; in case PutBlob somehow managed to ignore an error from digestingReader,
	// use a separate validation failure indicator.
	// Note that for this check we don't use the stronger "validationSucceeded" indicator, because
	// dest.PutBlob may detect that the layer already exists, in which case we don't
	// read stream to the end, and validation does not happen.
	digestingReader, err := newDigestingReader(stream.reader, srcInfo.Digest)
	if err != nil {
		return types.BlobInfo{}, fmt.Errorf("preparing to verify blob %s: %w", srcInfo.Digest, err)
	}
	stream.reader = digestingReader

	// === Update progress bars
	stream.reader = bar.ProxyReader(stream.reader)

	// === Decrypt the stream, if required.
	decryptionStep, err := ic.c.blobPipelineDecryptionStep(&stream, srcInfo)
	if err != nil {
		return types.BlobInfo{}, err
	}

	// === Detect compression of the input stream.
	// This requires us to “peek ahead” into the stream to read the initial part, which requires us to chain through another io.Reader returned by DetectCompression.
	detectedCompression, err := blobPipelineDetectCompressionStep(&stream, srcInfo)
	if err != nil {
		return types.BlobInfo{}, err
	}

	// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
	var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.
	if getOriginalLayerCopyWriter != nil {
		stream.reader = io.TeeReader(stream.reader, getOriginalLayerCopyWriter(detectedCompression.decompressor))
		originalLayerReader = stream.reader
	}

	// WARNING: If you are adding new reasons to change the blob, update also the OptimizeDestinationImageAlreadyExists
	// short-circuit conditions
	canModifyBlob := !isConfig && ic.cannotModifyManifestReason == ""

	// === Deal with layer compression/decompression if necessary
	compressionStep, err := ic.blobPipelineCompressionStep(&stream, canModifyBlob, srcInfo, detectedCompression)
	if err != nil {
		return types.BlobInfo{}, err
	}
	defer compressionStep.close()

	// === Encrypt the stream for valid mediatypes if ociEncryptConfig provided
	if decryptionStep.decrypting && toEncrypt {
		// If nothing else, we can only set uploadedInfo.CryptoOperation to a single value.
		// Before relaxing this, see the original pull request's review if there are other reasons to reject this.
		return types.BlobInfo{}, errors.New("Unable to support both decryption and encryption in the same copy")
	}
	encryptionStep, err := ic.c.blobPipelineEncryptionStep(&stream, toEncrypt, srcInfo, decryptionStep)
	if err != nil {
		return types.BlobInfo{}, err
	}

	// === Report progress using the ic.c.progress channel, if required.
	if ic.c.progress != nil && ic.c.progressInterval > 0 {
		progressReader := newProgressReader(
			stream.reader,
			ic.c.progress,
			ic.c.progressInterval,
			srcInfo,
		)
		defer progressReader.reportDone()
		stream.reader = progressReader
	}

	// === Finally, send the layer stream to dest.
	options := private.PutBlobOptions{
		Cache:      ic.c.blobInfoCache,
		IsConfig:   isConfig,
		EmptyLayer: emptyLayer,
	}
	if !isConfig {
		options.LayerIndex = &layerIndex
	}
	uploadedInfo, err := ic.c.dest.PutBlobWithOptions(ctx, &errorAnnotationReader{stream.reader}, stream.info, options)
	if err != nil {
		return types.BlobInfo{}, fmt.Errorf("writing blob: %w", err)
	}

	uploadedInfo.Annotations = stream.info.Annotations
	compressionStep.updateCompressionEdits(&uploadedInfo.CompressionOperation, &uploadedInfo.CompressionAlgorithm, &uploadedInfo.Annotations)
	decryptionStep.updateCryptoOperation(&uploadedInfo.CryptoOperation)
	if err := encryptionStep.updateCryptoOperationAndAnnotations(&uploadedInfo.CryptoOperation, &uploadedInfo.Annotations); err != nil {
		return types.BlobInfo{}, err
	}

	// This is fairly horrible: the writer from getOriginalLayerCopyWriter wants to consume
	// all of the input (to compute DiffIDs), even if dest.PutBlob does not need it.
	// So, read everything from originalLayerReader, which will cause the rest to be
	// sent there if we are not already at EOF.
	if getOriginalLayerCopyWriter != nil {
		logrus.Debugf("Consuming rest of the original blob to satisfy getOriginalLayerCopyWriter")
		_, err := io.Copy(io.Discard, originalLayerReader)
		if err != nil {
			return types.BlobInfo{}, fmt.Errorf("reading input blob %s: %w", srcInfo.Digest, err)
		}
	}

	if digestingReader.validationFailed { // Coverage: This should never happen.
		return types.BlobInfo{}, fmt.Errorf("Internal error writing blob %s, digest verification failed but was ignored", srcInfo.Digest)
	}
	if stream.info.Digest != "" && uploadedInfo.Digest != stream.info.Digest {
		return types.BlobInfo{}, fmt.Errorf("Internal error writing blob %s, blob with digest %s saved with digest %s", srcInfo.Digest, stream.info.Digest, uploadedInfo.Digest)
	}
	if digestingReader.validationSucceeded {
		if err := compressionStep.recordValidatedDigestData(ic.c, uploadedInfo, srcInfo, encryptionStep, decryptionStep); err != nil {
			return types.BlobInfo{}, err
		}
	}

	return uploadedInfo, nil
}

// sourceStream encapsulates an input consumed by copyBlobFromStream, in progress of being built.
// This allows handlers of individual aspects to build the copy pipeline without _too much_
// specific cooperation by the caller.
//
// We are currently very far from a generalized plug-and-play API for building/consuming the pipeline
// without specific knowledge of various aspects in copyBlobFromStream; that may come one day.
type sourceStream struct {
	reader io.Reader
	info   types.BlobInfo // corresponding to the data available in reader.
}

// errorAnnotationReader wraps the io.Reader passed to PutBlob for annotating errors that happen during read.
// These errors are reported as PutBlob errors, so we would otherwise misleadingly attribute them to the copy destination.
type errorAnnotationReader struct {
	reader io.Reader
}

// Read annotates any error that happened during the read
func (r errorAnnotationReader) Read(b []byte) (n int, err error) {
	n, err = r.reader.Read(b)
	if err != nil && err != io.EOF {
		return n, fmt.Errorf("happened during read: %w", err)
	}
	return n, err
}


@@ -0,0 +1,321 @@
package copy

import (
	"errors"
	"fmt"
	"io"

	internalblobinfocache "github.com/containers/image/v5/internal/blobinfocache"
	"github.com/containers/image/v5/pkg/compression"
	compressiontypes "github.com/containers/image/v5/pkg/compression/types"
	"github.com/containers/image/v5/types"
	"github.com/sirupsen/logrus"
)

// bpDetectCompressionStepData contains data that the copy pipeline needs about the “detect compression” step.
type bpDetectCompressionStepData struct {
	isCompressed      bool
	format            compressiontypes.Algorithm        // Valid if isCompressed
	decompressor      compressiontypes.DecompressorFunc // Valid if isCompressed
	srcCompressorName string                            // Compressor name to possibly record in the blob info cache for the source blob.
}

// blobPipelineDetectCompressionStep updates *stream to detect its current compression format.
// srcInfo is only used for error messages.
// Returns data for other steps.
func blobPipelineDetectCompressionStep(stream *sourceStream, srcInfo types.BlobInfo) (bpDetectCompressionStepData, error) {
	// This requires us to “peek ahead” into the stream to read the initial part, which requires us to chain through another io.Reader returned by DetectCompression.
	format, decompressor, reader, err := compression.DetectCompressionFormat(stream.reader) // We could skip this in some cases, but let's keep the code path uniform
	if err != nil {
		return bpDetectCompressionStepData{}, fmt.Errorf("reading blob %s: %w", srcInfo.Digest, err)
	}
	stream.reader = reader

	res := bpDetectCompressionStepData{
		isCompressed: decompressor != nil,
		format:       format,
		decompressor: decompressor,
	}
	if res.isCompressed {
		res.srcCompressorName = format.Name()
	} else {
		res.srcCompressorName = internalblobinfocache.Uncompressed
	}
	if expectedFormat, known := expectedCompressionFormats[stream.info.MediaType]; known && res.isCompressed && format.Name() != expectedFormat.Name() {
		logrus.Debugf("blob %s with type %s should be compressed with %s, but compressor appears to be %s", srcInfo.Digest.String(), srcInfo.MediaType, expectedFormat.Name(), format.Name())
	}
	return res, nil
}

// bpCompressionStepData contains data that the copy pipeline needs about the compression step.
type bpCompressionStepData struct {
	operation              types.LayerCompression      // Operation to use for updating the blob metadata.
	uploadedAlgorithm      *compressiontypes.Algorithm // An algorithm parameter for the compressionOperation edits.
	uploadedAnnotations    map[string]string           // Annotations that should be set on the uploaded blob. WARNING: This is only set after the srcStream.reader is fully consumed.
	srcCompressorName      string                      // Compressor name to record in the blob info cache for the source blob.
	uploadedCompressorName string                      // Compressor name to record in the blob info cache for the uploaded blob.
	closers                []io.Closer                 // Objects to close after the upload is done, if any.
}

// blobPipelineCompressionStep updates *stream to compress and/or decompress it.
// srcInfo is primarily used for error messages.
// Returns data for other steps; the caller should eventually call updateCompressionEdits and perhaps recordValidatedDigestData,
// and must eventually call close.
func (ic *imageCopier) blobPipelineCompressionStep(stream *sourceStream, canModifyBlob bool, srcInfo types.BlobInfo,
	detected bpDetectCompressionStepData) (*bpCompressionStepData, error) {
	// WARNING: If you are adding new reasons to change the blob, update also the OptimizeDestinationImageAlreadyExists
	// short-circuit conditions
	layerCompressionChangeSupported := ic.src.CanChangeLayerCompression(stream.info.MediaType)
	if !layerCompressionChangeSupported {
		logrus.Debugf("Compression change for blob %s (%q) not supported", srcInfo.Digest, stream.info.MediaType)
	}
	if canModifyBlob && layerCompressionChangeSupported {
		for _, fn := range []func(*sourceStream, bpDetectCompressionStepData) (*bpCompressionStepData, error){
			ic.bpcPreserveEncrypted,
			ic.bpcCompressUncompressed,
			ic.bpcRecompressCompressed,
			ic.bpcDecompressCompressed,
		} {
			res, err := fn(stream, detected)
			if err != nil {
				return nil, err
			}
			if res != nil {
				return res, nil
			}
		}
	}
	return ic.bpcPreserveOriginal(stream, detected, layerCompressionChangeSupported), nil
}

// bpcPreserveEncrypted checks if the input is encrypted, and returns a *bpCompressionStepData if so.
func (ic *imageCopier) bpcPreserveEncrypted(stream *sourceStream, _ bpDetectCompressionStepData) (*bpCompressionStepData, error) {
	if isOciEncrypted(stream.info.MediaType) {
		logrus.Debugf("Using original blob without modification for encrypted blob")
		// PreserveOriginal due to any compression not being able to be done on an encrypted blob unless decrypted
		return &bpCompressionStepData{
			operation:              types.PreserveOriginal,
			uploadedAlgorithm:      nil,
			srcCompressorName:      internalblobinfocache.UnknownCompression,
			uploadedCompressorName: internalblobinfocache.UnknownCompression,
		}, nil
	}
	return nil, nil
}

// bpcCompressUncompressed checks if we should be compressing an uncompressed input, and returns a *bpCompressionStepData if so.
func (ic *imageCopier) bpcCompressUncompressed(stream *sourceStream, detected bpDetectCompressionStepData) (*bpCompressionStepData, error) {
	if ic.c.dest.DesiredLayerCompression() == types.Compress && !detected.isCompressed {
		logrus.Debugf("Compressing blob on the fly")
		var uploadedAlgorithm *compressiontypes.Algorithm
		if ic.c.compressionFormat != nil {
			uploadedAlgorithm = ic.c.compressionFormat
		} else {
			uploadedAlgorithm = defaultCompressionFormat
		}

		reader, annotations := ic.c.compressedStream(stream.reader, *uploadedAlgorithm)
		// Note: reader must be closed on all return paths.
		stream.reader = reader
		stream.info = types.BlobInfo{ // FIXME? Should we preserve more data in src.info?
			Digest: "",
			Size:   -1,
		}
		return &bpCompressionStepData{
			operation:              types.Compress,
			uploadedAlgorithm:      uploadedAlgorithm,
			uploadedAnnotations:    annotations,
			srcCompressorName:      detected.srcCompressorName,
			uploadedCompressorName: uploadedAlgorithm.Name(),
			closers:                []io.Closer{reader},
		}, nil
	}
	return nil, nil
}

// bpcRecompressCompressed checks if we should be recompressing a compressed input to another format, and returns a *bpCompressionStepData if so.
func (ic *imageCopier) bpcRecompressCompressed(stream *sourceStream, detected bpDetectCompressionStepData) (*bpCompressionStepData, error) {
	if ic.c.dest.DesiredLayerCompression() == types.Compress && detected.isCompressed &&
		ic.c.compressionFormat != nil && ic.c.compressionFormat.Name() != detected.format.Name() {
		// When the blob is compressed, but the desired format is different, it first needs to be decompressed and finally
		// re-compressed using the desired format.
		logrus.Debugf("Blob will be converted")

		decompressed, err := detected.decompressor(stream.reader)
		if err != nil {
			return nil, err
		}
		succeeded := false
		defer func() {
			if !succeeded {
				decompressed.Close()
			}
		}()

		recompressed, annotations := ic.c.compressedStream(decompressed, *ic.c.compressionFormat)
		// Note: recompressed must be closed on all return paths.
		stream.reader = recompressed
		stream.info = types.BlobInfo{ // FIXME? Should we preserve more data in src.info?
			Digest: "",
			Size:   -1,
		}
		succeeded = true
		return &bpCompressionStepData{
			operation:              types.PreserveOriginal,
			uploadedAlgorithm:      ic.c.compressionFormat,
			uploadedAnnotations:    annotations,
			srcCompressorName:      detected.srcCompressorName,
			uploadedCompressorName: ic.c.compressionFormat.Name(),
			closers:                []io.Closer{decompressed, recompressed},
		}, nil
	}
	return nil, nil
}

// bpcDecompressCompressed checks if we should be decompressing a compressed input, and returns a *bpCompressionStepData if so.
func (ic *imageCopier) bpcDecompressCompressed(stream *sourceStream, detected bpDetectCompressionStepData) (*bpCompressionStepData, error) {
	if ic.c.dest.DesiredLayerCompression() == types.Decompress && detected.isCompressed {
		logrus.Debugf("Blob will be decompressed")
		s, err := detected.decompressor(stream.reader)
		if err != nil {
			return nil, err
		}
		// Note: s must be closed on all return paths.
		stream.reader = s
		stream.info = types.BlobInfo{ // FIXME? Should we preserve more data in src.info?
			Digest: "",
			Size:   -1,
		}
		return &bpCompressionStepData{
			operation:              types.Decompress,
			uploadedAlgorithm:      nil,
			srcCompressorName:      detected.srcCompressorName,
			uploadedCompressorName: internalblobinfocache.Uncompressed,
			closers:                []io.Closer{s},
		}, nil
	}
	return nil, nil
}

// bpcPreserveOriginal returns a *bpCompressionStepData for not changing the original blob.
func (ic *imageCopier) bpcPreserveOriginal(stream *sourceStream, detected bpDetectCompressionStepData,
	layerCompressionChangeSupported bool) *bpCompressionStepData {
	logrus.Debugf("Using original blob without modification")
	// Remember if the original blob was compressed, and if so how, so that if
	// LayerInfosForCopy() returned something that differs from what was in the
	// source's manifest, and UpdatedImage() needs to call UpdateLayerInfos(),
	// it will be able to correctly derive the MediaType for the copied blob.
	//
	// But don't touch blobs in objects where we can't change compression,
	// so that src.UpdatedImage() doesn't fail; assume that for such blobs
	// LayerInfosForCopy() should not be making any changes in the first place.
	var algorithm *compressiontypes.Algorithm
	if layerCompressionChangeSupported && detected.isCompressed {
		algorithm = &detected.format
	} else {
		algorithm = nil
	}
	return &bpCompressionStepData{
		operation:              types.PreserveOriginal,
		uploadedAlgorithm:      algorithm,
		srcCompressorName:      detected.srcCompressorName,
		uploadedCompressorName: detected.srcCompressorName,
	}
}

// updateCompressionEdits sets *operation, *algorithm and updates *annotations, if necessary.
func (d *bpCompressionStepData) updateCompressionEdits(operation *types.LayerCompression, algorithm **compressiontypes.Algorithm, annotations *map[string]string) {
	*operation = d.operation
	// If we can modify the layer's blob, set the desired algorithm for it to be set in the manifest.
	*algorithm = d.uploadedAlgorithm
	if *annotations == nil {
		*annotations = map[string]string{}
	}
	for k, v := range d.uploadedAnnotations {
		(*annotations)[k] = v
	}
}

// recordValidatedDigestData updates c.blobInfoCache with data about the created uploadedInfo and the original srcInfo.
// This must ONLY be called if all data has been validated by OUR code, and is not coming from third parties.
func (d *bpCompressionStepData) recordValidatedDigestData(c *copier, uploadedInfo types.BlobInfo, srcInfo types.BlobInfo,
	encryptionStep *bpEncryptionStepData, decryptionStep *bpDecryptionStepData) error {
	// Don't record any associations that involve encrypted data. This is a bit crude,
	// some blob substitutions (replacing pulls of encrypted data with local reuse of known decryption outcomes)
	// might be safe, but it's not trivially obvious, so let's be conservative for now.
	// This crude approach also means we don't need to record whether a blob is encrypted
	// in the blob info cache (which would probably be necessary for any more complex logic),
	// and the simplicity is attractive.
	if !encryptionStep.encrypting && !decryptionStep.decrypting {
		// If d.operation != types.PreserveOriginal, we now have two reliable digest values:
		// srcInfo.Digest describes the pre-d.operation input, verified by digestingReader
		// uploadedInfo.Digest describes the post-d.operation output, computed by PutBlob
		// (because stream.info.Digest == "", this must have been computed afresh).
		switch d.operation {
		case types.PreserveOriginal:
			break // Do nothing, we have only one digest and we might not have even verified it.
		case types.Compress:
			c.blobInfoCache.RecordDigestUncompressedPair(uploadedInfo.Digest, srcInfo.Digest)
		case types.Decompress:
			c.blobInfoCache.RecordDigestUncompressedPair(srcInfo.Digest, uploadedInfo.Digest)
		default:
			return fmt.Errorf("Internal error: Unexpected d.operation value %#v", d.operation)
		}
	}
	if d.uploadedCompressorName != "" && d.uploadedCompressorName != internalblobinfocache.UnknownCompression {
		c.blobInfoCache.RecordDigestCompressorName(uploadedInfo.Digest, d.uploadedCompressorName)
	}
	if srcInfo.Digest != "" && d.srcCompressorName != "" && d.srcCompressorName != internalblobinfocache.UnknownCompression {
		c.blobInfoCache.RecordDigestCompressorName(srcInfo.Digest, d.srcCompressorName)
	}
	return nil
}

// close closes objects that carry state throughout the compression/decompression operation.
func (d *bpCompressionStepData) close() {
	for _, c := range d.closers {
		c.Close()
	}
}

// doCompression reads all input from src and writes its compressed equivalent to dest.
func doCompression(dest io.Writer, src io.Reader, metadata map[string]string, compressionFormat compressiontypes.Algorithm, compressionLevel *int) error {
	compressor, err := compression.CompressStreamWithMetadata(dest, metadata, compressionFormat, compressionLevel)
	if err != nil {
		return err
	}

	buf := make([]byte, compressionBufferSize)

	_, err = io.CopyBuffer(compressor, src, buf) // Sets err to nil, i.e. causes dest.Close()
	if err != nil {
		compressor.Close()
		return err
	}

	return compressor.Close()
}

// compressGoroutine reads all input from src and writes its compressed equivalent to dest.
func (c *copier) compressGoroutine(dest *io.PipeWriter, src io.Reader, metadata map[string]string, compressionFormat compressiontypes.Algorithm) {
	err := errors.New("Internal error: unexpected panic in compressGoroutine")
	defer func() { // Note that this is not the same as {defer dest.CloseWithError(err)}; we need err to be evaluated lazily.
		_ = dest.CloseWithError(err) // CloseWithError(nil) is equivalent to Close(), always returns nil
	}()

	err = doCompression(dest, src, metadata, compressionFormat, c.compressionLevel)
}

// compressedStream returns a stream with the contents of the input reader compressed using algorithm, and a metadata map.
// The caller must close the returned reader.
// AFTER the stream is consumed, metadata will be updated with annotations to use on the data.
func (c *copier) compressedStream(reader io.Reader, algorithm compressiontypes.Algorithm) (io.ReadCloser, map[string]string) {
	pipeReader, pipeWriter := io.Pipe()
	annotations := map[string]string{}
	// If this fails while writing data, it will do pipeWriter.CloseWithError(); if it fails otherwise,
	// e.g. because we have exited and due to pipeReader.Close() above further writing to the pipe has failed,
	// we don't care.
	go c.compressGoroutine(pipeWriter, reader, annotations, algorithm) // Closes pipeWriter
	return pipeReader, annotations
}

File diff suppressed because it is too large.


@@ -1,11 +1,11 @@
 package copy
 
 import (
+	"fmt"
 	"hash"
 	"io"
 
 	digest "github.com/opencontainers/go-digest"
-	"github.com/pkg/errors"
 )
 
 type digestingReader struct {
@@ -23,11 +23,11 @@ type digestingReader struct {
 func newDigestingReader(source io.Reader, expectedDigest digest.Digest) (*digestingReader, error) {
 	var digester digest.Digester
 	if err := expectedDigest.Validate(); err != nil {
-		return nil, errors.Errorf("Invalid digest specification %s", expectedDigest)
+		return nil, fmt.Errorf("Invalid digest specification %s", expectedDigest)
 	}
 	digestAlgorithm := expectedDigest.Algorithm()
 	if !digestAlgorithm.Available() {
-		return nil, errors.Errorf("Invalid digest specification %s: unsupported digest algorithm %s", expectedDigest, digestAlgorithm)
+		return nil, fmt.Errorf("Invalid digest specification %s: unsupported digest algorithm %s", expectedDigest, digestAlgorithm)
 	}
 	digester = digestAlgorithm.Digester()
@@ -47,14 +47,14 @@ func (d *digestingReader) Read(p []byte) (int, error) {
 			// Coverage: This should not happen, the hash.Hash interface requires
 			// d.digest.Write to never return an error, and the io.Writer interface
 			// requires n2 == len(input) if no error is returned.
-			return 0, errors.Wrapf(err, "updating digest during verification: %d vs. %d", n2, n)
+			return 0, fmt.Errorf("updating digest during verification: %d vs. %d: %w", n2, n, err)
 		}
 	}
 	if err == io.EOF {
 		actualDigest := d.digester.Digest()
 		if actualDigest != d.expectedDigest {
 			d.validationFailed = true
-			return 0, errors.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
+			return 0, fmt.Errorf("Digest did not match, expected %s, got %s", d.expectedDigest, actualDigest)
 		}
 		d.validationSucceeded = true
 	}


@ -1,24 +0,0 @@
package copy
import (
"strings"
"github.com/containers/image/v5/types"
)
// isOciEncrypted returns a bool indicating if a mediatype is encrypted
// This function will be moved to be part of OCI spec when adopted.
func isOciEncrypted(mediatype string) bool {
return strings.HasSuffix(mediatype, "+encrypted")
}
// isEncrypted checks if an image is encrypted
func isEncrypted(i types.Image) bool {
layers := i.LayerInfos()
for _, l := range layers {
if isOciEncrypted(l.MediaType) {
return true
}
}
return false
}


@@ -0,0 +1,129 @@
package copy

import (
	"fmt"
	"strings"

	"github.com/containers/image/v5/types"
	"github.com/containers/ocicrypt"
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// isOciEncrypted returns a bool indicating if a mediatype is encrypted
// This function will be moved to be part of OCI spec when adopted.
func isOciEncrypted(mediatype string) bool {
	return strings.HasSuffix(mediatype, "+encrypted")
}

// isEncrypted checks if an image is encrypted
func isEncrypted(i types.Image) bool {
	layers := i.LayerInfos()
	for _, l := range layers {
		if isOciEncrypted(l.MediaType) {
			return true
		}
	}
	return false
}

// bpDecryptionStepData contains data that the copy pipeline needs about the decryption step.
type bpDecryptionStepData struct {
	decrypting bool // We are actually decrypting the stream
}

// blobPipelineDecryptionStep updates *stream to decrypt it, if necessary.
// srcInfo is only used for error messages.
// Returns data for other steps; the caller should eventually use updateCryptoOperation.
func (c *copier) blobPipelineDecryptionStep(stream *sourceStream, srcInfo types.BlobInfo) (*bpDecryptionStepData, error) {
	if isOciEncrypted(stream.info.MediaType) && c.ociDecryptConfig != nil {
		desc := imgspecv1.Descriptor{
			Annotations: stream.info.Annotations,
		}
		reader, decryptedDigest, err := ocicrypt.DecryptLayer(c.ociDecryptConfig, stream.reader, desc, false)
		if err != nil {
			return nil, fmt.Errorf("decrypting layer %s: %w", srcInfo.Digest, err)
		}

		stream.reader = reader
		stream.info.Digest = decryptedDigest
		stream.info.Size = -1
		for k := range stream.info.Annotations {
			if strings.HasPrefix(k, "org.opencontainers.image.enc") {
				delete(stream.info.Annotations, k)
			}
		}
		return &bpDecryptionStepData{
			decrypting: true,
		}, nil
	}
	return &bpDecryptionStepData{
		decrypting: false,
	}, nil
}

// updateCryptoOperation sets *operation, if necessary.
func (d *bpDecryptionStepData) updateCryptoOperation(operation *types.LayerCrypto) {
	if d.decrypting {
		*operation = types.Decrypt
	}
}

// bpEncryptionStepData contains data that the copy pipeline needs about the encryption step.
type bpEncryptionStepData struct {
	encrypting bool // We are actually encrypting the stream
	finalizer  ocicrypt.EncryptLayerFinalizer
}

// blobPipelineEncryptionStep updates *stream to encrypt it, if required by toEncrypt.
// srcInfo is primarily used for error messages.
// Returns data for other steps; the caller should eventually call updateCryptoOperationAndAnnotations.
func (c *copier) blobPipelineEncryptionStep(stream *sourceStream, toEncrypt bool, srcInfo types.BlobInfo,
	decryptionStep *bpDecryptionStepData) (*bpEncryptionStepData, error) {
	if toEncrypt && !isOciEncrypted(srcInfo.MediaType) && c.ociEncryptConfig != nil {
		var annotations map[string]string
		if !decryptionStep.decrypting {
			annotations = srcInfo.Annotations
		}
		desc := imgspecv1.Descriptor{
			MediaType:   srcInfo.MediaType,
			Digest:      srcInfo.Digest,
			Size:        srcInfo.Size,
			Annotations: annotations,
		}
		reader, finalizer, err := ocicrypt.EncryptLayer(c.ociEncryptConfig, stream.reader, desc)
		if err != nil {
			return nil, fmt.Errorf("encrypting blob %s: %w", srcInfo.Digest, err)
		}

		stream.reader = reader
		stream.info.Digest = ""
		stream.info.Size = -1
		return &bpEncryptionStepData{
			encrypting: true,
			finalizer:  finalizer,
		}, nil
	}
	return &bpEncryptionStepData{
		encrypting: false,
	}, nil
}

// updateCryptoOperationAndAnnotations sets *operation and updates *annotations, if necessary.
func (d *bpEncryptionStepData) updateCryptoOperationAndAnnotations(operation *types.LayerCrypto, annotations *map[string]string) error {
	if !d.encrypting {
		return nil
	}

	encryptAnnotations, err := d.finalizer()
	if err != nil {
		return fmt.Errorf("Unable to finalize encryption: %w", err)
	}
	*operation = types.Encrypt
	if *annotations == nil {
		*annotations = map[string]string{}
	}
	for k, v := range encryptAnnotations {
		(*annotations)[k] = v
	}
	return nil
}


@@ -2,11 +2,12 @@ package copy
 
 import (
 	"context"
+	"errors"
+	"fmt"
 	"strings"
 
 	"github.com/containers/image/v5/manifest"
 	"github.com/containers/image/v5/types"
-	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 )
@@ -38,31 +39,50 @@ func (os *orderedSet) append(s string) {
 	}
 }
 
-// determineManifestConversion updates ic.manifestUpdates to convert manifest to a supported MIME type, if necessary and ic.canModifyManifest.
-// Note that the conversion will only happen later, through ic.src.UpdatedImage
-// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
-// and a list of other possible alternatives, in order.
-func (ic *imageCopier) determineManifestConversion(ctx context.Context, destSupportedManifestMIMETypes []string, forceManifestMIMEType string, requiresOciEncryption bool) (string, []string, error) {
-	_, srcType, err := ic.src.Manifest(ctx)
-	if err != nil { // This should have been cached?!
-		return "", nil, errors.Wrap(err, "reading manifest")
-	}
+// determineManifestConversionInputs contains the inputs for determineManifestConversion.
+type determineManifestConversionInputs struct {
+	srcMIMEType                    string   // MIME type of the input manifest
+	destSupportedManifestMIMETypes []string // MIME types supported by the destination, per types.ImageDestination.SupportedManifestMIMETypes()
+	forceManifestMIMEType          string   // User's choice of forced manifest MIME type
+	requiresOCIEncryption          bool     // Restrict to manifest formats that can support OCI encryption
+	cannotModifyManifestReason     string   // The reason the manifest cannot be modified, or an empty string if it can
+}
+
+// manifestConversionPlan contains the decisions made by determineManifestConversion.
+type manifestConversionPlan struct {
+	// The preferred manifest MIME type (whether we are converting to it or using it unmodified).
+	// We compute this only to show it in error messages; without having to add this context
+	// in an error message, we would be happy enough to know only that no conversion is needed.
+	preferredMIMEType                string
+	preferredMIMETypeNeedsConversion bool     // True if using preferredMIMEType requires a conversion step.
+	otherMIMETypeCandidates          []string // Other possible alternatives, in order
+}
+
+// determineManifestConversion returns a plan for what formats, and possibly conversions, to use based on in.
+func determineManifestConversion(in determineManifestConversionInputs) (manifestConversionPlan, error) {
+	srcType := in.srcMIMEType
 	normalizedSrcType := manifest.NormalizedMIMEType(srcType)
 	if srcType != normalizedSrcType {
 		logrus.Debugf("Source manifest MIME type %s, treating it as %s", srcType, normalizedSrcType)
 		srcType = normalizedSrcType
 	}
-	if forceManifestMIMEType != "" {
-		destSupportedManifestMIMETypes = []string{forceManifestMIMEType}
+
+	destSupportedManifestMIMETypes := in.destSupportedManifestMIMETypes
+	if in.forceManifestMIMEType != "" {
+		destSupportedManifestMIMETypes = []string{in.forceManifestMIMEType}
 	}
-	if len(destSupportedManifestMIMETypes) == 0 && (!requiresOciEncryption || manifest.MIMETypeSupportsEncryption(srcType)) {
-		return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
+
+	if len(destSupportedManifestMIMETypes) == 0 && (!in.requiresOCIEncryption || manifest.MIMETypeSupportsEncryption(srcType)) {
+		return manifestConversionPlan{ // Anything goes; just use the original as is, do not try any conversions.
+			preferredMIMEType:       srcType,
+			otherMIMETypeCandidates: []string{},
+		}, nil
 	}
+
 	supportedByDest := map[string]struct{}{}
 	for _, t := range destSupportedManifestMIMETypes {
-		if !requiresOciEncryption || manifest.MIMETypeSupportsEncryption(t) {
+		if !in.requiresOCIEncryption || manifest.MIMETypeSupportsEncryption(t) {
 			supportedByDest[t] = struct{}{}
 		}
 	}
@@ -79,13 +99,16 @@ func (ic *imageCopier) determineManifestConversion(ctx context.Context, destSupp
 	if _, ok := supportedByDest[srcType]; ok {
 		prioritizedTypes.append(srcType)
 	}
-	if ic.cannotModifyManifestReason != "" {
+	if in.cannotModifyManifestReason != "" {
 		// We could also drop this check and have the caller
 		// make the choice; it is already doing that to an extent, to improve error
 		// messages. But it is nice to hide the “if we can't modify, do no conversion”
 		// special case in here; the caller can then worry (or not) only about a good UI.
 		logrus.Debugf("We can't modify the manifest, hoping for the best...")
-		return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
+		return manifestConversionPlan{ // Take our chances - FIXME? Or should we fail without trying?
+			preferredMIMEType:       srcType,
+			otherMIMETypeCandidates: []string{},
+		}, nil
 	}
 
 	// Then use our list of preferred types.
@@ -102,15 +125,17 @@ func (ic *imageCopier) determineManifestConversion(ctx context.Context, destSupp
 	logrus.Debugf("Manifest has MIME type %s, ordered candidate list [%s]", srcType, strings.Join(prioritizedTypes.list, ", "))
 	if len(prioritizedTypes.list) == 0 { // Coverage: destSupportedManifestMIMETypes is not empty (or we would have exited in the “Anything goes” case above), so this should never happen.
-		return "", nil, errors.New("Internal error: no candidate MIME types")
+		return manifestConversionPlan{}, errors.New("Internal error: no candidate MIME types")
 	}
-	preferredType := prioritizedTypes.list[0]
-	if preferredType != srcType {
-		ic.manifestUpdates.ManifestMIMEType = preferredType
-	} else {
+	res := manifestConversionPlan{
+		preferredMIMEType:       prioritizedTypes.list[0],
+		otherMIMETypeCandidates: prioritizedTypes.list[1:],
+	}
+	res.preferredMIMETypeNeedsConversion = res.preferredMIMEType != srcType
+	if !res.preferredMIMETypeNeedsConversion {
 		logrus.Debugf("... will first try using the original manifest unmodified")
 	}
-	return preferredType, prioritizedTypes.list[1:], nil
+	return res, nil
 }
 
 // isMultiImage returns true if img is a list of images
@@ -156,7 +181,7 @@ func (c *copier) determineListConversion(currentListMIMEType string, destSupport
 	logrus.Debugf("Manifest list has MIME type %s, ordered candidate list [%s]", currentListMIMEType, strings.Join(destSupportedMIMETypes, ", "))
 	if len(prioritizedTypes.list) == 0 {
-		return "", nil, errors.Errorf("destination does not support any supported manifest list types (%v)", manifest.SupportedListMIMETypes)
+		return "", nil, fmt.Errorf("destination does not support any supported manifest list types (%v)", manifest.SupportedListMIMETypes)
 	}
 	selectedType := prioritizedTypes.list[0]
 	otherSupportedTypes := prioritizedTypes.list[1:]


@@ -38,7 +38,8 @@ type progressBar struct {
 }
 
 // createProgressBar creates a progressBar in pool. Note that if the copier's reportWriter
-// is io.Discard, the progress bar's output will be discarded
+// is io.Discard, the progress bar's output will be discarded. Callers may call printCopyInfo()
+// to print a single line instead.
 //
 // NOTE: Every progress bar created within a progress pool must either successfully
 // complete or be aborted, or pool.Wait() will hang. That is typically done
@@ -95,15 +96,21 @@ func (c *copier) createProgressBar(pool *mpb.Progress, partial bool, info types.
 			),
 		)
 	}
-	if c.progressOutput == io.Discard {
-		c.Printf("Copying %s %s\n", kind, info.Digest)
-	}
 	return &progressBar{
 		Bar:          bar,
 		originalSize: info.Size,
 	}
 }
 
+// printCopyInfo prints a "Copying ..." message on the copier if the output is
+// set to `io.Discard`. In that case, the progress bars won't be rendered but
+// we still want to indicate when blobs and configs are copied.
+func (c *copier) printCopyInfo(kind string, info types.BlobInfo) {
+	if c.progressOutput == io.Discard {
+		c.Printf("Copying %s %s\n", kind, info.Digest)
+	}
+}
+
 // mark100PercentComplete marks the progress bar as 100% complete;
 // it may do so by possibly advancing the current state if it is below the known total.
 func (bar *progressBar) mark100PercentComplete() {


@@ -1,38 +1,89 @@
 package copy
 
 import (
 	"context"
+	"fmt"
 
 	"github.com/containers/image/v5/docker/reference"
+	"github.com/containers/image/v5/internal/private"
+	internalsig "github.com/containers/image/v5/internal/signature"
 	"github.com/containers/image/v5/signature"
+	"github.com/containers/image/v5/signature/sigstore"
 	"github.com/containers/image/v5/transports"
-	"github.com/pkg/errors"
 )
 
+// sourceSignatures returns signatures from unparsedSource based on options,
+// and verifies that they can be used (to avoid copying a large image when we
+// can tell in advance that it would ultimately fail)
+func (c *copier) sourceSignatures(ctx context.Context, unparsed private.UnparsedImage, options *Options,
+	gettingSignaturesMessage, checkingDestMessage string) ([]internalsig.Signature, error) {
+	var sigs []internalsig.Signature
+	if options.RemoveSignatures {
+		sigs = []internalsig.Signature{}
+	} else {
+		c.Printf("%s\n", gettingSignaturesMessage)
+		s, err := unparsed.UntrustedSignatures(ctx)
+		if err != nil {
+			return nil, fmt.Errorf("reading signatures: %w", err)
+		}
+		sigs = s
+	}
+	if len(sigs) != 0 {
+		c.Printf("%s\n", checkingDestMessage)
+		if err := c.dest.SupportsSignatures(ctx); err != nil {
+			return nil, fmt.Errorf("Can not copy signatures to %s: %w", transports.ImageName(c.dest.Reference()), err)
+		}
+	}
+	return sigs, nil
+}
+
 // createSignature creates a new signature of manifest using keyIdentity.
-func (c *copier) createSignature(manifest []byte, keyIdentity string, passphrase string, identity reference.Named) ([]byte, error) {
+func (c *copier) createSignature(manifest []byte, keyIdentity string, passphrase string, identity reference.Named) (internalsig.Signature, error) {
 	mech, err := signature.NewGPGSigningMechanism()
 	if err != nil {
-		return nil, errors.Wrap(err, "initializing GPG")
+		return nil, fmt.Errorf("initializing GPG: %w", err)
 	}
 	defer mech.Close()
 	if err := mech.SupportsSigning(); err != nil {
-		return nil, errors.Wrap(err, "Signing not supported")
+		return nil, fmt.Errorf("Signing not supported: %w", err)
 	}
 
 	if identity != nil {
 		if reference.IsNameOnly(identity) {
-			return nil, errors.Errorf("Sign identity must be a fully specified reference %s", identity)
+			return nil, fmt.Errorf("Sign identity must be a fully specified reference %s", identity)
 		}
 	} else {
 		identity = c.dest.Reference().DockerReference()
 		if identity == nil {
-			return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(c.dest.Reference()))
+			return nil, fmt.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(c.dest.Reference()))
 		}
 	}
 
-	c.Printf("Signing manifest\n")
+	c.Printf("Signing manifest using simple signing\n")
 	newSig, err := signature.SignDockerManifestWithOptions(manifest, identity.String(), mech, keyIdentity, &signature.SignOptions{Passphrase: passphrase})
 	if err != nil {
-		return nil, errors.Wrap(err, "creating signature")
+		return nil, fmt.Errorf("creating signature: %w", err)
 	}
-	return newSig, nil
+	return internalsig.SimpleSigningFromBlob(newSig), nil
 }
+
+// createSigstoreSignature creates a new sigstore signature of manifest using privateKeyFile and identity.
+func (c *copier) createSigstoreSignature(manifest []byte, privateKeyFile string, passphrase []byte, identity reference.Named) (internalsig.Signature, error) {
+	if identity != nil {
+		if reference.IsNameOnly(identity) {
+			return nil, fmt.Errorf("Sign identity must be a fully specified reference %s", identity.String())
+		}
+	} else {
+		identity = c.dest.Reference().DockerReference()
+		if identity == nil {
+			return nil, fmt.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(c.dest.Reference()))
+		}
+	}
+
+	c.Printf("Signing manifest using a sigstore signature\n")
+	newSig, err := sigstore.SignDockerManifestWithPrivateKeyFileUnstable(manifest, identity, privateKeyFile, passphrase)
+	if err != nil {
+		return nil, fmt.Errorf("creating signature: %w", err)
+	}
+	return newSig, nil
+}
}