go.mod: vendor aws-sdk-v2

Removes the v1 AWS SDK.
This commit is contained in:
Sanne Raymaekers 2024-08-06 15:23:28 +02:00
parent f27f9a2f80
commit 5e3bc8a705
1523 changed files with 628224 additions and 358932 deletions


@@ -0,0 +1,140 @@
# v1.11.3 (2024-06-28)
* No change notes available for this release.
# v1.11.2 (2024-03-29)
* No change notes available for this release.
# v1.11.1 (2024-02-21)
* No change notes available for this release.
# v1.11.0 (2024-02-13)
* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
# v1.10.4 (2023-12-07)
* No change notes available for this release.
# v1.10.3 (2023-11-30)
* No change notes available for this release.
# v1.10.2 (2023-11-29)
* No change notes available for this release.
# v1.10.1 (2023-11-15)
* No change notes available for this release.
# v1.10.0 (2023-10-31)
* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
# v1.9.15 (2023-10-06)
* No change notes available for this release.
# v1.9.14 (2023-08-18)
* No change notes available for this release.
# v1.9.13 (2023-08-07)
* No change notes available for this release.
# v1.9.12 (2023-07-31)
* No change notes available for this release.
# v1.9.11 (2022-12-02)
* No change notes available for this release.
# v1.9.10 (2022-10-24)
* No change notes available for this release.
# v1.9.9 (2022-09-14)
* No change notes available for this release.
# v1.9.8 (2022-09-02)
* No change notes available for this release.
# v1.9.7 (2022-08-31)
* No change notes available for this release.
# v1.9.6 (2022-08-29)
* No change notes available for this release.
# v1.9.5 (2022-08-11)
* No change notes available for this release.
# v1.9.4 (2022-08-09)
* No change notes available for this release.
# v1.9.3 (2022-06-29)
* No change notes available for this release.
# v1.9.2 (2022-06-07)
* No change notes available for this release.
# v1.9.1 (2022-03-24)
* No change notes available for this release.
# v1.9.0 (2022-03-08)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.8.0 (2022-02-24)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.7.0 (2022-01-14)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.6.0 (2022-01-07)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.5.0 (2021-11-06)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.4.0 (2021-10-21)
* **Feature**: Updated to latest version
# v1.3.0 (2021-08-27)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.2.2 (2021-08-04)
* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
# v1.2.1 (2021-07-15)
* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
# v1.2.0 (2021-06-25)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
# v1.1.0 (2021-05-14)
* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,176 @@
package acceptencoding
import (
"compress/gzip"
"context"
"fmt"
"io"
"github.com/aws/smithy-go"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
const acceptEncodingHeaderKey = "Accept-Encoding"
const contentEncodingHeaderKey = "Content-Encoding"
// AddAcceptEncodingGzipOptions provides the options for the
// AddAcceptEncodingGzip middleware setup.
type AddAcceptEncodingGzipOptions struct {
Enable bool
}
// AddAcceptEncodingGzip explicitly adds handling for accept-encoding GZIP
// middleware to the operation stack. This allows checksums to be correctly
// computed without disabling GZIP support.
func AddAcceptEncodingGzip(stack *middleware.Stack, options AddAcceptEncodingGzipOptions) error {
if options.Enable {
if err := stack.Finalize.Add(&EnableGzip{}, middleware.Before); err != nil {
return err
}
if err := stack.Deserialize.Insert(&DecompressGzip{}, "OperationDeserializer", middleware.After); err != nil {
return err
}
return nil
}
return stack.Finalize.Add(&DisableGzip{}, middleware.Before)
}
// DisableGzip provides a middleware that prevents the underlying HTTP
// client from automatically enabling gzip content-encoding decompression.
type DisableGzip struct{}
// ID returns the id for the middleware.
func (*DisableGzip) ID() string {
return "DisableAcceptEncodingGzip"
}
// HandleFinalize implements the FinalizeMiddleware interface.
func (*DisableGzip) HandleFinalize(
ctx context.Context, input middleware.FinalizeInput, next middleware.FinalizeHandler,
) (
output middleware.FinalizeOutput, metadata middleware.Metadata, err error,
) {
req, ok := input.Request.(*smithyhttp.Request)
if !ok {
return output, metadata, &smithy.SerializationError{
Err: fmt.Errorf("unknown request type %T", input.Request),
}
}
// Explicitly set Accept-Encoding to "identity" so the HTTP client will not
// negotiate gzip or auto-extract compressed content.
req.Header.Set(acceptEncodingHeaderKey, "identity")
return next.HandleFinalize(ctx, input)
}
// EnableGzip provides a middleware to enable support for
// gzip responses, with manual decompression. This prevents the underlying HTTP
// client from performing the gzip decompression automatically.
type EnableGzip struct{}
// ID returns the id for the middleware.
func (*EnableGzip) ID() string {
return "AcceptEncodingGzip"
}
// HandleFinalize implements the FinalizeMiddleware interface.
func (*EnableGzip) HandleFinalize(
ctx context.Context, input middleware.FinalizeInput, next middleware.FinalizeHandler,
) (
output middleware.FinalizeOutput, metadata middleware.Metadata, err error,
) {
req, ok := input.Request.(*smithyhttp.Request)
if !ok {
return output, metadata, &smithy.SerializationError{
Err: fmt.Errorf("unknown request type %T", input.Request),
}
}
// Explicitly enable gzip support; this prevents the http client from
// auto-extracting the zipped content.
req.Header.Set(acceptEncodingHeaderKey, "gzip")
return next.HandleFinalize(ctx, input)
}
// DecompressGzip provides the middleware for decompressing a gzip
// response from the service.
type DecompressGzip struct{}
// ID returns the id for the middleware.
func (*DecompressGzip) ID() string {
return "DecompressGzip"
}
// HandleDeserialize implements the DeserializeMiddleware interface.
func (*DecompressGzip) HandleDeserialize(
ctx context.Context, input middleware.DeserializeInput, next middleware.DeserializeHandler,
) (
output middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
output, metadata, err = next.HandleDeserialize(ctx, input)
if err != nil {
return output, metadata, err
}
resp, ok := output.RawResponse.(*smithyhttp.Response)
if !ok {
return output, metadata, &smithy.DeserializationError{
Err: fmt.Errorf("unknown response type %T", output.RawResponse),
}
}
if v := resp.Header.Get(contentEncodingHeaderKey); v != "gzip" {
return output, metadata, err
}
// Clear content length since it will no longer be valid once the response
// body is decompressed.
resp.Header.Del("Content-Length")
resp.ContentLength = -1
resp.Body = wrapGzipReader(resp.Body)
return output, metadata, err
}
type gzipReader struct {
reader io.ReadCloser
gzip *gzip.Reader
}
func wrapGzipReader(reader io.ReadCloser) *gzipReader {
return &gzipReader{
reader: reader,
}
}
// Read wraps the gzip reader around the underlying io.Reader to extract the
// response bytes on the fly.
func (g *gzipReader) Read(b []byte) (n int, err error) {
if g.gzip == nil {
g.gzip, err = gzip.NewReader(g.reader)
if err != nil {
g.gzip = nil // ensure uninitialized gzip value isn't used in close.
return 0, fmt.Errorf("failed to decompress gzip response, %w", err)
}
}
return g.gzip.Read(b)
}
func (g *gzipReader) Close() error {
if g.gzip == nil {
return nil
}
if err := g.gzip.Close(); err != nil {
g.reader.Close()
return fmt.Errorf("failed to decompress gzip response, %w", err)
}
return g.reader.Close()
}


@@ -0,0 +1,22 @@
/*
Package acceptencoding provides customizations associated with Accept Encoding Header.
# Accept encoding gzip
The Go HTTP client automatically supports accept-encoding and content-encoding
gzip by default. This default behavior is not desired by the SDK, because it
prevents validating the response body's checksum. To prevent this, the SDK must
manually control usage of content-encoding gzip.
To control content-encoding, the SDK must always set the `Accept-Encoding`
header to a value. This prevents the HTTP client from using gzip automatically.
When gzip is enabled on the API client, the SDK's customization will control
decompressing the gzip data in order to not break the checksum validation. When
gzip is disabled, the API client will disable gzip, preventing the HTTP
client's default behavior.
An `EnableAcceptEncodingGzip` option may or may not be present, depending on
the client using this middleware. When present, the option can be used to
enable automatic gzip decompression by the SDK.
*/
package acceptencoding


@@ -0,0 +1,6 @@
// Code generated by internal/repotools/cmd/updatemodulemeta DO NOT EDIT.
package acceptencoding
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.11.3"


@@ -0,0 +1,235 @@
# v1.3.5 (2024-03-07)
* **Bug Fix**: Remove dependency on go-cmp.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.4 (2024-03-05)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.3 (2024-03-04)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.2 (2024-02-23)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.1 (2024-02-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.0 (2024-02-13)
* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.10 (2024-01-04)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.9 (2023-12-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.8 (2023-12-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.7 (2023-11-30)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.6 (2023-11-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.5 (2023-11-28.2)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.4 (2023-11-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.3 (2023-11-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.2 (2023-11-09)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.1 (2023-11-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.0 (2023-10-31)
* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.38 (2023-10-12)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.37 (2023-10-06)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.36 (2023-08-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.35 (2023-08-18)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.34 (2023-08-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.33 (2023-08-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.32 (2023-07-31)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.31 (2023-07-28)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.30 (2023-07-13)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.29 (2023-06-13)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.28 (2023-04-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.27 (2023-04-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.26 (2023-03-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.25 (2023-03-10)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.24 (2023-02-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.23 (2023-02-03)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.22 (2022-12-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.21 (2022-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.20 (2022-10-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.19 (2022-10-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.18 (2022-09-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.17 (2022-09-14)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.16 (2022-09-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.15 (2022-08-31)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.14 (2022-08-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.13 (2022-08-11)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.12 (2022-08-09)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.11 (2022-08-08)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.10 (2022-08-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.9 (2022-07-05)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.8 (2022-06-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.7 (2022-06-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.6 (2022-05-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.5 (2022-04-27)
* **Bug Fix**: Fixes a bug that could cause the SigV4 payload hash to be incorrectly encoded, leading to signing errors.
# v1.1.4 (2022-04-25)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.3 (2022-03-30)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.2 (2022-03-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.1 (2022-03-23)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.0 (2022-03-08)
* **Feature**: Updates the SDK's checksum validation logic to require opt-in to output response payload validation. The SDK was always performing output response payload checksum validation, not respecting the output validation model option. Fixes [#1606](https://github.com/aws/aws-sdk-go-v2/issues/1606)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.0.0 (2022-02-24)
* **Release**: New module for computing checksums
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,323 @@
package checksum
import (
"crypto/md5"
"crypto/sha1"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"fmt"
"hash"
"hash/crc32"
"io"
"strings"
"sync"
)
// Algorithm represents the checksum algorithms supported
type Algorithm string
// Enumeration values for supported checksum Algorithms.
const (
// AlgorithmCRC32C represents CRC32C hash algorithm
AlgorithmCRC32C Algorithm = "CRC32C"
// AlgorithmCRC32 represents CRC32 hash algorithm
AlgorithmCRC32 Algorithm = "CRC32"
// AlgorithmSHA1 represents SHA1 hash algorithm
AlgorithmSHA1 Algorithm = "SHA1"
// AlgorithmSHA256 represents SHA256 hash algorithm
AlgorithmSHA256 Algorithm = "SHA256"
)
var supportedAlgorithms = []Algorithm{
AlgorithmCRC32C,
AlgorithmCRC32,
AlgorithmSHA1,
AlgorithmSHA256,
}
func (a Algorithm) String() string { return string(a) }
// ParseAlgorithm attempts to parse the provided value into a checksum
// algorithm, matching case-insensitively. Returns the matched algorithm, or an
// error if no algorithm matched.
func ParseAlgorithm(v string) (Algorithm, error) {
for _, a := range supportedAlgorithms {
if strings.EqualFold(string(a), v) {
return a, nil
}
}
return "", fmt.Errorf("unknown checksum algorithm, %v", v)
}
// FilterSupportedAlgorithms filters the set of algorithms, returning a slice
// of algorithms that are supported.
func FilterSupportedAlgorithms(vs []string) []Algorithm {
found := map[Algorithm]struct{}{}
supported := make([]Algorithm, 0, len(supportedAlgorithms))
for _, v := range vs {
for _, a := range supportedAlgorithms {
// Only consider algorithms that are supported
if !strings.EqualFold(v, string(a)) {
continue
}
// Ignore duplicate algorithms in list.
if _, ok := found[a]; ok {
continue
}
supported = append(supported, a)
found[a] = struct{}{}
}
}
return supported
}
// NewAlgorithmHash returns a hash.Hash for the checksum algorithm. Error is
// returned if the algorithm is unknown.
func NewAlgorithmHash(v Algorithm) (hash.Hash, error) {
switch v {
case AlgorithmSHA1:
return sha1.New(), nil
case AlgorithmSHA256:
return sha256.New(), nil
case AlgorithmCRC32:
return crc32.NewIEEE(), nil
case AlgorithmCRC32C:
return crc32.New(crc32.MakeTable(crc32.Castagnoli)), nil
default:
return nil, fmt.Errorf("unknown checksum algorithm, %v", v)
}
}
// AlgorithmChecksumLength returns the length of the algorithm's checksum in
// bytes. If the algorithm is not known, an error is returned.
func AlgorithmChecksumLength(v Algorithm) (int, error) {
switch v {
case AlgorithmSHA1:
return sha1.Size, nil
case AlgorithmSHA256:
return sha256.Size, nil
case AlgorithmCRC32:
return crc32.Size, nil
case AlgorithmCRC32C:
return crc32.Size, nil
default:
return 0, fmt.Errorf("unknown checksum algorithm, %v", v)
}
}
const awsChecksumHeaderPrefix = "x-amz-checksum-"
// AlgorithmHTTPHeader returns the HTTP header for the algorithm's hash.
func AlgorithmHTTPHeader(v Algorithm) string {
return awsChecksumHeaderPrefix + strings.ToLower(string(v))
}
// base64EncodeHashSum computes the base64 encoded checksum of a given running
// hash. The running hash must already have content written to it. Returns the
// byte slice of the encoded checksum.
func base64EncodeHashSum(h hash.Hash) []byte {
sum := h.Sum(nil)
sum64 := make([]byte, base64.StdEncoding.EncodedLen(len(sum)))
base64.StdEncoding.Encode(sum64, sum)
return sum64
}
// hexEncodeHashSum computes the hex encoded checksum of a given running hash.
// The running hash must already have content written to it. Returns the byte
// slice of the encoded checksum.
func hexEncodeHashSum(h hash.Hash) []byte {
sum := h.Sum(nil)
sumHex := make([]byte, hex.EncodedLen(len(sum)))
hex.Encode(sumHex, sum)
return sumHex
}
// computeMD5Checksum computes base64 MD5 checksum of an io.Reader's contents.
// Returns the byte slice of MD5 checksum and an error.
func computeMD5Checksum(r io.Reader) ([]byte, error) {
h := md5.New()
// Copy errors may be assumed to be from the body.
if _, err := io.Copy(h, r); err != nil {
return nil, fmt.Errorf("failed to compute MD5 hash of reader, %w", err)
}
// Encode the MD5 checksum in base64.
return base64EncodeHashSum(h), nil
}
// computeChecksumReader provides a reader wrapping an underlying io.Reader to
// compute the checksum of the stream's bytes.
type computeChecksumReader struct {
stream io.Reader
algorithm Algorithm
hasher hash.Hash
base64ChecksumLen int
mux sync.RWMutex
lockedChecksum string
lockedErr error
}
// newComputeChecksumReader returns a computeChecksumReader for the stream and
// algorithm specified. Returns error if unable to create the reader, or
// algorithm is unknown.
func newComputeChecksumReader(stream io.Reader, algorithm Algorithm) (*computeChecksumReader, error) {
hasher, err := NewAlgorithmHash(algorithm)
if err != nil {
return nil, err
}
checksumLength, err := AlgorithmChecksumLength(algorithm)
if err != nil {
return nil, err
}
return &computeChecksumReader{
stream: io.TeeReader(stream, hasher),
algorithm: algorithm,
hasher: hasher,
base64ChecksumLen: base64.StdEncoding.EncodedLen(checksumLength),
}, nil
}
// Read wraps the underlying reader. When the underlying reader returns EOF,
// the checksum of the reader will be computed, and can be retrieved with
// ChecksumBase64String.
func (r *computeChecksumReader) Read(p []byte) (int, error) {
n, err := r.stream.Read(p)
if err == nil {
return n, nil
} else if err != io.EOF {
r.mux.Lock()
defer r.mux.Unlock()
r.lockedErr = err
return n, err
}
b := base64EncodeHashSum(r.hasher)
r.mux.Lock()
defer r.mux.Unlock()
r.lockedChecksum = string(b)
return n, err
}
func (r *computeChecksumReader) Algorithm() Algorithm {
return r.algorithm
}
// Base64ChecksumLength returns the base64 encoded length of the checksum for
// algorithm.
func (r *computeChecksumReader) Base64ChecksumLength() int {
return r.base64ChecksumLen
}
// Base64Checksum returns the base64 checksum for the algorithm, or error if
// the underlying reader returned a non-EOF error.
//
// Safe to be called concurrently, but will return an error until after the
// underlying reader returns EOF.
func (r *computeChecksumReader) Base64Checksum() (string, error) {
r.mux.RLock()
defer r.mux.RUnlock()
if r.lockedErr != nil {
return "", r.lockedErr
}
if r.lockedChecksum == "" {
return "", fmt.Errorf(
"checksum not available yet, called before reader returns EOF",
)
}
return r.lockedChecksum, nil
}
// validateChecksumReader implements io.ReadCloser interface. The wrapper
// performs checksum validation when the underlying reader has been fully read.
type validateChecksumReader struct {
originalBody io.ReadCloser
body io.Reader
hasher hash.Hash
algorithm Algorithm
expectChecksum string
}
// newValidateChecksumReader returns a configured io.ReadCloser that performs
// checksum validation when the underlying reader has been fully read.
func newValidateChecksumReader(
body io.ReadCloser,
algorithm Algorithm,
expectChecksum string,
) (*validateChecksumReader, error) {
hasher, err := NewAlgorithmHash(algorithm)
if err != nil {
return nil, err
}
return &validateChecksumReader{
originalBody: body,
body: io.TeeReader(body, hasher),
hasher: hasher,
algorithm: algorithm,
expectChecksum: expectChecksum,
}, nil
}
// Read attempts to read from the underlying stream while also updating the
// running hash. If the underlying stream returns with an EOF error, the
// checksum of the stream will be collected, and compared against the expected
// checksum. If the checksums do not match, an error will be returned.
//
// If a non-EOF error occurs when reading the underlying stream, that error
// will be returned and the checksum for the stream will be discarded.
func (c *validateChecksumReader) Read(p []byte) (n int, err error) {
n, err = c.body.Read(p)
if err == io.EOF {
if checksumErr := c.validateChecksum(); checksumErr != nil {
return n, checksumErr
}
}
return n, err
}
// Close closes the underlying reader, returning any error that occurred in the
// underlying reader.
func (c *validateChecksumReader) Close() (err error) {
return c.originalBody.Close()
}
func (c *validateChecksumReader) validateChecksum() error {
// Compute base64 encoded checksum hash of the payload's read bytes.
v := base64EncodeHashSum(c.hasher)
if e, a := c.expectChecksum, string(v); !strings.EqualFold(e, a) {
return validationError{
Algorithm: c.algorithm, Expect: e, Actual: a,
}
}
return nil
}
type validationError struct {
Algorithm Algorithm
Expect string
Actual string
}
func (v validationError) Error() string {
return fmt.Sprintf("checksum did not match: algorithm %v, expect %v, actual %v",
v.Algorithm, v.Expect, v.Actual)
}
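The exported helpers above (`NewAlgorithmHash`, `AlgorithmHTTPHeader`) combined with `base64EncodeHashSum` turn a payload into a checksum header name and value. Since this package lives under an internal import path, here is a minimal standalone sketch of that flow using only the standard library; the helper names (`newHash`, `checksumHeader`, `base64Checksum`) are illustrative, not the SDK's API:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"hash"
	"hash/crc32"
	"strings"
)

// newHash mirrors NewAlgorithmHash for two of the supported algorithms.
func newHash(algorithm string) (hash.Hash, error) {
	switch strings.ToUpper(algorithm) {
	case "SHA256":
		return sha256.New(), nil
	case "CRC32C":
		return crc32.New(crc32.MakeTable(crc32.Castagnoli)), nil
	default:
		return nil, fmt.Errorf("unknown checksum algorithm, %v", algorithm)
	}
}

// checksumHeader mirrors AlgorithmHTTPHeader: prefix plus lowercased name.
func checksumHeader(algorithm string) string {
	return "x-amz-checksum-" + strings.ToLower(algorithm)
}

// base64Checksum mirrors base64EncodeHashSum: sum the running hash and
// base64-encode the digest.
func base64Checksum(h hash.Hash) string {
	return base64.StdEncoding.EncodeToString(h.Sum(nil))
}

func main() {
	h, err := newHash("SHA256")
	if err != nil {
		panic(err)
	}
	h.Write([]byte("Hello world"))
	fmt.Printf("%s: %s\n", checksumHeader("SHA256"), base64Checksum(h))
}
```

For "Hello world" with SHA256 this reproduces the digest shown in the package's aws-chunked example comment, `ZOyIygCyaOW6GjVnihtTFtIS9PNmskdyMlNKiuyjfzw=`.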


@@ -0,0 +1,389 @@
package checksum
import (
"bytes"
"fmt"
"io"
"strconv"
"strings"
)
const (
crlf = "\r\n"
// https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
defaultChunkLength = 1024 * 64
awsTrailerHeaderName = "x-amz-trailer"
decodedContentLengthHeaderName = "x-amz-decoded-content-length"
contentEncodingHeaderName = "content-encoding"
awsChunkedContentEncodingHeaderValue = "aws-chunked"
trailerKeyValueSeparator = ":"
)
var (
crlfBytes = []byte(crlf)
finalChunkBytes = []byte("0" + crlf)
)
type awsChunkedEncodingOptions struct {
// The total size of the stream. For unsigned encoding this implies that
// there will only be a single chunk containing the underlying payload,
// unless ChunkLength is also specified.
StreamLength int64
// Set of trailer key:value pairs that will be appended to the end of the
// payload after the end chunk has been written.
Trailers map[string]awsChunkedTrailerValue
// The maximum size of each chunk to be sent. The default value of -1 signals
// that the optimal chunk length will be chosen automatically. ChunkLength must
// be at least 8KB.
//
// If ChunkLength and StreamLength are both specified, the stream will be
// broken up into ChunkLength chunks. The encoded length of the aws-chunked
// encoding can still be determined as long as all trailers, if any, have a
// fixed length.
ChunkLength int
}
type awsChunkedTrailerValue struct {
// Function to retrieve the value of the trailer. Will only be called after
// the underlying stream returns EOF error.
Get func() (string, error)
// If the length of the value can be pre-determined, and is constant,
// specify the length. A value of -1 means the length is unknown, or
// cannot be pre-determined.
Length int
}
// awsChunkedEncoding provides a reader that wraps the payload such that the
// payload is read as a single aws-chunked payload. This reader can only be
// used if the content length of the payload is known. Content-Length is used
// as the size of the single payload chunk. The final chunk and trailing
// checksum are appended at the end.
//
// https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html#sigv4-chunked-body-definition
//
// Here is the aws-chunked payload stream as read from the awsChunkedEncoding,
// if the original request stream is "Hello world" and the checksum hash used
// is SHA256:
//
//	b\r\n
//	Hello world\r\n
//	0\r\n
//	x-amz-checksum-sha256:ZOyIygCyaOW6GjVnihtTFtIS9PNmskdyMlNKiuyjfzw=\r\n
//	\r\n
type awsChunkedEncoding struct {
options awsChunkedEncodingOptions
encodedStream io.Reader
trailerEncodedLength int
}
// newUnsignedAWSChunkedEncoding returns a new awsChunkedEncoding configured
// for unsigned aws-chunked content encoding. Any additional trailers that need
// to be appended after the end chunk must be included via Trailer callbacks.
func newUnsignedAWSChunkedEncoding(
stream io.Reader,
optFns ...func(*awsChunkedEncodingOptions),
) *awsChunkedEncoding {
options := awsChunkedEncodingOptions{
Trailers: map[string]awsChunkedTrailerValue{},
StreamLength: -1,
ChunkLength: -1,
}
for _, fn := range optFns {
fn(&options)
}
var chunkReader io.Reader
if options.ChunkLength != -1 || options.StreamLength == -1 {
if options.ChunkLength == -1 {
options.ChunkLength = defaultChunkLength
}
chunkReader = newBufferedAWSChunkReader(stream, options.ChunkLength)
} else {
chunkReader = newUnsignedChunkReader(stream, options.StreamLength)
}
trailerReader := newAWSChunkedTrailerReader(options.Trailers)
return &awsChunkedEncoding{
options: options,
encodedStream: io.MultiReader(chunkReader,
trailerReader,
bytes.NewBuffer(crlfBytes),
),
trailerEncodedLength: trailerReader.EncodedLength(),
}
}
// EncodedLength returns the final length of the aws-chunked content encoded
// stream if it can be determined without reading the underlying stream or lazy
// header values, otherwise -1 is returned.
func (e *awsChunkedEncoding) EncodedLength() int64 {
var length int64
if e.options.StreamLength == -1 || e.trailerEncodedLength == -1 {
return -1
}
if e.options.StreamLength != 0 {
// If the stream length is known, and there is no chunk length specified,
// only a single chunk will be used. Otherwise the stream length needs to
// include the multiple chunk padding content.
if e.options.ChunkLength == -1 {
length += getUnsignedChunkBytesLength(e.options.StreamLength)
} else {
// Compute chunk header and payload length
numChunks := e.options.StreamLength / int64(e.options.ChunkLength)
length += numChunks * getUnsignedChunkBytesLength(int64(e.options.ChunkLength))
if remainder := e.options.StreamLength % int64(e.options.ChunkLength); remainder != 0 {
length += getUnsignedChunkBytesLength(remainder)
}
}
}
// End chunk
length += int64(len(finalChunkBytes))
// Trailers
length += int64(e.trailerEncodedLength)
// Encoding terminator
length += int64(len(crlf))
return length
}
func getUnsignedChunkBytesLength(payloadLength int64) int64 {
payloadLengthStr := strconv.FormatInt(payloadLength, 16)
return int64(len(payloadLengthStr)) + int64(len(crlf)) + payloadLength + int64(len(crlf))
}
// HTTPHeaders returns the set of headers that must be included in the request
// for aws-chunked to work. This includes the content-encoding: aws-chunked
// header.
//
// If there are multiple layered content encodings, the aws-chunked encoding
// must be appended after the stream's previous encodings. The best way to do
// this is to append all header values returned to the HTTP request's set of
// headers.
func (e *awsChunkedEncoding) HTTPHeaders() map[string][]string {
headers := map[string][]string{
contentEncodingHeaderName: {
awsChunkedContentEncodingHeaderValue,
},
}
if len(e.options.Trailers) != 0 {
trailers := make([]string, 0, len(e.options.Trailers))
for name := range e.options.Trailers {
trailers = append(trailers, strings.ToLower(name))
}
headers[awsTrailerHeaderName] = trailers
}
return headers
}
func (e *awsChunkedEncoding) Read(b []byte) (n int, err error) {
return e.encodedStream.Read(b)
}
// awsChunkedTrailerReader provides a lazy reader for aws-chunked content
// encoded trailers. The trailer values will not be retrieved until the
// reader is read from.
type awsChunkedTrailerReader struct {
reader *bytes.Buffer
trailers map[string]awsChunkedTrailerValue
trailerEncodedLength int
}
// newAWSChunkedTrailerReader returns an initialized awsChunkedTrailerReader
// for lazily reading aws-chunked content encoded trailers.
func newAWSChunkedTrailerReader(trailers map[string]awsChunkedTrailerValue) *awsChunkedTrailerReader {
return &awsChunkedTrailerReader{
trailers: trailers,
trailerEncodedLength: trailerEncodedLength(trailers),
}
}
func trailerEncodedLength(trailers map[string]awsChunkedTrailerValue) (length int) {
for name, trailer := range trailers {
length += len(name) + len(trailerKeyValueSeparator)
l := trailer.Length
if l == -1 {
return -1
}
length += l + len(crlf)
}
return length
}
// EncodedLength returns the length of the encoded trailers if the length could
// be determined without retrieving the header values. Returns -1 if length is
// unknown.
func (r *awsChunkedTrailerReader) EncodedLength() (length int) {
return r.trailerEncodedLength
}
// Read populates the passed in byte slice with bytes from the encoded
// trailers. Will lazily read the header values the first time Read is called.
func (r *awsChunkedTrailerReader) Read(p []byte) (int, error) {
if r.trailerEncodedLength == 0 {
return 0, io.EOF
}
if r.reader == nil {
trailerLen := r.trailerEncodedLength
if r.trailerEncodedLength == -1 {
trailerLen = 0
}
r.reader = bytes.NewBuffer(make([]byte, 0, trailerLen))
for name, trailer := range r.trailers {
r.reader.WriteString(name)
r.reader.WriteString(trailerKeyValueSeparator)
v, err := trailer.Get()
if err != nil {
return 0, fmt.Errorf("failed to get trailer value, %w", err)
}
r.reader.WriteString(v)
r.reader.WriteString(crlf)
}
}
return r.reader.Read(p)
}
// newUnsignedChunkReader returns an io.Reader encoding the underlying reader
// as unsigned aws-chunked chunks. The returned reader will also include the
// end chunk, but not the aws-chunked final `crlf` segment so trailers can be
// added.
//
// If the payload size is -1 (unknown length), the content will be buffered in
// defaultChunkLength chunks before being wrapped in aws-chunked chunk encoding.
func newUnsignedChunkReader(reader io.Reader, payloadSize int64) io.Reader {
if payloadSize == -1 {
return newBufferedAWSChunkReader(reader, defaultChunkLength)
}
var endChunk bytes.Buffer
if payloadSize == 0 {
endChunk.Write(finalChunkBytes)
return &endChunk
}
endChunk.WriteString(crlf)
endChunk.Write(finalChunkBytes)
var header bytes.Buffer
header.WriteString(strconv.FormatInt(payloadSize, 16))
header.WriteString(crlf)
return io.MultiReader(
&header,
reader,
&endChunk,
)
}
// Provides a buffered aws-chunked chunk encoder of an underlying io.Reader.
// Will include end chunk, but not the aws-chunked final `crlf` segment so
// trailers can be added.
//
// Note does not implement support for chunk extensions, e.g. chunk signing.
type bufferedAWSChunkReader struct {
reader io.Reader
chunkSize int
chunkSizeStr string
headerBuffer *bytes.Buffer
chunkBuffer *bytes.Buffer
multiReader io.Reader
multiReaderLen int
endChunkDone bool
}
// newBufferedAWSChunkReader returns a bufferedAWSChunkReader for reading
// aws-chunked encoded chunks.
func newBufferedAWSChunkReader(reader io.Reader, chunkSize int) *bufferedAWSChunkReader {
return &bufferedAWSChunkReader{
reader: reader,
chunkSize: chunkSize,
chunkSizeStr: strconv.FormatInt(int64(chunkSize), 16),
headerBuffer: bytes.NewBuffer(make([]byte, 0, 64)),
chunkBuffer: bytes.NewBuffer(make([]byte, 0, chunkSize+len(crlf))),
}
}
// Read attempts to read from the underlying io.Reader writing aws-chunked
// chunk encoded bytes to p. When the underlying io.Reader has been completely
// read, the end chunk will be available. Once the end chunk is read, the
// reader will return EOF.
func (r *bufferedAWSChunkReader) Read(p []byte) (n int, err error) {
if r.multiReaderLen == 0 && r.endChunkDone {
return 0, io.EOF
}
if r.multiReader == nil || r.multiReaderLen == 0 {
r.multiReader, r.multiReaderLen, err = r.newMultiReader()
if err != nil {
return 0, err
}
}
n, err = r.multiReader.Read(p)
r.multiReaderLen -= n
if err == io.EOF && !r.endChunkDone {
// Edge case handling when the multi-reader has been completely read,
// and returned an EOF, make sure that EOF only gets returned if the
// end chunk was included in the multi-reader. Otherwise, the next call
// to read will initialize the next chunk's multi-reader.
err = nil
}
return n, err
}
// newMultiReader returns a new io.Reader for wrapping the next chunk. Will
// return an error if the underlying reader can not be read from. Will never
// return io.EOF.
func (r *bufferedAWSChunkReader) newMultiReader() (io.Reader, int, error) {
// io.Copy eats the io.EOF returned by io.LimitReader. Any error that
// occurs here is due to an actual read error.
n, err := io.Copy(r.chunkBuffer, io.LimitReader(r.reader, int64(r.chunkSize)))
if err != nil {
return nil, 0, err
}
if n == 0 {
// Early exit writing out only the end chunk. This does not include
// aws-chunk's final `crlf` so that trailers can still be added by
// upstream reader.
r.headerBuffer.Reset()
r.headerBuffer.WriteString("0")
r.headerBuffer.WriteString(crlf)
r.endChunkDone = true
return r.headerBuffer, r.headerBuffer.Len(), nil
}
r.chunkBuffer.WriteString(crlf)
chunkSizeStr := r.chunkSizeStr
if int(n) != r.chunkSize {
chunkSizeStr = strconv.FormatInt(n, 16)
}
r.headerBuffer.Reset()
r.headerBuffer.WriteString(chunkSizeStr)
r.headerBuffer.WriteString(crlf)
return io.MultiReader(
r.headerBuffer,
r.chunkBuffer,
), r.headerBuffer.Len() + r.chunkBuffer.Len(), nil
}


@@ -0,0 +1,6 @@
// Code generated by internal/repotools/cmd/updatemodulemeta DO NOT EDIT.
package checksum
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.3.5"


@@ -0,0 +1,180 @@
package checksum
import (
"github.com/aws/smithy-go/middleware"
)
// InputMiddlewareOptions provides the options for the request
// checksum middleware setup.
type InputMiddlewareOptions struct {
// GetAlgorithm is a function to get the checksum algorithm of the
// input payload from the input parameters.
//
// Given the input parameter value, the function must return the algorithm
// and true, or false if no algorithm is specified.
GetAlgorithm func(interface{}) (string, bool)
// Forces the middleware to compute the input payload's checksum. The
// request will fail if the algorithm is not specified or unable to compute
// the checksum.
RequireChecksum bool
// Enables support for wrapping the serialized input payload with a
// content-encoding: aws-chunked wrapper, and including a trailer for the
// algorithm's checksum value.
//
// The checksum will not be computed, nor added as trailing checksum, if
// the Algorithm's header is already set on the request.
EnableTrailingChecksum bool
// Enables support for computing the SHA256 checksum of input payloads
// along with the algorithm specified checksum. Prevents downstream
// middleware handlers (computePayloadSHA256) from re-reading the payload.
//
// The SHA256 payload checksum will only be computed for requests that are
// not sent over TLS, or that do not enable trailing checksums.
//
// The SHA256 payload hash will not be computed, if the Algorithm's header
// is already set on the request.
EnableComputeSHA256PayloadHash bool
// Enables support for setting the aws-chunked decoded content length
// header for the decoded length of the underlying stream. Will only be set
// when used with trailing checksums, and aws-chunked content-encoding.
EnableDecodedContentLengthHeader bool
}
// AddInputMiddleware adds the middleware for performing checksum computation
// of request payloads, and checksum validation of response payloads.
func AddInputMiddleware(stack *middleware.Stack, options InputMiddlewareOptions) (err error) {
// TODO ensure this works correctly with presigned URLs
// Middleware stack:
// * (OK)(Initialize) --none--
// * (OK)(Serialize) EndpointResolver
// * (OK)(Build) ComputeContentLength
// * (AD)(Build) Header ComputeInputPayloadChecksum
// * SIGNED Payload - If HTTP && not support trailing checksum
// * UNSIGNED Payload - If HTTPS && not support trailing checksum
// * (RM)(Build) ContentChecksum - OK to remove
// * (OK)(Build) ComputePayloadHash
// * v4.dynamicPayloadSigningMiddleware
// * v4.computePayloadSHA256
// * v4.unsignedPayload
// (OK)(Build) Set computedPayloadHash header
// * (OK)(Finalize) Retry
// * (AD)(Finalize) Trailer ComputeInputPayloadChecksum,
// * Requires HTTPS && support trailing checksum
// * UNSIGNED Payload
// * Finalize run if HTTPS && support trailing checksum
// * (OK)(Finalize) Signing
// * (OK)(Deserialize) --none--
// Initial checksum configuration look up middleware
err = stack.Initialize.Add(&setupInputContext{
GetAlgorithm: options.GetAlgorithm,
}, middleware.Before)
if err != nil {
return err
}
stack.Build.Remove("ContentChecksum")
inputChecksum := &computeInputPayloadChecksum{
RequireChecksum: options.RequireChecksum,
EnableTrailingChecksum: options.EnableTrailingChecksum,
EnableComputePayloadHash: options.EnableComputeSHA256PayloadHash,
EnableDecodedContentLengthHeader: options.EnableDecodedContentLengthHeader,
}
if err := stack.Finalize.Insert(inputChecksum, "ResolveEndpointV2", middleware.After); err != nil {
return err
}
// If trailing checksums are not supported, there is no need to add the finalize handler.
if options.EnableTrailingChecksum {
trailerMiddleware := &addInputChecksumTrailer{
EnableTrailingChecksum: inputChecksum.EnableTrailingChecksum,
RequireChecksum: inputChecksum.RequireChecksum,
EnableComputePayloadHash: inputChecksum.EnableComputePayloadHash,
EnableDecodedContentLengthHeader: inputChecksum.EnableDecodedContentLengthHeader,
}
if err := stack.Finalize.Insert(trailerMiddleware, "Retry", middleware.After); err != nil {
return err
}
}
return nil
}
// RemoveInputMiddleware removes the compute input payload checksum middleware
// handlers from the stack.
func RemoveInputMiddleware(stack *middleware.Stack) {
id := (*setupInputContext)(nil).ID()
stack.Initialize.Remove(id)
id = (*computeInputPayloadChecksum)(nil).ID()
stack.Finalize.Remove(id)
}
// OutputMiddlewareOptions provides options for configuring output checksum
// validation middleware.
type OutputMiddlewareOptions struct {
// GetValidationMode is a function to get the checksum validation
// mode of the output payload from the input parameters.
//
// Given the input parameter value, the function must return the validation
// mode and true, or false if no mode is specified.
GetValidationMode func(interface{}) (string, bool)
// The set of checksum algorithms that should be used for response payload
// checksum validation. The algorithm(s) used will be a union of the
// output's returned algorithms and this set.
//
// Only the first algorithm in the union is currently used.
ValidationAlgorithms []string
// If set, the middleware will ignore output multipart checksums. Otherwise a
// checksum format error will be returned by the middleware.
IgnoreMultipartValidation bool
// When set, the middleware will log when the output does not have a checksum
// or algorithm to validate.
LogValidationSkipped bool
// When set, the middleware will log when the output contains a multipart
// checksum that was skipped and not validated.
LogMultipartValidationSkipped bool
}
// AddOutputMiddleware adds the middleware for validating response payload's
// checksum.
func AddOutputMiddleware(stack *middleware.Stack, options OutputMiddlewareOptions) error {
err := stack.Initialize.Add(&setupOutputContext{
GetValidationMode: options.GetValidationMode,
}, middleware.Before)
if err != nil {
return err
}
// Resolve a supported priority order list of algorithms to validate.
algorithms := FilterSupportedAlgorithms(options.ValidationAlgorithms)
m := &validateOutputPayloadChecksum{
Algorithms: algorithms,
IgnoreMultipartValidation: options.IgnoreMultipartValidation,
LogMultipartValidationSkipped: options.LogMultipartValidationSkipped,
LogValidationSkipped: options.LogValidationSkipped,
}
return stack.Deserialize.Add(m, middleware.After)
}
// RemoveOutputMiddleware removes the output payload checksum validation
// middleware handlers from the stack.
func RemoveOutputMiddleware(stack *middleware.Stack) {
id := (*setupOutputContext)(nil).ID()
stack.Initialize.Remove(id)
id = (*validateOutputPayloadChecksum)(nil).ID()
stack.Deserialize.Remove(id)
}
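The comment block inside AddInputMiddleware documents where each handler lands in the smithy-go stack (Initialize, Serialize, Build, Finalize, Deserialize). As a rough illustration of that ordering model, here is a toy phase-ordered stack, not smithy-go's actual API: execution order is determined by phase, not by registration order.

```go
package main

import "fmt"

// step is a toy middleware registration: a phase name plus a handler name.
// This sketches the ordering model only; smithy-go's Stack has richer
// insertion semantics (Before/After relative to named steps).
type step struct {
	phase string
	name  string
}

// run executes registered steps phase by phase, regardless of the order in
// which they were registered, and returns the resulting execution order.
func run(steps []step) []string {
	order := []string{}
	for _, phase := range []string{"Initialize", "Serialize", "Build", "Finalize", "Deserialize"} {
		for _, s := range steps {
			if s.phase == phase {
				order = append(order, s.name)
			}
		}
	}
	return order
}

func main() {
	// Registered out of order, on purpose.
	steps := []step{
		{"Finalize", "ComputeInputPayloadChecksum"},
		{"Initialize", "setupInputContext"},
		{"Finalize", "Signing"},
		{"Build", "ComputeContentLength"},
	}
	for _, name := range run(steps) {
		fmt.Println(name)
	}
}
```

This mirrors why setupInputContext is added to Initialize while the checksum trailer handler is inserted into Finalize after Retry: the phase placement, not the AddInputMiddleware call order, decides when each handler runs.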


@@ -0,0 +1,482 @@
package checksum
import (
"context"
"crypto/sha256"
"fmt"
"hash"
"io"
"strconv"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
presignedurlcust "github.com/aws/aws-sdk-go-v2/service/internal/presigned-url"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
const (
contentMD5Header = "Content-Md5"
streamingUnsignedPayloadTrailerPayloadHash = "STREAMING-UNSIGNED-PAYLOAD-TRAILER"
)
// computedInputChecksumsKey is the metadata key for recording the algorithm the
// checksum was computed for and the checksum value.
type computedInputChecksumsKey struct{}
// GetComputedInputChecksums returns the map of checksum algorithm to their
// computed value stored in the middleware Metadata. Returns false if no values
// were stored in the Metadata.
func GetComputedInputChecksums(m middleware.Metadata) (map[string]string, bool) {
vs, ok := m.Get(computedInputChecksumsKey{}).(map[string]string)
return vs, ok
}
// SetComputedInputChecksums stores the map of checksum algorithm to their
// computed value in the middleware Metadata. Overwrites any values that
// currently exist in the metadata.
func SetComputedInputChecksums(m *middleware.Metadata, vs map[string]string) {
m.Set(computedInputChecksumsKey{}, vs)
}
// computeInputPayloadChecksum middleware computes payload checksum
type computeInputPayloadChecksum struct {
// Enables support for wrapping the serialized input payload with a
// content-encoding: aws-chunked wrapper, and including a trailer for the
// algorithm's checksum value.
//
// The checksum will not be computed, nor added as trailing checksum, if
// the Algorithm's header is already set on the request.
EnableTrailingChecksum bool
// States that a checksum is required to be included for the operation. If
// Input does not specify a checksum, fallback to built in MD5 checksum is
// used.
//
// Replaces smithy-go's ContentChecksum middleware.
RequireChecksum bool
// Enables support for computing the SHA256 checksum of input payloads
// along with the algorithm-specified checksum. Prevents downstream
// middleware handlers (computePayloadSHA256) from re-reading the payload.
//
// The SHA256 payload hash will only be computed for requests that are not
// sent over TLS, or that do not enable trailing checksums.
//
// The SHA256 payload hash will not be computed if the Algorithm's header
EnableComputePayloadHash bool
// Enables support for setting the aws-chunked decoded content length
// header for the decoded length of the underlying stream. Will only be set
// when used with trailing checksums, and aws-chunked content-encoding.
EnableDecodedContentLengthHeader bool
useTrailer bool
}
type useTrailer struct{}
// ID provides the middleware's identifier.
func (m *computeInputPayloadChecksum) ID() string {
return "AWSChecksum:ComputeInputPayloadChecksum"
}
type computeInputHeaderChecksumError struct {
Msg string
Err error
}
func (e computeInputHeaderChecksumError) Error() string {
const intro = "compute input header checksum failed"
if e.Err != nil {
return fmt.Sprintf("%s, %s, %v", intro, e.Msg, e.Err)
}
return fmt.Sprintf("%s, %s", intro, e.Msg)
}
func (e computeInputHeaderChecksumError) Unwrap() error { return e.Err }
// HandleFinalize handles computing the payload's checksum, in the following cases:
//   - Is HTTP, not HTTPS
//   - RequireChecksum is true, and no checksums were specified via the Input
//   - Trailing checksums are not supported
//
// This handler must be inserted in the stack before ContentPayloadHash
// and after ComputeContentLength.
func (m *computeInputPayloadChecksum) HandleFinalize(
ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
) (
out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
) {
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, computeInputHeaderChecksumError{
Msg: fmt.Sprintf("unknown request type %T", req),
}
}
var algorithm Algorithm
var checksum string
defer func() {
if algorithm == "" || checksum == "" || err != nil {
return
}
// Record the checksum and algorithm that was computed
SetComputedInputChecksums(&metadata, map[string]string{
string(algorithm): checksum,
})
}()
// If no algorithm was specified, and the operation requires a checksum,
// fallback to the legacy content MD5 checksum.
algorithm, ok, err = getInputAlgorithm(ctx)
if err != nil {
return out, metadata, err
} else if !ok {
if m.RequireChecksum {
checksum, err = setMD5Checksum(ctx, req)
if err != nil {
return out, metadata, computeInputHeaderChecksumError{
Msg: "failed to compute stream's MD5 checksum",
Err: err,
}
}
algorithm = Algorithm("MD5")
}
return next.HandleFinalize(ctx, in)
}
// If the checksum header is already set, there is nothing to do.
checksumHeader := AlgorithmHTTPHeader(algorithm)
if checksum = req.Header.Get(checksumHeader); checksum != "" {
return next.HandleFinalize(ctx, in)
}
computePayloadHash := m.EnableComputePayloadHash
if v := v4.GetPayloadHash(ctx); v != "" {
computePayloadHash = false
}
stream := req.GetStream()
streamLength, err := getRequestStreamLength(req)
if err != nil {
return out, metadata, computeInputHeaderChecksumError{
Msg: "failed to determine stream length",
Err: err,
}
}
// If trailing checksums are supported, the request is HTTPS, and the
// stream is not nil or empty, switch to a trailing checksum instead.
//
// Nil and empty streams will always be handled as a request header,
// regardless of whether the operation supports trailing checksums or not.
if req.IsHTTPS() && !presignedurlcust.GetIsPresigning(ctx) {
if stream != nil && streamLength != 0 && m.EnableTrailingChecksum {
if m.EnableComputePayloadHash {
// ContentSHA256Header middleware handles the header
ctx = v4.SetPayloadHash(ctx, streamingUnsignedPayloadTrailerPayloadHash)
}
m.useTrailer = true
ctx = middleware.WithStackValue(ctx, useTrailer{}, true)
return next.HandleFinalize(ctx, in)
}
// If trailing checksums are not enabled but the protocol is still HTTPS,
// disable computing the payload hash. The downstream middleware handler
// (ComputePayloadHash) will set the payload hash to unsigned payload,
// if signing was used.
computePayloadHash = false
}
// Only seekable streams are supported for non-trailing checksums, because
// the stream needs to be rewound before the handler can continue.
if stream != nil && !req.IsStreamSeekable() {
return out, metadata, computeInputHeaderChecksumError{
Msg: "unseekable stream is not supported without TLS and trailing checksum",
}
}
var sha256Checksum string
checksum, sha256Checksum, err = computeStreamChecksum(
algorithm, stream, computePayloadHash)
if err != nil {
return out, metadata, computeInputHeaderChecksumError{
Msg: "failed to compute stream checksum",
Err: err,
}
}
if err := req.RewindStream(); err != nil {
return out, metadata, computeInputHeaderChecksumError{
Msg: "failed to rewind stream",
Err: err,
}
}
req.Header.Set(checksumHeader, checksum)
if computePayloadHash {
ctx = v4.SetPayloadHash(ctx, sha256Checksum)
}
return next.HandleFinalize(ctx, in)
}
type computeInputTrailingChecksumError struct {
Msg string
Err error
}
func (e computeInputTrailingChecksumError) Error() string {
const intro = "compute input trailing checksum failed"
if e.Err != nil {
return fmt.Sprintf("%s, %s, %v", intro, e.Msg, e.Err)
}
return fmt.Sprintf("%s, %s", intro, e.Msg)
}
func (e computeInputTrailingChecksumError) Unwrap() error { return e.Err }
// addInputChecksumTrailer adds the checksum trailer to the request, in the
// following cases:
//   - Is HTTPS, not HTTP
//   - A checksum was specified via the Input
//   - Trailing checksums are supported.
type addInputChecksumTrailer struct {
EnableTrailingChecksum bool
RequireChecksum bool
EnableComputePayloadHash bool
EnableDecodedContentLengthHeader bool
}
// ID identifies this middleware.
func (*addInputChecksumTrailer) ID() string {
return "addInputChecksumTrailer"
}
// HandleFinalize wraps the request body to write the trailing checksum.
func (m *addInputChecksumTrailer) HandleFinalize(
ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
) (
out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
) {
if enabled, _ := middleware.GetStackValue(ctx, useTrailer{}).(bool); !enabled {
return next.HandleFinalize(ctx, in)
}
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, computeInputTrailingChecksumError{
Msg: fmt.Sprintf("unknown request type %T", req),
}
}
// Trailing checksums are only supported when TLS is enabled.
if !req.IsHTTPS() {
return out, metadata, computeInputTrailingChecksumError{
Msg: "HTTPS required",
}
}
// If no algorithm was specified, there is nothing to do.
algorithm, ok, err := getInputAlgorithm(ctx)
if err != nil {
return out, metadata, computeInputTrailingChecksumError{
Msg: "failed to get algorithm",
Err: err,
}
} else if !ok {
return out, metadata, computeInputTrailingChecksumError{
Msg: "no algorithm specified",
}
}
// If the checksum header is already set before finalize could run, there
// is nothing to do.
checksumHeader := AlgorithmHTTPHeader(algorithm)
if req.Header.Get(checksumHeader) != "" {
return next.HandleFinalize(ctx, in)
}
stream := req.GetStream()
streamLength, err := getRequestStreamLength(req)
if err != nil {
return out, metadata, computeInputTrailingChecksumError{
Msg: "failed to determine stream length",
Err: err,
}
}
if stream == nil || streamLength == 0 {
// Nil and empty streams are handled by the Build handler. They are not
// supported by the trailing checksums finalize handler. There is no
// benefit to sending them as trailers compared to headers.
return out, metadata, computeInputTrailingChecksumError{
Msg: "nil or empty streams are not supported",
}
}
checksumReader, err := newComputeChecksumReader(stream, algorithm)
if err != nil {
return out, metadata, computeInputTrailingChecksumError{
Msg: "failed to create checksum reader",
Err: err,
}
}
awsChunkedReader := newUnsignedAWSChunkedEncoding(checksumReader,
func(o *awsChunkedEncodingOptions) {
o.Trailers[AlgorithmHTTPHeader(checksumReader.Algorithm())] = awsChunkedTrailerValue{
Get: checksumReader.Base64Checksum,
Length: checksumReader.Base64ChecksumLength(),
}
o.StreamLength = streamLength
})
for key, values := range awsChunkedReader.HTTPHeaders() {
for _, value := range values {
req.Header.Add(key, value)
}
}
// Setting the stream on the request will create a copy. The content length
// is not updated until after the request is copied to prevent impacting
// upstream middleware.
req, err = req.SetStream(awsChunkedReader)
if err != nil {
return out, metadata, computeInputTrailingChecksumError{
Msg: "failed updating request to trailing checksum wrapped stream",
Err: err,
}
}
req.ContentLength = awsChunkedReader.EncodedLength()
in.Request = req
// Add decoded content length header if original stream's content length is known.
if streamLength != -1 && m.EnableDecodedContentLengthHeader {
req.Header.Set(decodedContentLengthHeaderName, strconv.FormatInt(streamLength, 10))
}
out, metadata, err = next.HandleFinalize(ctx, in)
if err == nil {
checksum, err := checksumReader.Base64Checksum()
if err != nil {
return out, metadata, fmt.Errorf("failed to get computed checksum, %w", err)
}
// Record the checksum and algorithm that was computed
SetComputedInputChecksums(&metadata, map[string]string{
string(algorithm): checksum,
})
}
return out, metadata, err
}
func getInputAlgorithm(ctx context.Context) (Algorithm, bool, error) {
ctxAlgorithm := internalcontext.GetChecksumInputAlgorithm(ctx)
if ctxAlgorithm == "" {
return "", false, nil
}
algorithm, err := ParseAlgorithm(ctxAlgorithm)
if err != nil {
return "", false, fmt.Errorf(
"failed to parse algorithm, %w", err)
}
return algorithm, true, nil
}
func computeStreamChecksum(algorithm Algorithm, stream io.Reader, computePayloadHash bool) (
checksum string, sha256Checksum string, err error,
) {
hasher, err := NewAlgorithmHash(algorithm)
if err != nil {
return "", "", fmt.Errorf(
"failed to get hasher for checksum algorithm, %w", err)
}
var sha256Hasher hash.Hash
var batchHasher io.Writer = hasher
// Compute payload hash for the protocol. To prevent another handler
// (computePayloadSHA256) from re-reading the body, also compute the SHA256
// for request signing. If the configured checksum algorithm is SHA256,
// don't double wrap the stream with another SHA256 hasher.
if computePayloadHash && algorithm != AlgorithmSHA256 {
sha256Hasher = sha256.New()
batchHasher = io.MultiWriter(hasher, sha256Hasher)
}
if stream != nil {
if _, err = io.Copy(batchHasher, stream); err != nil {
return "", "", fmt.Errorf(
"failed to read stream to compute hash, %w", err)
}
}
checksum = string(base64EncodeHashSum(hasher))
if computePayloadHash {
if algorithm != AlgorithmSHA256 {
sha256Checksum = string(hexEncodeHashSum(sha256Hasher))
} else {
sha256Checksum = string(hexEncodeHashSum(hasher))
}
}
return checksum, sha256Checksum, nil
}
func getRequestStreamLength(req *smithyhttp.Request) (int64, error) {
if v := req.ContentLength; v > 0 {
return v, nil
}
if length, ok, err := req.StreamLength(); err != nil {
return 0, fmt.Errorf("failed getting request stream's length, %w", err)
} else if ok {
return length, nil
}
return -1, nil
}
// setMD5Checksum computes the MD5 of the request payload and sets it to the
// Content-MD5 header. Returns the base64 encoded MD5 string, or an error.
//
// If the MD5 is already set as the Content-MD5 header, that value will be
// returned, and nothing else will be done.
//
// If the payload is empty, no MD5 will be computed. No error will be returned.
// Empty payloads do not have an MD5 value.
//
// Replaces the smithy-go middleware for httpChecksum trait.
func setMD5Checksum(ctx context.Context, req *smithyhttp.Request) (string, error) {
if v := req.Header.Get(contentMD5Header); len(v) != 0 {
return v, nil
}
stream := req.GetStream()
if stream == nil {
return "", nil
}
if !req.IsStreamSeekable() {
return "", fmt.Errorf(
"unseekable stream is not supported for computing md5 checksum")
}
v, err := computeMD5Checksum(stream)
if err != nil {
return "", err
}
if err := req.RewindStream(); err != nil {
return "", fmt.Errorf("failed to rewind stream after computing MD5 checksum, %w", err)
}
// set the 'Content-MD5' header
req.Header.Set(contentMD5Header, string(v))
return string(v), nil
}


@@ -0,0 +1,98 @@
package checksum
import (
"context"
internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
"github.com/aws/smithy-go/middleware"
)
// setupInputContext is the initial middleware that looks up the input
// used to configure checksum behavior. This middleware must be executed
// before the input validation step or any other checksum middleware.
type setupInputContext struct {
// GetAlgorithm is a function to get the checksum algorithm of the
// input payload from the input parameters.
//
// Given the input parameter value, the function must return the algorithm
// and true, or false if no algorithm is specified.
GetAlgorithm func(interface{}) (string, bool)
}
// ID for the middleware
func (m *setupInputContext) ID() string {
return "AWSChecksum:SetupInputContext"
}
// HandleInitialize is an initialization middleware that sets up the checksum
// context based on the input parameters provided in the stack.
func (m *setupInputContext) HandleInitialize(
ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
// Check if validation algorithm is specified.
if m.GetAlgorithm != nil {
// check if the input has a checksum algorithm
algorithm, ok := m.GetAlgorithm(in.Parameters)
if ok && len(algorithm) != 0 {
ctx = internalcontext.SetChecksumInputAlgorithm(ctx, algorithm)
}
}
return next.HandleInitialize(ctx, in)
}
type setupOutputContext struct {
// GetValidationMode is a function to get the checksum validation
// mode of the output payload from the input parameters.
//
// Given the input parameter value, the function must return the validation
// mode and true, or false if no mode is specified.
GetValidationMode func(interface{}) (string, bool)
}
// ID for the middleware
func (m *setupOutputContext) ID() string {
return "AWSChecksum:SetupOutputContext"
}
// HandleInitialize is an initialization middleware that sets up the checksum
// context based on the input parameters provided in the stack.
func (m *setupOutputContext) HandleInitialize(
ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
// Check if validation mode is specified.
if m.GetValidationMode != nil {
// check if the input has a validation mode
mode, ok := m.GetValidationMode(in.Parameters)
if ok && len(mode) != 0 {
ctx = setContextOutputValidationMode(ctx, mode)
}
}
return next.HandleInitialize(ctx, in)
}
// outputValidationModeKey is the key set on context used to identify if
// output checksum validation is enabled.
type outputValidationModeKey struct{}
// setContextOutputValidationMode sets the request checksum
// algorithm on the context.
//
// Scoped to stack values.
func setContextOutputValidationMode(ctx context.Context, value string) context.Context {
return middleware.WithStackValue(ctx, outputValidationModeKey{}, value)
}
// getContextOutputValidationMode returns response checksum validation state,
// if one was specified. Empty string is returned if one is not specified.
//
// Scoped to stack values.
func getContextOutputValidationMode(ctx context.Context) (v string) {
v, _ = middleware.GetStackValue(ctx, outputValidationModeKey{}).(string)
return v
}


@@ -0,0 +1,131 @@
package checksum
import (
"context"
"fmt"
"strings"
"github.com/aws/smithy-go"
"github.com/aws/smithy-go/logging"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
// outputValidationAlgorithmsUsedKey is the metadata key for indexing the
// algorithms that were used by the middleware's validation.
type outputValidationAlgorithmsUsedKey struct{}
// GetOutputValidationAlgorithmsUsed returns the checksum algorithms used
// stored in the middleware Metadata. Returns false if no algorithms were
// stored in the Metadata.
func GetOutputValidationAlgorithmsUsed(m middleware.Metadata) ([]string, bool) {
vs, ok := m.Get(outputValidationAlgorithmsUsedKey{}).([]string)
return vs, ok
}
// SetOutputValidationAlgorithmsUsed stores the checksum algorithms used in the
// middleware Metadata.
func SetOutputValidationAlgorithmsUsed(m *middleware.Metadata, vs []string) {
m.Set(outputValidationAlgorithmsUsedKey{}, vs)
}
// validateOutputPayloadChecksum middleware computes payload checksum of the
// received response and validates with checksum returned by the service.
type validateOutputPayloadChecksum struct {
// Algorithms represents a priority-ordered list of valid checksum
// algorithms that should be validated when present in HTTP response
// headers.
Algorithms []Algorithm
// IgnoreMultipartValidation indicates multipart checksums ending with "-#"
// will be ignored.
IgnoreMultipartValidation bool
// When set, the middleware will log when the output does not have a
// checksum or algorithm to validate.
LogValidationSkipped bool
// When set, the middleware will log when the output contains a multipart
// checksum that was skipped and not validated.
LogMultipartValidationSkipped bool
}
func (m *validateOutputPayloadChecksum) ID() string {
return "AWSChecksum:ValidateOutputPayloadChecksum"
}
// HandleDeserialize is a Deserialize middleware that wraps the HTTP response
// body with an io.ReadCloser that will validate its checksum.
func (m *validateOutputPayloadChecksum) HandleDeserialize(
ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler,
) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
if err != nil {
return out, metadata, err
}
// If the validation mode is not enabled, there is nothing to do.
if mode := getContextOutputValidationMode(ctx); mode != "ENABLED" {
return out, metadata, err
}
response, ok := out.RawResponse.(*smithyhttp.Response)
if !ok {
return out, metadata, &smithy.DeserializationError{
Err: fmt.Errorf("unknown transport type %T", out.RawResponse),
}
}
var expectedChecksum string
var algorithmToUse Algorithm
for _, algorithm := range m.Algorithms {
value := response.Header.Get(AlgorithmHTTPHeader(algorithm))
if len(value) == 0 {
continue
}
expectedChecksum = value
algorithmToUse = algorithm
}
// TODO this must validate the validation mode is set to enabled.
logger := middleware.GetLogger(ctx)
// Skip validation if no checksum algorithm or checksum is available.
if len(expectedChecksum) == 0 || len(algorithmToUse) == 0 {
if m.LogValidationSkipped {
// TODO this probably should have more information about the
// operation output that won't be validated.
logger.Logf(logging.Warn,
"Response has no supported checksum. Not validating response payload.")
}
return out, metadata, nil
}
// Ignore multipart validation
if m.IgnoreMultipartValidation && strings.Contains(expectedChecksum, "-") {
if m.LogMultipartValidationSkipped {
// TODO this probably should have more information about the
// operation output that won't be validated.
logger.Logf(logging.Warn, "Skipped validation of multipart checksum.")
}
return out, metadata, nil
}
body, err := newValidateChecksumReader(response.Body, algorithmToUse, expectedChecksum)
if err != nil {
return out, metadata, fmt.Errorf("failed to create checksum validation reader, %w", err)
}
response.Body = body
// Update the metadata to include the set of the checksum algorithms that
// will be validated.
SetOutputValidationAlgorithmsUsed(&metadata, []string{
string(algorithmToUse),
})
return out, metadata, nil
}


@@ -0,0 +1,346 @@
# v1.11.17 (2024-07-10.2)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.16 (2024-07-10)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.15 (2024-06-28)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.14 (2024-06-19)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.13 (2024-06-18)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.12 (2024-06-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.11 (2024-06-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.10 (2024-06-03)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.9 (2024-05-16)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.8 (2024-05-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.7 (2024-03-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.6 (2024-03-18)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.5 (2024-03-07)
* **Bug Fix**: Remove dependency on go-cmp.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.4 (2024-03-05)
* **Bug Fix**: Restore typo'd API `AddAsIsInternalPresigingMiddleware` as an alias for backwards compatibility.
# v1.11.3 (2024-03-04)
* **Bug Fix**: Correct a typo in internal AddAsIsPresigningMiddleware API.
# v1.11.2 (2024-02-23)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.1 (2024-02-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.0 (2024-02-13)
* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.10 (2024-01-04)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.9 (2023-12-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.8 (2023-12-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.7 (2023-11-30)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.6 (2023-11-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.5 (2023-11-28.2)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.4 (2023-11-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.3 (2023-11-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.2 (2023-11-09)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.1 (2023-11-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.0 (2023-10-31)
* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.37 (2023-10-12)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.36 (2023-10-06)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.35 (2023-08-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.34 (2023-08-18)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.33 (2023-08-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.32 (2023-08-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.31 (2023-07-31)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.30 (2023-07-28)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.29 (2023-07-13)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.28 (2023-06-13)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.27 (2023-04-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.26 (2023-04-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.25 (2023-03-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.24 (2023-03-10)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.23 (2023-02-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.22 (2023-02-03)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.21 (2022-12-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.20 (2022-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.19 (2022-10-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.18 (2022-10-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.17 (2022-09-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.16 (2022-09-14)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.15 (2022-09-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.14 (2022-08-31)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.13 (2022-08-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.12 (2022-08-11)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.11 (2022-08-09)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.10 (2022-08-08)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.9 (2022-08-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.8 (2022-07-05)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.7 (2022-06-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.6 (2022-06-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.5 (2022-05-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.4 (2022-04-25)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.3 (2022-03-30)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.2 (2022-03-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.1 (2022-03-23)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.0 (2022-03-08)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.8.0 (2022-02-24)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.7.0 (2022-01-14)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.6.0 (2022-01-07)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.2 (2021-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.1 (2021-11-19)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.0 (2021-11-06)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.4.0 (2021-10-21)
* **Feature**: Updated to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.2 (2021-10-11)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.1 (2021-09-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.0 (2021-08-27)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.3 (2021-08-19)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.2 (2021-08-04)
* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.1 (2021-07-15)
* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.2.0 (2021-06-25)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.1 (2021-05-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.1.0 (2021-05-14)
* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
* **Dependency Update**: Updated to the latest SDK module versions


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,56 @@
package presignedurl
import (
"context"
"github.com/aws/smithy-go/middleware"
)
// WithIsPresigning adds the isPresigning sentinel value to a context to signal
// that the middleware stack is using the presign flow.
//
// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
// to clear all stack values.
func WithIsPresigning(ctx context.Context) context.Context {
return middleware.WithStackValue(ctx, isPresigningKey{}, true)
}
// GetIsPresigning reports whether the context contains the isPresigning
// sentinel value for presigning flows.
//
// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
// to clear all stack values.
func GetIsPresigning(ctx context.Context) bool {
v, _ := middleware.GetStackValue(ctx, isPresigningKey{}).(bool)
return v
}
type isPresigningKey struct{}
// AddAsIsPresigningMiddleware adds a middleware to the head of the stack that
// will update the stack's context to be flagged as being invoked for the
// purpose of presigning.
func AddAsIsPresigningMiddleware(stack *middleware.Stack) error {
return stack.Initialize.Add(asIsPresigningMiddleware{}, middleware.Before)
}
// AddAsIsPresigingMiddleware is an alias for backwards compatibility.
//
// Deprecated: This API was released with a typo. Use
// [AddAsIsPresigningMiddleware] instead.
func AddAsIsPresigingMiddleware(stack *middleware.Stack) error {
return AddAsIsPresigningMiddleware(stack)
}
type asIsPresigningMiddleware struct{}
func (asIsPresigningMiddleware) ID() string { return "AsIsPresigningMiddleware" }
func (asIsPresigningMiddleware) HandleInitialize(
ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
ctx = WithIsPresigning(ctx)
return next.HandleInitialize(ctx, in)
}


@ -0,0 +1,3 @@
// Package presignedurl provides the customizations for API clients to fill in
// presigned URLs into input parameters.
package presignedurl


@ -0,0 +1,6 @@
// Code generated by internal/repotools/cmd/updatemodulemeta DO NOT EDIT.
package presignedurl
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.11.17"


@ -0,0 +1,110 @@
package presignedurl
import (
"context"
"fmt"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
"github.com/aws/smithy-go/middleware"
)
// URLPresigner provides the interface to presign the input parameters in to a
// presigned URL.
type URLPresigner interface {
// PresignURL presigns a URL.
PresignURL(ctx context.Context, srcRegion string, params interface{}) (*v4.PresignedHTTPRequest, error)
}
// ParameterAccessor provides a collection of accessors for retrieving and
// setting the values needed for presigned URL generation.
type ParameterAccessor struct {
// GetPresignedURL accessor points to a function that retrieves a presigned url if present
GetPresignedURL func(interface{}) (string, bool, error)
// GetSourceRegion accessor points to a function that retrieves source region for presigned url
GetSourceRegion func(interface{}) (string, bool, error)
// CopyInput accessor points to a function that takes in an input, and returns a copy.
CopyInput func(interface{}) (interface{}, error)
// SetDestinationRegion accessor points to a function that sets destination region on api input struct
SetDestinationRegion func(interface{}, string) error
// SetPresignedURL accessor points to a function that sets presigned url on api input struct
SetPresignedURL func(interface{}, string) error
}
// Options provides the set of options needed by the presigned URL middleware.
type Options struct {
// Accessor are the parameter accessors used by this middleware
Accessor ParameterAccessor
// Presigner is the URLPresigner used by the middleware
Presigner URLPresigner
}
// AddMiddleware adds the Presign URL middleware to the middleware stack.
func AddMiddleware(stack *middleware.Stack, opts Options) error {
return stack.Initialize.Add(&presign{options: opts}, middleware.Before)
}
// RemoveMiddleware removes the Presign URL middleware from the stack.
func RemoveMiddleware(stack *middleware.Stack) error {
_, err := stack.Initialize.Remove((*presign)(nil).ID())
return err
}
type presign struct {
options Options
}
func (m *presign) ID() string { return "Presign" }
func (m *presign) HandleInitialize(
ctx context.Context, input middleware.InitializeInput, next middleware.InitializeHandler,
) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
// If PresignedURL is already set, ignore middleware.
if _, ok, err := m.options.Accessor.GetPresignedURL(input.Parameters); err != nil {
return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
} else if ok {
return next.HandleInitialize(ctx, input)
}
// If the source region is not set, ignore middleware.
srcRegion, ok, err := m.options.Accessor.GetSourceRegion(input.Parameters)
if err != nil {
return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
} else if !ok || len(srcRegion) == 0 {
return next.HandleInitialize(ctx, input)
}
// Create a copy of the original input so the destination region value can
// be added. This ensures that value does not leak into the original
// request parameters.
paramCpy, err := m.options.Accessor.CopyInput(input.Parameters)
if err != nil {
return out, metadata, fmt.Errorf("unable to create presigned URL, %w", err)
}
// Destination region is the API client's configured region.
dstRegion := awsmiddleware.GetRegion(ctx)
if err = m.options.Accessor.SetDestinationRegion(paramCpy, dstRegion); err != nil {
return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
}
presignedReq, err := m.options.Presigner.PresignURL(ctx, srcRegion, paramCpy)
if err != nil {
return out, metadata, fmt.Errorf("unable to create presigned URL, %w", err)
}
// Update the original input with the presigned URL value.
if err = m.options.Accessor.SetPresignedURL(input.Parameters, presignedReq.URL); err != nil {
return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
}
return next.HandleInitialize(ctx, input)
}
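The copy-before-mutate step in the middleware above (copy the input, set the destination region on the copy, presign the copy) keeps the added field from leaking into the caller's original request parameters. A minimal sketch of that pattern, with an illustrative struct rather than the real S3 input types:

```go
package main

import "fmt"

// copyTestInput mimics an API input struct; the field names are
// illustrative, not the SDK's real S3 input types.
type copyTestInput struct {
	SourceRegion      string
	DestinationRegion string
}

// withDestinationRegion returns a copy of in with the destination region
// set, leaving the original untouched. A value copy suffices here; a real
// input holding pointers or slices would need a deep copy, which is why
// the middleware delegates copying to the per-API CopyInput accessor.
func withDestinationRegion(in copyTestInput, region string) copyTestInput {
	cpy := in
	cpy.DestinationRegion = region
	return cpy
}

func main() {
	orig := copyTestInput{SourceRegion: "us-west-2"}
	cpy := withDestinationRegion(orig, "eu-west-1")
	fmt.Printf("orig=%q cpy=%q\n", orig.DestinationRegion, cpy.DestinationRegion)
	// orig="" cpy="eu-west-1" — the caller's struct is unchanged
}
```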


@ -0,0 +1,300 @@
# v1.17.3 (2024-03-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.2 (2024-02-23)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.1 (2024-02-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.17.0 (2024-02-13)
* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.10 (2024-01-04)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.9 (2023-12-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.8 (2023-12-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.7 (2023-11-30)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.6 (2023-11-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.5 (2023-11-28.2)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.4 (2023-11-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.3 (2023-11-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.2 (2023-11-09)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.1 (2023-11-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.16.0 (2023-10-31)
* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.6 (2023-10-12)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.5 (2023-10-06)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.4 (2023-08-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.3 (2023-08-18)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.2 (2023-08-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.1 (2023-08-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.15.0 (2023-07-31)
* **Feature**: Adds support for smithy-modeled endpoint resolution. A new rules-based endpoint resolution will be added to the SDK which will supercede and deprecate existing endpoint resolution. Specifically, EndpointResolver will be deprecated while BaseEndpoint and EndpointResolverV2 will take its place. For more information, please see the Endpoints section in our Developer Guide.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.14.5 (2023-07-28)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.14.4 (2023-07-13)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.14.3 (2023-06-13)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.14.2 (2023-04-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.14.1 (2023-04-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.14.0 (2023-03-21)
* **Feature**: port v1 sdk 100-continue http header customization for s3 PutObject/UploadPart request and enable user config
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.24 (2023-03-10)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.23 (2023-02-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.22 (2023-02-03)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.21 (2022-12-15)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.20 (2022-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.19 (2022-10-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.18 (2022-10-21)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.17 (2022-09-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.16 (2022-09-14)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.15 (2022-09-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.14 (2022-08-31)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.13 (2022-08-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.12 (2022-08-11)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.11 (2022-08-09)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.10 (2022-08-08)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.9 (2022-08-01)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.8 (2022-07-05)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.7 (2022-06-29)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.6 (2022-06-07)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.5 (2022-05-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.4 (2022-04-25)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.3 (2022-03-30)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.2 (2022-03-24)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.1 (2022-03-23)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.13.0 (2022-03-08)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.12.0 (2022-02-24)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.11.0 (2022-01-14)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.10.0 (2022-01-07)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.2 (2021-12-02)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.1 (2021-11-19)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.9.0 (2021-11-06)
* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.8.0 (2021-10-21)
* **Feature**: Updated to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.7.2 (2021-10-11)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.7.1 (2021-09-17)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.7.0 (2021-09-02)
* **Feature**: Add support for S3 Multi-Region Access Point ARNs.
# v1.6.0 (2021-08-27)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.3 (2021-08-19)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.2 (2021-08-04)
* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.1 (2021-07-15)
* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.5.0 (2021-06-25)
* **Feature**: Updated `github.com/aws/smithy-go` to latest version
* **Dependency Update**: Updated to the latest SDK module versions
# v1.4.0 (2021-06-04)
* **Feature**: The handling of AccessPoint and Outpost ARNs have been updated.
# v1.3.1 (2021-05-20)
* **Dependency Update**: Updated to the latest SDK module versions
# v1.3.0 (2021-05-14)
* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
* **Dependency Update**: Updated to the latest SDK module versions


@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,53 @@
package arn
import (
"strings"
"github.com/aws/aws-sdk-go-v2/aws/arn"
)
// AccessPointARN provides the representation of an S3 access point ARN
type AccessPointARN struct {
arn.ARN
AccessPointName string
}
// GetARN returns the base ARN for the Access Point resource
func (a AccessPointARN) GetARN() arn.ARN {
return a.ARN
}
// ParseAccessPointResource attempts to parse the ARN's resource as an
// AccessPoint resource.
//
// Supported Access point resource format:
// - Access point format: arn:{partition}:s3:{region}:{accountId}:accesspoint/{accesspointName}
// - example: arn:aws:s3:us-west-2:012345678901:accesspoint/myaccesspoint
func ParseAccessPointResource(a arn.ARN, resParts []string) (AccessPointARN, error) {
if isFIPS(a.Region) {
return AccessPointARN{}, InvalidARNError{ARN: a, Reason: "FIPS region not allowed in ARN"}
}
if len(a.AccountID) == 0 {
return AccessPointARN{}, InvalidARNError{ARN: a, Reason: "account-id not set"}
}
if len(resParts) == 0 {
return AccessPointARN{}, InvalidARNError{ARN: a, Reason: "resource-id not set"}
}
if len(resParts) > 1 {
return AccessPointARN{}, InvalidARNError{ARN: a, Reason: "sub resource not supported"}
}
resID := resParts[0]
if len(strings.TrimSpace(resID)) == 0 {
return AccessPointARN{}, InvalidARNError{ARN: a, Reason: "resource-id not set"}
}
return AccessPointARN{
ARN: a,
AccessPointName: resID,
}, nil
}
func isFIPS(region string) bool {
return strings.HasPrefix(region, "fips-") || strings.HasSuffix(region, "-fips")
}
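The documented access point format can be illustrated with a stdlib-only sketch. This is not the SDK's parser (the real code relies on `arn.Parse` from aws-sdk-go-v2 plus `ParseAccessPointResource`); `parseAccessPoint` is a hypothetical helper showing how the six colon-separated ARN sections and the `accesspoint/{name}` resource break apart:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAccessPoint is an illustrative helper, not SDK API. It splits the
// documented format arn:{partition}:s3:{region}:{accountId}:accesspoint/{name}.
func parseAccessPoint(s string) (region, name string, err error) {
	// An ARN has six colon-separated sections; the resource section may itself
	// contain delimiters, so cap the split at six.
	sections := strings.SplitN(s, ":", 6)
	if len(sections) != 6 || sections[0] != "arn" {
		return "", "", fmt.Errorf("not an ARN: %q", s)
	}
	resParts := strings.SplitN(sections[5], "/", 2)
	if len(resParts) != 2 || resParts[0] != "accesspoint" {
		return "", "", fmt.Errorf("not an access point resource: %q", sections[5])
	}
	return sections[3], resParts[1], nil
}

func main() {
	region, name, err := parseAccessPoint("arn:aws:s3:us-west-2:012345678901:accesspoint/myaccesspoint")
	fmt.Println(region, name, err) // us-west-2 myaccesspoint <nil>
}
```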


@ -0,0 +1,85 @@
package arn
import (
"fmt"
"strings"
"github.com/aws/aws-sdk-go-v2/aws/arn"
)
var supportedServiceARN = []string{
"s3",
"s3-outposts",
"s3-object-lambda",
}
func isSupportedServiceARN(service string) bool {
for _, name := range supportedServiceARN {
if name == service {
return true
}
}
return false
}
// Resource provides the interfaces abstracting ARNs of specific resource
// types.
type Resource interface {
GetARN() arn.ARN
String() string
}
// ResourceParser provides the function for parsing an ARN's resource
// component into a typed resource.
type ResourceParser func(arn.ARN) (Resource, error)
// ParseResource parses an AWS ARN into a typed resource for the S3 API.
func ParseResource(a arn.ARN, resParser ResourceParser) (resARN Resource, err error) {
if len(a.Partition) == 0 {
return nil, InvalidARNError{ARN: a, Reason: "partition not set"}
}
if !isSupportedServiceARN(a.Service) {
return nil, InvalidARNError{ARN: a, Reason: "service is not supported"}
}
if len(a.Resource) == 0 {
return nil, InvalidARNError{ARN: a, Reason: "resource not set"}
}
return resParser(a)
}
// SplitResource splits the resource components by the ARN resource delimiters.
func SplitResource(v string) []string {
var parts []string
var offset int
for offset <= len(v) {
idx := strings.IndexAny(v[offset:], "/:")
if idx < 0 {
parts = append(parts, v[offset:])
break
}
parts = append(parts, v[offset:idx+offset])
offset += idx + 1
}
return parts
}
// IsARN returns whether the given string is an ARN
func IsARN(s string) bool {
return arn.IsARN(s)
}
// InvalidARNError provides the error for an invalid ARN error.
type InvalidARNError struct {
ARN arn.ARN
Reason string
}
// Error returns a string describing the InvalidARNError
func (e InvalidARNError) Error() string {
return fmt.Sprintf("invalid Amazon %s ARN, %s, %s", e.ARN.Service, e.Reason, e.ARN.String())
}
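`SplitResource` accepts both "/" and ":" as resource delimiters, which is why outpost ARNs written either way parse identically. A self-contained copy of the same loop shows the behavior (splitResource here mirrors the function above; it is a sketch, not the SDK export):

```go
package main

import (
	"fmt"
	"strings"
)

// splitResource mirrors SplitResource above: it walks the string and splits on
// either "/" or ":", so "outpost/op-0123/accesspoint/myap" and
// "outpost:op-0123:accesspoint:myap" yield the same parts.
func splitResource(v string) []string {
	var parts []string
	var offset int
	for offset <= len(v) {
		idx := strings.IndexAny(v[offset:], "/:")
		if idx < 0 {
			parts = append(parts, v[offset:])
			break
		}
		parts = append(parts, v[offset:idx+offset])
		offset += idx + 1
	}
	return parts
}

func main() {
	fmt.Println(splitResource("outpost/op-0123/accesspoint/myap"))
	fmt.Println(splitResource("outpost:op-0123:accesspoint:myap"))
}
```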


@ -0,0 +1,32 @@
package arn
import "fmt"
// arnable is implemented by the relevant S3/S3Control
// operations which have members that may need ARN
// processing.
type arnable interface {
SetARNMember(string) error
GetARNMember() (*string, bool)
}
// GetARNField would be called during middleware execution
// to retrieve a member value that is an ARN in need of
// processing.
func GetARNField(input interface{}) (*string, bool) {
v, ok := input.(arnable)
if !ok {
return nil, false
}
return v.GetARNMember()
}
// SetARNField would be called during middleware execution
// to set a member value that requires ARN processing.
func SetARNField(input interface{}, v string) error {
params, ok := input.(arnable)
if !ok {
return fmt.Errorf("params does not contain an ARN field member")
}
return params.SetARNMember(v)
}


@ -0,0 +1,128 @@
package arn
import (
"strings"
"github.com/aws/aws-sdk-go-v2/aws/arn"
)
// OutpostARN interface that should be satisfied by outpost ARNs
type OutpostARN interface {
Resource
GetOutpostID() string
}
// ParseOutpostARNResource will parse a provided ARNs resource using the appropriate ARN format
// and return a specific OutpostARN type
//
// Currently supported outpost ARN formats:
// * Outpost AccessPoint ARN format:
// - ARN format: arn:{partition}:s3-outposts:{region}:{accountId}:outpost/{outpostId}/accesspoint/{accesspointName}
// - example: arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/accesspoint/myaccesspoint
//
// * Outpost Bucket ARN format:
// - ARN format: arn:{partition}:s3-outposts:{region}:{accountId}:outpost/{outpostId}/bucket/{bucketName}
// - example: arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/bucket/mybucket
//
// Other outpost ARN formats may be supported and added in the future.
func ParseOutpostARNResource(a arn.ARN, resParts []string) (OutpostARN, error) {
if len(a.Region) == 0 {
return nil, InvalidARNError{ARN: a, Reason: "region not set"}
}
if isFIPS(a.Region) {
return nil, InvalidARNError{ARN: a, Reason: "FIPS region not allowed in ARN"}
}
if len(a.AccountID) == 0 {
return nil, InvalidARNError{ARN: a, Reason: "account-id not set"}
}
// verify if outpost id is present and valid
if len(resParts) == 0 || len(strings.TrimSpace(resParts[0])) == 0 {
return nil, InvalidARNError{ARN: a, Reason: "outpost resource-id not set"}
}
// verify possible resource type exists
if len(resParts) < 3 {
return nil, InvalidARNError{
ARN: a, Reason: "incomplete outpost resource type. Expected bucket or access-point resource to be present",
}
}
// Since we know this is an OutpostARN, fetch the outpostID
outpostID := strings.TrimSpace(resParts[0])
switch resParts[1] {
case "accesspoint":
accesspointARN, err := ParseAccessPointResource(a, resParts[2:])
if err != nil {
return OutpostAccessPointARN{}, err
}
return OutpostAccessPointARN{
AccessPointARN: accesspointARN,
OutpostID: outpostID,
}, nil
case "bucket":
bucketName, err := parseBucketResource(a, resParts[2:])
if err != nil {
return nil, err
}
return OutpostBucketARN{
ARN: a,
BucketName: bucketName,
OutpostID: outpostID,
}, nil
default:
return nil, InvalidARNError{ARN: a, Reason: "unknown resource set for outpost ARN"}
}
}
// OutpostAccessPointARN represents outpost access point ARN.
type OutpostAccessPointARN struct {
AccessPointARN
OutpostID string
}
// GetOutpostID returns the outpost id of outpost access point arn
func (o OutpostAccessPointARN) GetOutpostID() string {
return o.OutpostID
}
// OutpostBucketARN represents the outpost bucket ARN.
type OutpostBucketARN struct {
arn.ARN
BucketName string
OutpostID string
}
// GetOutpostID returns the outpost id of outpost bucket arn
func (o OutpostBucketARN) GetOutpostID() string {
return o.OutpostID
}
// GetARN retrieves the base ARN from the outpost bucket ARN resource
func (o OutpostBucketARN) GetARN() arn.ARN {
return o.ARN
}
// parseBucketResource attempts to parse the ARN's bucket resource and retrieve the
// bucket resource id.
//
// parseBucketResource only parses the bucket resource id.
func parseBucketResource(a arn.ARN, resParts []string) (bucketName string, err error) {
if len(resParts) == 0 {
return bucketName, InvalidARNError{ARN: a, Reason: "bucket resource-id not set"}
}
if len(resParts) > 1 {
return bucketName, InvalidARNError{ARN: a, Reason: "sub resource not supported"}
}
bucketName = strings.TrimSpace(resParts[0])
if len(bucketName) == 0 {
return bucketName, InvalidARNError{ARN: a, Reason: "bucket resource-id not set"}
}
return bucketName, err
}
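The branching in `ParseOutpostARNResource` can be condensed into a stdlib-only sketch. It assumes `resParts` has already been split into `[outpostID, resourceType, name]`, matching the indices the real parser uses; `classifyOutpostResource` is a hypothetical helper, not SDK API:

```go
package main

import (
	"fmt"
	"strings"
)

// classifyOutpostResource sketches the validation order used above: first the
// outpost id must be present, then a resource type and child resource must
// follow, and only "accesspoint" and "bucket" types are recognized.
func classifyOutpostResource(resParts []string) (outpostID, resType, name string, err error) {
	if len(resParts) == 0 || strings.TrimSpace(resParts[0]) == "" {
		return "", "", "", fmt.Errorf("outpost resource-id not set")
	}
	if len(resParts) < 3 {
		return "", "", "", fmt.Errorf("incomplete outpost resource type")
	}
	switch resParts[1] {
	case "accesspoint", "bucket":
		return strings.TrimSpace(resParts[0]), resParts[1], resParts[2], nil
	}
	return "", "", "", fmt.Errorf("unknown resource set for outpost ARN")
}

func main() {
	id, typ, name, _ := classifyOutpostResource([]string{"op-1234567890123456", "bucket", "mybucket"})
	fmt.Println(id, typ, name)
}
```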


@ -0,0 +1,15 @@
package arn
// S3ObjectLambdaARN represents an ARN for the s3-object-lambda service
type S3ObjectLambdaARN interface {
Resource
isS3ObjectLambdasARN()
}
// S3ObjectLambdaAccessPointARN is an S3ObjectLambdaARN for the Access Point resource type
type S3ObjectLambdaAccessPointARN struct {
AccessPointARN
}
func (s S3ObjectLambdaAccessPointARN) isS3ObjectLambdasARN() {}


@ -0,0 +1,73 @@
package s3shared
import (
"context"
"fmt"
"github.com/aws/smithy-go/middleware"
"github.com/aws/aws-sdk-go-v2/aws/arn"
)
// ARNLookup is the initial middleware that looks up if an arn is provided.
// This middleware is responsible for fetching the ARN from an arnable field, and registering the ARN on
// middleware context. This middleware must be executed before input validation step or any other
// arn processing middleware.
type ARNLookup struct {
// GetARNValue takes in an input interface and returns a pointer to a string and a bool
GetARNValue func(interface{}) (*string, bool)
}
// ID for the middleware
func (m *ARNLookup) ID() string {
return "S3Shared:ARNLookup"
}
// HandleInitialize handles the behavior of this initialize step
func (m *ARNLookup) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
out middleware.InitializeOutput, metadata middleware.Metadata, err error,
) {
// check if GetARNValue is supported
if m.GetARNValue == nil {
return next.HandleInitialize(ctx, in)
}
// check if the input resource is an ARN; if not go to next
v, ok := m.GetARNValue(in.Parameters)
if !ok || v == nil || !arn.IsARN(*v) {
return next.HandleInitialize(ctx, in)
}
// if ARN process ResourceRequest and put it on ctx
av, err := arn.Parse(*v)
if err != nil {
return out, metadata, fmt.Errorf("error parsing arn: %w", err)
}
// set parsed arn on context
ctx = setARNResourceOnContext(ctx, av)
return next.HandleInitialize(ctx, in)
}
// arnResourceKey is the key set on context used to identify and retrieve an ARN resource
// if present on the context.
type arnResourceKey struct{}
// SetARNResourceOnContext sets the S3 ARN on the context.
//
// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
// to clear all stack values.
func setARNResourceOnContext(ctx context.Context, value arn.ARN) context.Context {
return middleware.WithStackValue(ctx, arnResourceKey{}, value)
}
// GetARNResourceFromContext returns an ARN from context and a bool indicating
// presence of ARN on ctx.
//
// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
// to clear all stack values.
func GetARNResourceFromContext(ctx context.Context) (arn.ARN, bool) {
v, ok := middleware.GetStackValue(ctx, arnResourceKey{}).(arn.ARN)
return v, ok
}


@ -0,0 +1,41 @@
package config
import "context"
// UseARNRegionProvider is an interface for retrieving external configuration value for UseARNRegion
type UseARNRegionProvider interface {
GetS3UseARNRegion(ctx context.Context) (value bool, found bool, err error)
}
// DisableMultiRegionAccessPointsProvider is an interface for retrieving external configuration value for DisableMultiRegionAccessPoints
type DisableMultiRegionAccessPointsProvider interface {
GetS3DisableMultiRegionAccessPoints(ctx context.Context) (value bool, found bool, err error)
}
// ResolveUseARNRegion extracts the first instance of a UseARNRegion from the config slice.
// Additionally returns a boolean to indicate if the value was found in provided configs, and error if one is encountered.
func ResolveUseARNRegion(ctx context.Context, configs []interface{}) (value bool, found bool, err error) {
for _, cfg := range configs {
if p, ok := cfg.(UseARNRegionProvider); ok {
value, found, err = p.GetS3UseARNRegion(ctx)
if err != nil || found {
break
}
}
}
return
}
// ResolveDisableMultiRegionAccessPoints extracts the first instance of a DisableMultiRegionAccessPoints from the config slice.
// Additionally returns a boolean to indicate if the value was found in provided configs, and error if one is encountered.
func ResolveDisableMultiRegionAccessPoints(ctx context.Context, configs []interface{}) (value bool, found bool, err error) {
for _, cfg := range configs {
if p, ok := cfg.(DisableMultiRegionAccessPointsProvider); ok {
value, found, err = p.GetS3DisableMultiRegionAccessPoints(ctx)
if err != nil || found {
break
}
}
}
return
}


@ -0,0 +1,183 @@
package s3shared
import (
"fmt"
"github.com/aws/aws-sdk-go-v2/service/internal/s3shared/arn"
)
// TODO: fix these error statements to be relevant to v2 sdk
const (
invalidARNErrorErrCode = "InvalidARNError"
configurationErrorErrCode = "ConfigurationError"
)
// InvalidARNError denotes the error for Invalid ARN
type InvalidARNError struct {
message string
resource arn.Resource
origErr error
}
// Error returns the InvalidARN error string
func (e InvalidARNError) Error() string {
var extra string
if e.resource != nil {
extra = "ARN: " + e.resource.String()
}
msg := invalidARNErrorErrCode + " : " + e.message
if extra != "" {
msg = msg + "\n\t" + extra
}
return msg
}
// Unwrap returns the original error wrapped by the InvalidARNError
func (e InvalidARNError) Unwrap() error {
return e.origErr
}
// NewInvalidARNError denotes invalid arn error
func NewInvalidARNError(resource arn.Resource, err error) InvalidARNError {
return InvalidARNError{
message: "invalid ARN",
origErr: err,
resource: resource,
}
}
// NewInvalidARNWithUnsupportedPartitionError denotes an ARN not supported for the target partition
func NewInvalidARNWithUnsupportedPartitionError(resource arn.Resource, err error) InvalidARNError {
return InvalidARNError{
message: "resource ARN not supported for the target ARN partition",
origErr: err,
resource: resource,
}
}
// NewInvalidARNWithFIPSError denotes an ARN not supported for a FIPS region
//
// Deprecated: FIPS will not appear in the ARN region component.
func NewInvalidARNWithFIPSError(resource arn.Resource, err error) InvalidARNError {
return InvalidARNError{
message: "resource ARN not supported for FIPS region",
resource: resource,
origErr: err,
}
}
// ConfigurationError is used to denote a client configuration error
type ConfigurationError struct {
message string
resource arn.Resource
clientPartitionID string
clientRegion string
origErr error
}
// Error returns the Configuration error string
func (e ConfigurationError) Error() string {
extra := fmt.Sprintf("ARN: %s, client partition: %s, client region: %s",
e.resource, e.clientPartitionID, e.clientRegion)
msg := configurationErrorErrCode + " : " + e.message
if extra != "" {
msg = msg + "\n\t" + extra
}
return msg
}
// Unwrap returns the original error wrapped by the ConfigurationError
func (e ConfigurationError) Unwrap() error {
return e.origErr
}
// NewClientPartitionMismatchError denotes a client partition mismatch error
func NewClientPartitionMismatchError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "client partition does not match provided ARN partition",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewClientRegionMismatchError denotes cross region access error
func NewClientRegionMismatchError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "client region does not match provided ARN region",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewFailedToResolveEndpointError denotes endpoint resolving error
func NewFailedToResolveEndpointError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "endpoint resolver failed to find an endpoint for the provided ARN region",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewClientConfiguredForFIPSError denotes client config error for unsupported cross region FIPS access
func NewClientConfiguredForFIPSError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "client configured for fips but cross-region resource ARN provided",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewFIPSConfigurationError denotes a configuration error when a client or request is configured for FIPS
func NewFIPSConfigurationError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "use of ARN is not supported when client or request is configured for FIPS",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewClientConfiguredForAccelerateError denotes client config error for unsupported S3 accelerate
func NewClientConfiguredForAccelerateError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "client configured for S3 Accelerate but is not supported with resource ARN",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewClientConfiguredForCrossRegionFIPSError denotes client config error for unsupported cross region FIPS request
func NewClientConfiguredForCrossRegionFIPSError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "client configured for FIPS with cross-region enabled but is not supported with cross-region resource ARN",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}
// NewClientConfiguredForDualStackError denotes client config error for unsupported S3 Dual-stack
func NewClientConfiguredForDualStackError(resource arn.Resource, clientPartitionID, clientRegion string, err error) ConfigurationError {
return ConfigurationError{
message: "client configured for S3 Dual-stack but is not supported with resource ARN",
origErr: err,
resource: resource,
clientPartitionID: clientPartitionID,
clientRegion: clientRegion,
}
}


@ -0,0 +1,6 @@
// Code generated by internal/repotools/cmd/updatemodulemeta DO NOT EDIT.
package s3shared
// goModuleVersion is the tagged release for this module
const goModuleVersion = "1.17.3"


@ -0,0 +1,29 @@
package s3shared
import (
"github.com/aws/smithy-go/middleware"
)
// hostID is used to retrieve host id from response metadata
type hostID struct {
}
// SetHostIDMetadata sets the provided host id over middleware metadata
func SetHostIDMetadata(metadata *middleware.Metadata, id string) {
metadata.Set(hostID{}, id)
}
// GetHostIDMetadata retrieves the host id from middleware metadata
// returns host id as string along with a boolean indicating presence of
// hostId on middleware metadata.
func GetHostIDMetadata(metadata middleware.Metadata) (string, bool) {
if !metadata.Has(hostID{}) {
return "", false
}
v, ok := metadata.Get(hostID{}).(string)
if !ok {
return "", true
}
return v, true
}


@ -0,0 +1,28 @@
package s3shared
import (
"context"
"github.com/aws/smithy-go/middleware"
)
// clonedInputKey used to denote if request input was cloned.
type clonedInputKey struct{}
// SetClonedInputKey sets a key on context to denote input was cloned previously.
//
// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
// to clear all stack values.
func SetClonedInputKey(ctx context.Context, value bool) context.Context {
return middleware.WithStackValue(ctx, clonedInputKey{}, value)
}
// IsClonedInput retrieves if context key for cloned input was set.
// If set, we can infer that the request input was cloned previously.
//
// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
// to clear all stack values.
func IsClonedInput(ctx context.Context) bool {
v, _ := middleware.GetStackValue(ctx, clonedInputKey{}).(bool)
return v
}


@ -0,0 +1,52 @@
package s3shared
import (
"context"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
const metadataRetrieverID = "S3MetadataRetriever"
// AddMetadataRetrieverMiddleware adds request id, host id retriever middleware
func AddMetadataRetrieverMiddleware(stack *middleware.Stack) error {
// add metadata retriever middleware before operation deserializers so that it can retrieve metadata such as
// host id, request id from response header returned by operation deserializers
return stack.Deserialize.Insert(&metadataRetriever{}, "OperationDeserializer", middleware.Before)
}
type metadataRetriever struct {
}
// ID returns the middleware identifier
func (m *metadataRetriever) ID() string {
return metadataRetrieverID
}
func (m *metadataRetriever) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
resp, ok := out.RawResponse.(*smithyhttp.Response)
if !ok {
// No raw response to wrap with.
return out, metadata, err
}
// check for header for Request id
if v := resp.Header.Get("X-Amz-Request-Id"); len(v) != 0 {
// set reqID on metadata for successful responses.
awsmiddleware.SetRequestIDMetadata(&metadata, v)
}
// look up host-id
if v := resp.Header.Get("X-Amz-Id-2"); len(v) != 0 {
// set host id on metadata for successful responses.
SetHostIDMetadata(&metadata, v)
}
return out, metadata, err
}


@ -0,0 +1,77 @@
package s3shared
import (
"fmt"
"strings"
awsarn "github.com/aws/aws-sdk-go-v2/aws/arn"
"github.com/aws/aws-sdk-go-v2/service/internal/s3shared/arn"
)
// ResourceRequest represents an ARN resource and api request metadata
type ResourceRequest struct {
Resource arn.Resource
// RequestRegion is the region configured on the request config
RequestRegion string
// SigningRegion is the signing region resolved for the request
SigningRegion string
// PartitionID is the resolved partition id for the provided request region
PartitionID string
// UseARNRegion indicates if client should use the region provided in an ARN resource
UseARNRegion bool
// UseFIPS indicates if the client is configured for FIPS
UseFIPS bool
}
// ARN returns the resource ARN
func (r ResourceRequest) ARN() awsarn.ARN {
return r.Resource.GetARN()
}
// ResourceConfiguredForFIPS returns true if the resource ARN's region is a FIPS pseudo-region
//
// Deprecated: FIPS will not be present in the ARN region
func (r ResourceRequest) ResourceConfiguredForFIPS() bool {
return IsFIPS(r.ARN().Region)
}
// AllowCrossRegion returns a bool value to denote if S3UseARNRegion flag is set
func (r ResourceRequest) AllowCrossRegion() bool {
return r.UseARNRegion
}
// IsCrossPartition returns true if the request is configured for a region of a different partition
// than the partition that the resource ARN region resolves to. IsCrossPartition will not return an
// error if the request is not configured with a specific partition id. This might happen if the
// customer provides a custom endpoint url, but does not associate a partition id with it.
func (r ResourceRequest) IsCrossPartition() (bool, error) {
rv := r.PartitionID
if len(rv) == 0 {
return false, nil
}
av := r.Resource.GetARN().Partition
if len(av) == 0 {
return false, fmt.Errorf("no partition id for provided ARN")
}
return !strings.EqualFold(rv, av), nil
}
// IsCrossRegion returns true if the request signing region is not the same as the ARN region
func (r ResourceRequest) IsCrossRegion() bool {
v := r.SigningRegion
return !strings.EqualFold(v, r.Resource.GetARN().Region)
}
// IsFIPS returns true if region is a fips pseudo-region
//
// Deprecated: FIPS should be specified via EndpointOptions.
func IsFIPS(region string) bool {
return strings.HasPrefix(region, "fips-") ||
strings.HasSuffix(region, "-fips")
}


@ -0,0 +1,33 @@
package s3shared
import (
"errors"
"fmt"
awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
)
// ResponseError provides the HTTP centric error type wrapping the underlying error
// with the HTTP response value and the deserialized RequestID.
type ResponseError struct {
*awshttp.ResponseError
// HostID associated with response error
HostID string
}
// ServiceHostID returns the host id associated with Response Error
func (e *ResponseError) ServiceHostID() string { return e.HostID }
// Error returns the formatted error
func (e *ResponseError) Error() string {
return fmt.Sprintf(
"https response error StatusCode: %d, RequestID: %s, HostID: %s, %v",
e.Response.StatusCode, e.RequestID, e.HostID, e.Err)
}
// As populates target and returns true if the type of target is an error type that
// the ResponseError embeds (e.g. the S3 HTTP ResponseError)
func (e *ResponseError) As(target interface{}) bool {
return errors.As(e.ResponseError, target)
}


@ -0,0 +1,60 @@
package s3shared
import (
"context"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
// AddResponseErrorMiddleware adds response error wrapper middleware
func AddResponseErrorMiddleware(stack *middleware.Stack) error {
// add error wrapper middleware before request id retriever middleware so that it can wrap the error response
// returned by operation deserializers
return stack.Deserialize.Insert(&errorWrapper{}, metadataRetrieverID, middleware.Before)
}
type errorWrapper struct {
}
// ID returns the middleware identifier
func (m *errorWrapper) ID() string {
return "ResponseErrorWrapper"
}
func (m *errorWrapper) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
) {
out, metadata, err = next.HandleDeserialize(ctx, in)
if err == nil {
// Nothing to do when there is no error.
return out, metadata, err
}
resp, ok := out.RawResponse.(*smithyhttp.Response)
if !ok {
// No raw response to wrap with.
return out, metadata, err
}
// look for request id in metadata
reqID, _ := awsmiddleware.GetRequestIDMetadata(metadata)
// look for host id in metadata
hostID, _ := GetHostIDMetadata(metadata)
// Wrap the returned smithy error with the request id retrieved from the metadata
err = &ResponseError{
ResponseError: &awshttp.ResponseError{
ResponseError: &smithyhttp.ResponseError{
Response: resp,
Err: err,
},
RequestID: reqID,
},
HostID: hostID,
}
return out, metadata, err
}


@ -0,0 +1,54 @@
package s3shared
import (
"context"
"fmt"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
)
const s3100ContinueID = "S3100Continue"
const default100ContinueThresholdBytes int64 = 1024 * 1024 * 2
// Add100Continue adds a middleware that sets the Expect: 100-continue header on S3 client HTTP PUT
// requests larger than 2MB, or with unknown-size streaming bodies, during the operation build step
func Add100Continue(stack *middleware.Stack, continueHeaderThresholdBytes int64) error {
return stack.Build.Add(&s3100Continue{
continueHeaderThresholdBytes: continueHeaderThresholdBytes,
}, middleware.After)
}
type s3100Continue struct {
continueHeaderThresholdBytes int64
}
// ID returns the middleware identifier
func (m *s3100Continue) ID() string {
return s3100ContinueID
}
func (m *s3100Continue) HandleBuild(
ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler,
) (
out middleware.BuildOutput, metadata middleware.Metadata, err error,
) {
sizeLimit := default100ContinueThresholdBytes
switch {
case m.continueHeaderThresholdBytes == -1:
return next.HandleBuild(ctx, in)
case m.continueHeaderThresholdBytes > 0:
sizeLimit = m.continueHeaderThresholdBytes
default:
}
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, fmt.Errorf("unknown request type %T", req)
}
if req.ContentLength == -1 || (req.ContentLength == 0 && req.Body != nil) || req.ContentLength >= sizeLimit {
req.Header.Set("Expect", "100-continue")
}
return next.HandleBuild(ctx, in)
}
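The threshold handling in `HandleBuild` above can be distilled into a pure function: `-1` disables the header, `0` keeps the 2 MB default, any positive value overrides it, and a content length of `-1` means an unknown-length streaming body. A sketch for illustration (`wantsContinue` is not an SDK function):

```go
package main

import "fmt"

const defaultThreshold int64 = 1024 * 1024 * 2 // 2 MB, matching the middleware default

// wantsContinue reports whether the Expect: 100-continue header would be set,
// mirroring the switch and the final condition in HandleBuild above.
func wantsContinue(contentLength, threshold int64, hasBody bool) bool {
	if threshold == -1 {
		return false // header explicitly disabled
	}
	limit := defaultThreshold
	if threshold > 0 {
		limit = threshold
	}
	// unknown length, zero length with a body, or large body all get the header
	return contentLength == -1 || (contentLength == 0 && hasBody) || contentLength >= limit
}

func main() {
	fmt.Println(wantsContinue(3<<20, 0, true)) // 3 MB body over the default threshold
	fmt.Println(wantsContinue(1024, 0, true))  // small body, no header
	fmt.Println(wantsContinue(-1, 0, true))    // unknown-size streaming body
}
```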


@ -0,0 +1,78 @@
package s3shared
import (
"context"
"fmt"
"strings"
"github.com/aws/smithy-go/middleware"
smithyhttp "github.com/aws/smithy-go/transport/http"
awsmiddle "github.com/aws/aws-sdk-go-v2/aws/middleware"
)
// EnableDualstack represents middleware struct for enabling dualstack support
//
// Deprecated: See EndpointResolverOptions' UseDualStackEndpoint support
type EnableDualstack struct {
// UseDualstack indicates if dualstack endpoint resolving is to be enabled
UseDualstack bool
// DefaultServiceID is the service id prefix used in endpoint resolving;
// by default the service id is 's3' for the s3 service and 's3-control' for the s3control service.
DefaultServiceID string
}
// ID returns the middleware ID.
func (*EnableDualstack) ID() string {
return "EnableDualstack"
}
// HandleSerialize handles serializer middleware behavior when middleware is executed
func (u *EnableDualstack) HandleSerialize(
ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler,
) (
out middleware.SerializeOutput, metadata middleware.Metadata, err error,
) {
// check for host name immutable property
if smithyhttp.GetHostnameImmutable(ctx) {
return next.HandleSerialize(ctx, in)
}
serviceID := awsmiddle.GetServiceID(ctx)
// s3-control may be represented as `S3 Control` in the model
if serviceID == "S3 Control" {
serviceID = "s3-control"
}
if len(serviceID) == 0 {
// default service id
serviceID = u.DefaultServiceID
}
req, ok := in.Request.(*smithyhttp.Request)
if !ok {
return out, metadata, fmt.Errorf("unknown request type %T", req)
}
if u.UseDualstack {
parts := strings.Split(req.URL.Host, ".")
if len(parts) < 3 {
return out, metadata, fmt.Errorf("unable to update endpoint host for dualstack, hostname invalid, %s", req.URL.Host)
}
for i := 0; i+1 < len(parts); i++ {
if strings.EqualFold(parts[i], serviceID) {
parts[i] = parts[i] + ".dualstack"
break
}
}
// construct the url host
req.URL.Host = strings.Join(parts, ".")
}
return next.HandleSerialize(ctx, in)
}
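The deprecated dualstack rewrite above boils down to finding the service-id label in the hostname and appending `.dualstack` to it. A self-contained sketch of that host transformation (`dualstackHost` is illustrative, not an SDK export):

```go
package main

import (
	"fmt"
	"strings"
)

// dualstackHost mirrors the loop in HandleSerialize above: it splits the host
// into dot-separated labels, appends ".dualstack" to the first label matching
// the service id (never the last two, which are the domain), and rejoins.
func dualstackHost(host, serviceID string) (string, error) {
	parts := strings.Split(host, ".")
	if len(parts) < 3 {
		return "", fmt.Errorf("unable to update endpoint host for dualstack, hostname invalid, %s", host)
	}
	for i := 0; i+1 < len(parts); i++ {
		if strings.EqualFold(parts[i], serviceID) {
			parts[i] = parts[i] + ".dualstack"
			break
		}
	}
	return strings.Join(parts, "."), nil
}

func main() {
	host, _ := dualstackHost("s3.us-west-2.amazonaws.com", "s3")
	fmt.Println(host)
}
```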


@ -0,0 +1,89 @@
package s3shared
import (
"encoding/xml"
"fmt"
"io"
"net/http"
"strings"
)
// ErrorComponents represents the error response fields
// that will be deserialized from an xml error response body
type ErrorComponents struct {
Code string `xml:"Code"`
Message string `xml:"Message"`
RequestID string `xml:"RequestId"`
HostID string `xml:"HostId"`
}
// GetUnwrappedErrorResponseComponents returns the error fields from an xml error response body
func GetUnwrappedErrorResponseComponents(r io.Reader) (ErrorComponents, error) {
var errComponents ErrorComponents
if err := xml.NewDecoder(r).Decode(&errComponents); err != nil && err != io.EOF {
return ErrorComponents{}, fmt.Errorf("error while deserializing xml error response : %w", err)
}
return errComponents, nil
}
// GetWrappedErrorResponseComponents returns the error fields from an xml error response body
// in which error code, and message are wrapped by a <Error> tag
func GetWrappedErrorResponseComponents(r io.Reader) (ErrorComponents, error) {
var errComponents struct {
Code string `xml:"Error>Code"`
Message string `xml:"Error>Message"`
RequestID string `xml:"RequestId"`
HostID string `xml:"HostId"`
}
if err := xml.NewDecoder(r).Decode(&errComponents); err != nil && err != io.EOF {
return ErrorComponents{}, fmt.Errorf("error while deserializing xml error response : %w", err)
}
return ErrorComponents{
Code: errComponents.Code,
Message: errComponents.Message,
RequestID: errComponents.RequestID,
HostID: errComponents.HostID,
}, nil
}
// GetErrorResponseComponents retrieves error components according to passed in options
func GetErrorResponseComponents(r io.Reader, options ErrorResponseDeserializerOptions) (ErrorComponents, error) {
var errComponents ErrorComponents
var err error
if options.IsWrappedWithErrorTag {
errComponents, err = GetWrappedErrorResponseComponents(r)
} else {
errComponents, err = GetUnwrappedErrorResponseComponents(r)
}
if err != nil {
return ErrorComponents{}, err
}
// If an error code or message is not retrieved, it is derived from the http status code;
// e.g. for the S3 service, we derive the error code and message if none is found
if options.UseStatusCode && len(errComponents.Code) == 0 &&
len(errComponents.Message) == 0 {
// derive code and message from status code
statusText := http.StatusText(options.StatusCode)
errComponents.Code = strings.Replace(statusText, " ", "", -1)
errComponents.Message = statusText
}
return errComponents, nil
}
// ErrorResponseDeserializerOptions represents error response deserializer options for s3 and s3-control service
type ErrorResponseDeserializerOptions struct {
// UseStatusCode denotes if status code should be used to retrieve error code, msg
UseStatusCode bool
// StatusCode is status code of error response
StatusCode int
// IsWrappedWithErrorTag denotes if the error response's code and message are wrapped within an
// additional <Error> tag
IsWrappedWithErrorTag bool
}
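The wrapped shape handled by `GetWrappedErrorResponseComponents` uses `encoding/xml`'s nested `a>b` field-tag paths: `Code` and `Message` sit inside an `<Error>` element while `RequestId` is a sibling at the top level. A runnable sketch with a made-up response body (the struct mirrors the anonymous one above):

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// wrappedError mirrors the anonymous struct in GetWrappedErrorResponseComponents:
// the "Error>Code" tag path reaches into the nested <Error> element.
type wrappedError struct {
	Code      string `xml:"Error>Code"`
	Message   string `xml:"Error>Message"`
	RequestID string `xml:"RequestId"`
	HostID    string `xml:"HostId"`
}

func decodeWrapped(body string) (wrappedError, error) {
	var e wrappedError
	err := xml.NewDecoder(strings.NewReader(body)).Decode(&e)
	return e, err
}

func main() {
	// Illustrative error body; real S3 Control responses have this wrapped shape.
	body := `<ErrorResponse><Error><Code>NoSuchBucket</Code><Message>bucket not found</Message></Error><RequestId>abc123</RequestId></ErrorResponse>`
	e, err := decodeWrapped(body)
	fmt.Println(e.Code, e.Message, e.RequestID, err)
}
```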