cmd/stdiscosrv: New discovery server (fixes #4618)

This is a new revision of the discovery server. Relevant changes and
non-changes:

- Protocol towards clients is unchanged.

- Recommended large scale design is still to be deployed behind nginx (I
  tested, and nginx is still a lot faster at terminating TLS).

- Database backend is LevelDB again, and LevelDB only. It scales well
  enough, is easy to set up, and leaves no separate backend for us to
  take care of.

- Server supports replication. This is a simple TCP channel - protect it
  with a firewall when deploying over the internet. (We deploy this within
  the same datacenter, behind a firewall.) Any incoming client announces
  are sent over the replication channel(s) to the other peer discosrvs.
  Incoming replication changes are applied to the database as if they came
  from clients, but without the TLS/certificate overhead.

- Metrics are exposed using the prometheus library, when enabled.

- The database values and replication protocol are protobuf, because JSON
  turned out to be quite CPU intensive when I benchmarked it.

- The "Retry-After" value for failed lookups gets slowly increased from
  a default of 120 seconds, by 5 seconds for each failed lookup,
  independently by each discosrv. This lowers the query load over time for
  clients that are never seen. The Retry-After maxes out at 3600 after a
  couple of weeks of this increase. The number of failed lookups is
  stored in the database, now and then (avoiding making each lookup a
  database put).
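
A minimal sketch of the backoff, using hypothetical names (the actual
stdiscosrv implementation may differ):

    const (
        notFoundRetryMin = 120  // seconds; initial Retry-After
        notFoundRetryInc = 5    // seconds added per failed lookup
        notFoundRetryMax = 3600 // upper bound
    )

    // retryAfterSeconds returns the Retry-After value for a device
    // key that has had `misses` failed lookups so far.
    func retryAfterSeconds(misses int) int {
        s := notFoundRetryMin + misses*notFoundRetryInc
        if s > notFoundRetryMax {
            s = notFoundRetryMax
        }
        return s
    }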

All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
Jakob Borg
2018-01-14 08:52:31 +00:00
parent 341b9691a7
commit 916ec63af6
864 changed files with 216825 additions and 64540 deletions

vendor/github.com/minio/minio-go/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/minio/minio-go/api-compose-object.go generated vendored Normal file

@@ -0,0 +1,629 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"encoding/base64"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/minio/minio-go/pkg/s3utils"
)
// SSEInfo - represents Server-Side-Encryption parameters specified by
// a user.
type SSEInfo struct {
key []byte
algo string
}
// NewSSEInfo - specifies (binary or un-encoded) encryption key and
// algorithm name. If algo is empty, it defaults to "AES256". Ref:
// https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
func NewSSEInfo(key []byte, algo string) SSEInfo {
if algo == "" {
algo = "AES256"
}
return SSEInfo{key, algo}
}
// internal method that computes SSE-C headers
func (s *SSEInfo) getSSEHeaders(isCopySource bool) map[string]string {
if s == nil {
return nil
}
cs := ""
if isCopySource {
cs = "copy-source-"
}
return map[string]string{
"x-amz-" + cs + "server-side-encryption-customer-algorithm": s.algo,
"x-amz-" + cs + "server-side-encryption-customer-key": base64.StdEncoding.EncodeToString(s.key),
"x-amz-" + cs + "server-side-encryption-customer-key-MD5": sumMD5Base64(s.key),
}
}
// GetSSEHeaders - computes and returns headers for SSE-C as key-value
// pairs. They can be set as metadata in PutObject* requests (for
// encryption) or be set as request headers in `Core.GetObject` (for
// decryption).
func (s *SSEInfo) GetSSEHeaders() map[string]string {
return s.getSSEHeaders(false)
}
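// Illustrative usage (a sketch, not part of the upstream file): the
// SSE-C headers can be attached to an upload as user metadata. A
// configured Client c and a 32-byte key are assumed:
//
//	sse := NewSSEInfo(key, "") // algorithm defaults to AES256
//	opts := PutObjectOptions{UserMetadata: sse.GetSSEHeaders()}
//	n, err := c.PutObject("mybucket", "myobject", reader, size, opts)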
// DestinationInfo - type with information about the object to be
// created via server-side copy requests, using the Compose API.
type DestinationInfo struct {
bucket, object string
// key for encrypting destination
encryption *SSEInfo
// if no user-metadata is provided, it is copied from source
// (when there is only one source object in the compose
// request)
userMetadata map[string]string
}
// NewDestinationInfo - creates a compose-object/copy-source
// destination info object.
//
// `encryptSSEC` is the key info for server-side-encryption with customer
// provided key. If it is nil, no encryption is performed.
//
// `userMeta` is the user-metadata key-value pairs to be set on the
// destination. The keys are automatically prefixed with `x-amz-meta-`
// if needed. If nil is passed, and if only a single source (of any
// size) is provided in the ComposeObject call, then metadata from the
// source is copied to the destination.
func NewDestinationInfo(bucket, object string, encryptSSEC *SSEInfo,
userMeta map[string]string) (d DestinationInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucket); err != nil {
return d, err
}
if err = s3utils.CheckValidObjectName(object); err != nil {
return d, err
}
// Process custom-metadata to remove a `x-amz-meta-` prefix if
// present and validate that keys are distinct (after this
// prefix removal).
m := make(map[string]string)
for k, v := range userMeta {
if strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") {
k = k[len("x-amz-meta-"):]
}
if _, ok := m[k]; ok {
return d, ErrInvalidArgument(fmt.Sprintf("Cannot add both %s and x-amz-meta-%s keys as custom metadata", k, k))
}
m[k] = v
}
return DestinationInfo{
bucket: bucket,
object: object,
encryption: encryptSSEC,
userMetadata: m,
}, nil
}
// getUserMetaHeadersMap - construct appropriate key-value pairs to send
// as headers from metadata map to pass into copy-object request. For
// single part copy-object (i.e. non-multipart object), enable the
// withCopyDirectiveHeader to set the `x-amz-metadata-directive` to
// `REPLACE`, so that metadata headers from the source are not copied
// over.
func (d *DestinationInfo) getUserMetaHeadersMap(withCopyDirectiveHeader bool) map[string]string {
if len(d.userMetadata) == 0 {
return nil
}
r := make(map[string]string)
if withCopyDirectiveHeader {
r["x-amz-metadata-directive"] = "REPLACE"
}
for k, v := range d.userMetadata {
r["x-amz-meta-"+k] = v
}
return r
}
// SourceInfo - represents a source object to be copied, using
// server-side copying APIs.
type SourceInfo struct {
bucket, object string
start, end int64
decryptKey *SSEInfo
// Headers to send with the upload-part-copy request involving
// this source object.
Headers http.Header
}
// NewSourceInfo - create a compose-object/copy-object source info
// object.
//
// `decryptSSEC` is the decryption key using server-side-encryption
// with customer provided key. It may be nil if the source is not
// encrypted.
func NewSourceInfo(bucket, object string, decryptSSEC *SSEInfo) SourceInfo {
r := SourceInfo{
bucket: bucket,
object: object,
start: -1, // range is unspecified by default
decryptKey: decryptSSEC,
Headers: make(http.Header),
}
// Set the source header
r.Headers.Set("x-amz-copy-source", s3utils.EncodePath(bucket+"/"+object))
// Assemble decryption headers for upload-part-copy request
for k, v := range decryptSSEC.getSSEHeaders(true) {
r.Headers.Set(k, v)
}
return r
}
// SetRange - Set the start and end offset of the source object to be
// copied. If this method is not called, the whole source object is
// copied.
func (s *SourceInfo) SetRange(start, end int64) error {
if start > end || start < 0 {
return ErrInvalidArgument("start must be non-negative, and start must be at most end.")
}
// Note that 0 <= start <= end
s.start, s.end = start, end
return nil
}
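// For example (illustrative): restrict the copy to the first MiB of
// the source object:
//
//	if err := src.SetRange(0, 1<<20-1); err != nil {
//		// handle the invalid range
//	}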
// SetMatchETagCond - Set ETag match condition. The object is copied
// only if the etag of the source matches the value given here.
func (s *SourceInfo) SetMatchETagCond(etag string) error {
if etag == "" {
return ErrInvalidArgument("ETag cannot be empty.")
}
s.Headers.Set("x-amz-copy-source-if-match", etag)
return nil
}
// SetMatchETagExceptCond - Set the ETag match exception
// condition. The object is copied only if the etag of the source is
// not the value given here.
func (s *SourceInfo) SetMatchETagExceptCond(etag string) error {
if etag == "" {
return ErrInvalidArgument("ETag cannot be empty.")
}
s.Headers.Set("x-amz-copy-source-if-none-match", etag)
return nil
}
// SetModifiedSinceCond - Set the modified since condition.
func (s *SourceInfo) SetModifiedSinceCond(modTime time.Time) error {
if modTime.IsZero() {
return ErrInvalidArgument("Input time cannot be 0.")
}
s.Headers.Set("x-amz-copy-source-if-modified-since", modTime.Format(http.TimeFormat))
return nil
}
// SetUnmodifiedSinceCond - Set the unmodified since condition.
func (s *SourceInfo) SetUnmodifiedSinceCond(modTime time.Time) error {
if modTime.IsZero() {
return ErrInvalidArgument("Input time cannot be 0.")
}
s.Headers.Set("x-amz-copy-source-if-unmodified-since", modTime.Format(http.TimeFormat))
return nil
}
// Helper to fetch size and etag of an object using a StatObject call.
func (s *SourceInfo) getProps(c Client) (size int64, etag string, userMeta map[string]string, err error) {
// Get object info - need size and etag here. Also, decryption
// headers are added to the stat request if given.
var objInfo ObjectInfo
opts := StatObjectOptions{}
for k, v := range s.decryptKey.getSSEHeaders(false) {
opts.Set(k, v)
}
objInfo, err = c.statObject(context.Background(), s.bucket, s.object, opts)
if err != nil {
err = ErrInvalidArgument(fmt.Sprintf("Could not stat object - %s/%s: %v", s.bucket, s.object, err))
} else {
size = objInfo.Size
etag = objInfo.ETag
userMeta = make(map[string]string)
for k, v := range objInfo.Metadata {
if strings.HasPrefix(k, "x-amz-meta-") {
if len(v) > 0 {
userMeta[k] = v[0]
}
}
}
}
return
}
// Low level implementation of CopyObject API, supports only up to 5GiB worth of copy.
func (c Client) copyObjectDo(ctx context.Context, srcBucket, srcObject, destBucket, destObject string,
metadata map[string]string) (ObjectInfo, error) {
// Build headers.
headers := make(http.Header)
// Set all the metadata headers.
for k, v := range metadata {
headers.Set(k, v)
}
// Set the source header
headers.Set("x-amz-copy-source", s3utils.EncodePath(srcBucket+"/"+srcObject))
// Send upload-part-copy request
resp, err := c.executeMethod(ctx, "PUT", requestMetadata{
bucketName: destBucket,
objectName: destObject,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return ObjectInfo{}, err
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return ObjectInfo{}, httpRespToErrorResponse(resp, srcBucket, srcObject)
}
cpObjRes := copyObjectResult{}
err = xmlDecoder(resp.Body, &cpObjRes)
if err != nil {
return ObjectInfo{}, err
}
objInfo := ObjectInfo{
Key: destObject,
ETag: strings.Trim(cpObjRes.ETag, "\""),
LastModified: cpObjRes.LastModified,
}
return objInfo, nil
}
func (c Client) copyObjectPartDo(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, uploadID string,
partID int, startOffset int64, length int64, metadata map[string]string) (p CompletePart, err error) {
headers := make(http.Header)
// Set source
headers.Set("x-amz-copy-source", s3utils.EncodePath(srcBucket+"/"+srcObject))
if startOffset < 0 {
return p, ErrInvalidArgument("startOffset must be non-negative")
}
if length >= 0 {
headers.Set("x-amz-copy-source-range", fmt.Sprintf("bytes=%d-%d", startOffset, startOffset+length-1))
}
for k, v := range metadata {
headers.Set(k, v)
}
queryValues := make(url.Values)
queryValues.Set("partNumber", strconv.Itoa(partID))
queryValues.Set("uploadId", uploadID)
resp, err := c.executeMethod(ctx, "PUT", requestMetadata{
bucketName: destBucket,
objectName: destObject,
customHeader: headers,
queryValues: queryValues,
})
defer closeResponse(resp)
if err != nil {
return
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return p, httpRespToErrorResponse(resp, destBucket, destObject)
}
// Decode copy-part response on success.
cpObjRes := copyObjectResult{}
err = xmlDecoder(resp.Body, &cpObjRes)
if err != nil {
return p, err
}
p.PartNumber, p.ETag = partID, cpObjRes.ETag
return p, nil
}
// uploadPartCopy - helper function to create a part in a multipart
// upload via an upload-part-copy request
// https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
func (c Client) uploadPartCopy(ctx context.Context, bucket, object, uploadID string, partNumber int,
headers http.Header) (p CompletePart, err error) {
// Build query parameters
urlValues := make(url.Values)
urlValues.Set("partNumber", strconv.Itoa(partNumber))
urlValues.Set("uploadId", uploadID)
// Send upload-part-copy request
resp, err := c.executeMethod(ctx, "PUT", requestMetadata{
bucketName: bucket,
objectName: object,
customHeader: headers,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return p, err
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return p, httpRespToErrorResponse(resp, bucket, object)
}
// Decode copy-part response on success.
cpObjRes := copyObjectResult{}
err = xmlDecoder(resp.Body, &cpObjRes)
if err != nil {
return p, err
}
p.PartNumber, p.ETag = partNumber, cpObjRes.ETag
return p, nil
}
// ComposeObject - creates an object using server-side copying of
// existing objects. It takes a list of source objects (with optional
// offsets) and concatenates them into a new object using only
// server-side copying operations.
func (c Client) ComposeObject(dst DestinationInfo, srcs []SourceInfo) error {
if len(srcs) < 1 || len(srcs) > maxPartsCount {
return ErrInvalidArgument("There must be as least one and up to 10000 source objects.")
}
ctx := context.Background()
srcSizes := make([]int64, len(srcs))
var totalSize, size, totalParts int64
var srcUserMeta map[string]string
var etag string
var err error
for i, src := range srcs {
size, etag, srcUserMeta, err = src.getProps(c)
if err != nil {
return err
}
// Error out if client side encryption is used in this source object when
// more than one source object is given.
if len(srcs) > 1 && src.Headers.Get("x-amz-meta-x-amz-key") != "" {
return ErrInvalidArgument(
fmt.Sprintf("Client side encryption is used in source object %s/%s", src.bucket, src.object))
}
// Since we did a HEAD to get size, we use the ETag
// value to make sure the object has not changed by
// the time we perform the copy. This is done, only if
// the user has not set their own ETag match
// condition.
if src.Headers.Get("x-amz-copy-source-if-match") == "" {
src.SetMatchETagCond(etag)
}
// Check if a segment is specified, and if so, is the
// segment within object bounds?
if src.start != -1 {
// Since range is specified,
// 0 <= src.start <= src.end
// so only invalid case to check is:
if src.end >= size {
return ErrInvalidArgument(
fmt.Sprintf("SourceInfo %d has invalid segment-to-copy [%d, %d] (size is %d)",
i, src.start, src.end, size))
}
size = src.end - src.start + 1
}
// Only the last source may be less than `absMinPartSize`
if size < absMinPartSize && i < len(srcs)-1 {
return ErrInvalidArgument(
fmt.Sprintf("SourceInfo %d is too small (%d) and it is not the last part", i, size))
}
// Is data to copy too large?
totalSize += size
if totalSize > maxMultipartPutObjectSize {
return ErrInvalidArgument(fmt.Sprintf("Cannot compose an object of size %d (> 5TiB)", totalSize))
}
// record source size
srcSizes[i] = size
// calculate parts needed for current source
totalParts += partsRequired(size)
// Do we need more parts than we are allowed?
if totalParts > maxPartsCount {
return ErrInvalidArgument(fmt.Sprintf(
"Your proposed compose object requires more than %d parts", maxPartsCount))
}
}
// Single source object case (i.e. when only one source is
// involved, it is being copied wholly and at most 5GiB in
// size).
if totalParts == 1 && srcs[0].start == -1 && totalSize <= maxPartSize {
h := srcs[0].Headers
// Add destination encryption headers
for k, v := range dst.encryption.getSSEHeaders(false) {
h.Set(k, v)
}
// If no user metadata is specified (and so, the
// for-loop below is not entered), metadata from the
// source is copied to the destination (due to
// single-part copy-object PUT request behaviour).
for k, v := range dst.getUserMetaHeadersMap(true) {
h.Set(k, v)
}
// Send copy request
resp, err := c.executeMethod(ctx, "PUT", requestMetadata{
bucketName: dst.bucket,
objectName: dst.object,
customHeader: h,
})
defer closeResponse(resp)
if err != nil {
return err
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, dst.bucket, dst.object)
}
// Return nil on success.
return nil
}
// Now, handle multipart-copy cases.
// 1. Initiate a new multipart upload.
// Set user-metadata on the destination object. If no user-metadata
// is specified and there is only one source, then (and only then)
// metadata from the source is copied.
userMeta := dst.getUserMetaHeadersMap(false)
metaMap := userMeta
if len(userMeta) == 0 && len(srcs) == 1 {
metaMap = srcUserMeta
}
metaHeaders := make(map[string]string)
for k, v := range metaMap {
metaHeaders[k] = v
}
uploadID, err := c.newUploadID(ctx, dst.bucket, dst.object, PutObjectOptions{UserMetadata: metaHeaders})
if err != nil {
return err
}
// 2. Perform copy part uploads
objParts := []CompletePart{}
partIndex := 1
for i, src := range srcs {
h := src.Headers
// Add destination encryption headers
for k, v := range dst.encryption.getSSEHeaders(false) {
h.Set(k, v)
}
// calculate start/end indices of parts after
// splitting.
startIdx, endIdx := calculateEvenSplits(srcSizes[i], src)
for j, start := range startIdx {
end := endIdx[j]
// Add (or reset) source range header for
// upload part copy request.
h.Set("x-amz-copy-source-range",
fmt.Sprintf("bytes=%d-%d", start, end))
// make upload-part-copy request
complPart, err := c.uploadPartCopy(ctx, dst.bucket,
dst.object, uploadID, partIndex, h)
if err != nil {
return err
}
objParts = append(objParts, complPart)
partIndex++
}
}
// 3. Make final complete-multipart request.
_, err = c.completeMultipartUpload(ctx, dst.bucket, dst.object, uploadID,
completeMultipartUpload{Parts: objParts})
if err != nil {
return err
}
return nil
}
// partsRequired is ceiling(size / copyPartSize)
func partsRequired(size int64) int64 {
r := size / copyPartSize
if size%copyPartSize > 0 {
r++
}
return r
}
// calculateEvenSplits - computes splits for a source and returns
// start and end index slices. Splits happen evenly to be sure that no
// part is less than 5MiB, as that could fail the multipart request if
// it is not the last part.
func calculateEvenSplits(size int64, src SourceInfo) (startIndex, endIndex []int64) {
if size == 0 {
return
}
reqParts := partsRequired(size)
startIndex = make([]int64, reqParts)
endIndex = make([]int64, reqParts)
// Compute number of required parts `k`, as:
//
// k = ceiling(size / copyPartSize)
//
// Now, distribute the `size` bytes in the source into
// k parts as evenly as possible:
//
// r parts sized (q+1) bytes, and
// (k - r) parts sized q bytes, where
//
// size = q * k + r (by simple division of size by k,
// so that 0 <= r < k)
//
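// Worked example (illustrative numbers): size = 10 and k = 3 gives
// q = 3, r = 1, i.e. one part of q+1 = 4 bytes followed by two parts
// of 3 bytes, covering the ranges [0, 3], [4, 6] and [7, 9].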
start := src.start
if start == -1 {
start = 0
}
quot, rem := size/reqParts, size%reqParts
nextStart := start
for j := int64(0); j < reqParts; j++ {
curPartSize := quot
if j < rem {
curPartSize++
}
cStart := nextStart
cEnd := cStart + curPartSize - 1
nextStart = cEnd + 1
startIndex[j], endIndex[j] = cStart, cEnd
}
return
}
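// Illustrative usage of ComposeObject (a sketch; the bucket and object
// names are hypothetical and a configured Client c is assumed):
// concatenate two source objects into one destination object using
// only server-side copying.
//
//	src1 := NewSourceInfo("srcbucket", "part1", nil)
//	src2 := NewSourceInfo("srcbucket", "part2", nil)
//	dst, err := NewDestinationInfo("dstbucket", "joined", nil, nil)
//	if err != nil {
//		// handle error
//	}
//	if err := c.ComposeObject(dst, []SourceInfo{src1, src2}); err != nil {
//		// handle error
//	}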

vendor/github.com/minio/minio-go/api-datatypes.go generated vendored Normal file

@@ -0,0 +1,84 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"net/http"
"time"
)
// BucketInfo container for bucket metadata.
type BucketInfo struct {
// The name of the bucket.
Name string `json:"name"`
// Date the bucket was created.
CreationDate time.Time `json:"creationDate"`
}
// ObjectInfo container for object metadata.
type ObjectInfo struct {
// An ETag is optionally set to md5sum of an object. In case of multipart objects,
// ETag is of the form MD5SUM-N where MD5SUM is md5sum of all individual md5sums of
// each parts concatenated into one string.
ETag string `json:"etag"`
Key string `json:"name"` // Name of the object
LastModified time.Time `json:"lastModified"` // Date and time the object was last modified.
Size int64 `json:"size"` // Size in bytes of the object.
ContentType string `json:"contentType"` // A standard MIME type describing the format of the object data.
// Collection of additional metadata on the object.
// eg: x-amz-meta-*, content-encoding etc.
Metadata http.Header `json:"metadata" xml:"-"`
// Owner name.
Owner struct {
DisplayName string `json:"name"`
ID string `json:"id"`
} `json:"owner"`
// The class of storage used to store the object.
StorageClass string `json:"storageClass"`
// Error
Err error `json:"-"`
}
// ObjectMultipartInfo container for multipart object metadata.
type ObjectMultipartInfo struct {
// Date and time at which the multipart upload was initiated.
Initiated time.Time `type:"timestamp" timestampFormat:"iso8601"`
Initiator initiator
Owner owner
// The type of storage to use for the object. Defaults to 'STANDARD'.
StorageClass string
// Key of the object for which the multipart upload was initiated.
Key string
// Size in bytes of the object.
Size int64
// Upload ID that identifies the multipart upload.
UploadID string `xml:"UploadId"`
// Error
Err error
}

vendor/github.com/minio/minio-go/api-error-response.go generated vendored Normal file

@@ -0,0 +1,286 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"encoding/xml"
"fmt"
"net/http"
)
/* **** SAMPLE ERROR RESPONSE ****
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<BucketName>bucketName</BucketName>
<Key>objectName</Key>
<RequestId>F19772218238A85A</RequestId>
<HostId>GuWkjyviSiGHizehqpmsD1ndz5NClSP19DOT+s2mv7gXGQ8/X1lhbDGiIJEXpGFD</HostId>
</Error>
*/
// ErrorResponse - Is the typed error returned by all API operations.
type ErrorResponse struct {
XMLName xml.Name `xml:"Error" json:"-"`
Code string
Message string
BucketName string
Key string
RequestID string `xml:"RequestId"`
HostID string `xml:"HostId"`
// Region where the bucket is located. This header is returned
// only in HEAD bucket and ListObjects response.
Region string
// Underlying HTTP status code for the returned error
StatusCode int `xml:"-" json:"-"`
// Headers of the returned S3 XML error
Headers http.Header `xml:"-" json:"-"`
}
// ToErrorResponse - Returns parsed ErrorResponse struct from body and
// http headers.
//
// For example:
//
// import s3 "github.com/minio/minio-go"
// ...
// ...
// reader, stat, err := s3.GetObject(...)
// if err != nil {
// resp := s3.ToErrorResponse(err)
// }
// ...
func ToErrorResponse(err error) ErrorResponse {
switch err := err.(type) {
case ErrorResponse:
return err
default:
return ErrorResponse{}
}
}
// Error - Returns S3 error string.
func (e ErrorResponse) Error() string {
if e.Message == "" {
msg, ok := s3ErrorResponseMap[e.Code]
if !ok {
msg = fmt.Sprintf("Error response code %s.", e.Code)
}
return msg
}
return e.Message
}
// Common string for errors to report issue location in unexpected
// cases.
const (
reportIssue = "Please report this issue at https://github.com/minio/minio-go/issues."
)
// httpRespToErrorResponse returns a new encoded ErrorResponse
// structure as error.
func httpRespToErrorResponse(resp *http.Response, bucketName, objectName string) error {
if resp == nil {
msg := "Response is empty. " + reportIssue
return ErrInvalidArgument(msg)
}
errResp := ErrorResponse{
StatusCode: resp.StatusCode,
}
err := xmlDecoder(resp.Body, &errResp)
// XML decoding failed, perhaps due to an empty body; fall back to HTTP headers.
if err != nil {
switch resp.StatusCode {
case http.StatusNotFound:
if objectName == "" {
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "NoSuchBucket",
Message: "The specified bucket does not exist.",
BucketName: bucketName,
}
} else {
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "NoSuchKey",
Message: "The specified key does not exist.",
BucketName: bucketName,
Key: objectName,
}
}
case http.StatusForbidden:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "AccessDenied",
Message: "Access Denied.",
BucketName: bucketName,
Key: objectName,
}
case http.StatusConflict:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "Conflict",
Message: "Bucket not empty.",
BucketName: bucketName,
}
case http.StatusPreconditionFailed:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "PreconditionFailed",
Message: s3ErrorResponseMap["PreconditionFailed"],
BucketName: bucketName,
Key: objectName,
}
default:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: resp.Status,
Message: resp.Status,
BucketName: bucketName,
}
}
}
// Save hostID, requestID and region information
// from headers if not available through error XML.
if errResp.RequestID == "" {
errResp.RequestID = resp.Header.Get("x-amz-request-id")
}
if errResp.HostID == "" {
errResp.HostID = resp.Header.Get("x-amz-id-2")
}
if errResp.Region == "" {
errResp.Region = resp.Header.Get("x-amz-bucket-region")
}
if errResp.Code == "InvalidRegion" && errResp.Region != "" {
errResp.Message = fmt.Sprintf("Region does not match, expecting region %s.", errResp.Region)
}
// Save headers returned in the API XML error
errResp.Headers = resp.Header
return errResp
}
// ErrTransferAccelerationBucket - bucket name is invalid to be used with transfer acceleration.
func ErrTransferAccelerationBucket(bucketName string) error {
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "InvalidArgument",
Message: "The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods ..",
BucketName: bucketName,
}
}
// ErrEntityTooLarge - Input size is larger than supported maximum.
func ErrEntityTooLarge(totalSize, maxObjectSize int64, bucketName, objectName string) error {
msg := fmt.Sprintf("Your proposed upload size %d exceeds the maximum allowed object size %d for single PUT operation.", totalSize, maxObjectSize)
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "EntityTooLarge",
Message: msg,
BucketName: bucketName,
Key: objectName,
}
}
// ErrEntityTooSmall - Input size is smaller than supported minimum.
func ErrEntityTooSmall(totalSize int64, bucketName, objectName string) error {
msg := fmt.Sprintf("Your proposed upload size %d is below the minimum allowed object size 0B for single PUT operation.", totalSize)
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "EntityTooSmall",
Message: msg,
BucketName: bucketName,
Key: objectName,
}
}
// ErrUnexpectedEOF - Unexpected end of file reached.
func ErrUnexpectedEOF(totalRead, totalSize int64, bucketName, objectName string) error {
msg := fmt.Sprintf("Data read %d is not equal to the size %d of the input Reader.", totalRead, totalSize)
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "UnexpectedEOF",
Message: msg,
BucketName: bucketName,
Key: objectName,
}
}
// ErrInvalidBucketName - Invalid bucket name response.
func ErrInvalidBucketName(message string) error {
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "InvalidBucketName",
Message: message,
RequestID: "minio",
}
}
// ErrInvalidObjectName - Invalid object name response.
func ErrInvalidObjectName(message string) error {
return ErrorResponse{
StatusCode: http.StatusNotFound,
Code: "NoSuchKey",
Message: message,
RequestID: "minio",
}
}
// ErrInvalidObjectPrefix - Invalid object prefix response is
// similar to object name response.
var ErrInvalidObjectPrefix = ErrInvalidObjectName
// ErrInvalidArgument - Invalid argument response.
func ErrInvalidArgument(message string) error {
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "InvalidArgument",
Message: message,
RequestID: "minio",
}
}
// ErrNoSuchBucketPolicy - No Such Bucket Policy response
// The specified bucket does not have a bucket policy.
func ErrNoSuchBucketPolicy(message string) error {
return ErrorResponse{
StatusCode: http.StatusNotFound,
Code: "NoSuchBucketPolicy",
Message: message,
RequestID: "minio",
}
}
// ErrAPINotSupported - API not supported response
// The specified API call is not supported
func ErrAPINotSupported(message string) error {
return ErrorResponse{
StatusCode: http.StatusNotImplemented,
Code: "APINotSupported",
Message: message,
RequestID: "minio",
}
}


@@ -0,0 +1,26 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import "context"
// GetObjectWithContext - returns a seekable, readable object.
// The options can be used to specify the GET request further.
func (c Client) GetObjectWithContext(ctx context.Context, bucketName, objectName string, opts GetObjectOptions) (*Object, error) {
return c.getObjectWithContext(ctx, bucketName, objectName, opts)
}
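// Illustrative usage (a sketch): bound the GET with a timeout,
// assuming a configured Client c:
//
//	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
//	defer cancel()
//	obj, err := c.GetObjectWithContext(ctx, "mybucket", "myobject", GetObjectOptions{})
//	if err != nil {
//		// handle error
//	}
//	defer obj.Close()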

vendor/github.com/minio/minio-go/api-get-object-file.go generated vendored Normal file

@@ -0,0 +1,136 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"os"
"path/filepath"
"github.com/minio/minio-go/pkg/encrypt"
"github.com/minio/minio-go/pkg/s3utils"
)
// FGetObjectWithContext - download contents of an object to a local file.
// The options can be used to specify the GET request further.
func (c Client) FGetObjectWithContext(ctx context.Context, bucketName, objectName, filePath string, opts GetObjectOptions) error {
return c.fGetObjectWithContext(ctx, bucketName, objectName, filePath, opts)
}
// FGetObject - download contents of an object to a local file.
func (c Client) FGetObject(bucketName, objectName, filePath string, opts GetObjectOptions) error {
return c.fGetObjectWithContext(context.Background(), bucketName, objectName, filePath, opts)
}
// FGetEncryptedObject - Decrypt and store an object at filePath.
func (c Client) FGetEncryptedObject(bucketName, objectName, filePath string, materials encrypt.Materials) error {
if materials == nil {
return ErrInvalidArgument("Unable to recognize empty encryption properties")
}
return c.FGetObject(bucketName, objectName, filePath, GetObjectOptions{Materials: materials})
}
// fGetObjectWithContext - fgetObject wrapper function with context
func (c Client) fGetObjectWithContext(ctx context.Context, bucketName, objectName, filePath string, opts GetObjectOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Verify if destination already exists.
st, err := os.Stat(filePath)
if err == nil {
// If the destination exists and is a directory.
if st.IsDir() {
return ErrInvalidArgument("fileName is a directory.")
}
}
// Proceed if the file does not exist; return all other errors.
if err != nil {
if !os.IsNotExist(err) {
return err
}
}
// Extract top level directory.
objectDir, _ := filepath.Split(filePath)
if objectDir != "" {
// Create any missing top level directories.
if err := os.MkdirAll(objectDir, 0700); err != nil {
return err
}
}
// Gather md5sum.
objectStat, err := c.StatObject(bucketName, objectName, StatObjectOptions{opts})
if err != nil {
return err
}
// Write to a temporary file "<filePath>.<ETag>.part.minio" before saving.
filePartPath := filePath + objectStat.ETag + ".part.minio"
// If exists, open in append mode. If not create it as a part file.
filePart, err := os.OpenFile(filePartPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600)
if err != nil {
return err
}
// Issue Stat to get the current offset.
st, err = filePart.Stat()
if err != nil {
return err
}
// Initialize get object request headers to set the
// appropriate range offsets to read from.
if st.Size() > 0 {
opts.SetRange(st.Size(), 0)
}
// Seek to current position for incoming reader.
objectReader, objectStat, err := c.getObject(ctx, bucketName, objectName, opts)
if err != nil {
return err
}
// Write to the part file.
if _, err = io.CopyN(filePart, objectReader, objectStat.Size); err != nil {
return err
}
// Close the file before rename, this is specifically needed for Windows users.
if err = filePart.Close(); err != nil {
return err
}
// Safely completed. Now commit by renaming to actual filename.
if err = os.Rename(filePartPath, filePath); err != nil {
return err
}
// Return.
return nil
}
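// Illustrative usage (a sketch): download an object to a local path,
// assuming a configured Client c:
//
//	err := c.FGetObject("mybucket", "myobject", "/tmp/myobject", GetObjectOptions{})
//
// Because the download is staged in a "<filePath>.<ETag>.part.minio"
// file opened in append mode, an interrupted transfer resumes from the
// existing partial file on the next call.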

vendor/github.com/minio/minio-go/api-get-object.go generated vendored Normal file

@@ -0,0 +1,676 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"strings"
"sync"
"time"
"github.com/minio/minio-go/pkg/encrypt"
"github.com/minio/minio-go/pkg/s3utils"
)
// GetEncryptedObject deciphers and streams data stored in the server after applying the specified encryption materials;
// the returned stream should be closed by the caller.
func (c Client) GetEncryptedObject(bucketName, objectName string, encryptMaterials encrypt.Materials) (io.ReadCloser, error) {
if encryptMaterials == nil {
return nil, ErrInvalidArgument("Unable to recognize empty encryption properties")
}
return c.GetObject(bucketName, objectName, GetObjectOptions{Materials: encryptMaterials})
}
// GetObject - returns a seekable, readable object.
func (c Client) GetObject(bucketName, objectName string, opts GetObjectOptions) (*Object, error) {
return c.getObjectWithContext(context.Background(), bucketName, objectName, opts)
}
// GetObject wrapper function that accepts a request context
func (c Client) getObjectWithContext(ctx context.Context, bucketName, objectName string, opts GetObjectOptions) (*Object, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
var httpReader io.ReadCloser
var objectInfo ObjectInfo
var err error
// Create request channel.
reqCh := make(chan getRequest)
// Create response channel.
resCh := make(chan getResponse)
// Create done channel.
doneCh := make(chan struct{})
// This routine feeds partial object data as and when the caller reads.
go func() {
defer close(reqCh)
defer close(resCh)
// Used to verify if etag of object has changed since last read.
var etag string
// Loop through the incoming control messages and read data.
for {
select {
// When the done channel is closed exit our routine.
case <-doneCh:
// Close the http response body before returning.
// This ends the connection with the server.
if httpReader != nil {
httpReader.Close()
}
return
// Gather incoming request.
case req := <-reqCh:
// If this is the first request we may not need to do a getObject request yet.
if req.isFirstReq {
// First request is a Read/ReadAt.
if req.isReadOp {
// Differentiate between wanting the whole object and just a range.
if req.isReadAt {
// If this is a ReadAt request only get the specified range.
// Range is set with respect to the offset and length of the buffer requested.
// Do not set objectInfo from the first readAt request because it will not get
// the whole object.
opts.SetRange(req.Offset, req.Offset+int64(len(req.Buffer))-1)
} else if req.Offset > 0 {
opts.SetRange(req.Offset, 0)
}
httpReader, objectInfo, err = c.getObject(ctx, bucketName, objectName, opts)
if err != nil {
resCh <- getResponse{Error: err}
return
}
etag = objectInfo.ETag
// Read at least firstReq.Buffer bytes, if not we have
// reached our EOF.
size, err := io.ReadFull(httpReader, req.Buffer)
if size > 0 && err == io.ErrUnexpectedEOF {
// If an EOF happens after reading some but not
// all the bytes ReadFull returns ErrUnexpectedEOF
err = io.EOF
}
// Send back the first response.
resCh <- getResponse{
objectInfo: objectInfo,
Size: int(size),
Error: err,
didRead: true,
}
} else {
// First request is a Stat or Seek call.
// Only need to run a StatObject until an actual Read or ReadAt request comes through.
objectInfo, err = c.statObject(ctx, bucketName, objectName, StatObjectOptions{opts})
if err != nil {
resCh <- getResponse{
Error: err,
}
// Exit the go-routine.
return
}
etag = objectInfo.ETag
// Send back the first response.
resCh <- getResponse{
objectInfo: objectInfo,
}
}
} else if req.settingObjectInfo { // Request is just to get objectInfo.
if etag != "" {
opts.SetMatchETag(etag)
}
objectInfo, err := c.statObject(ctx, bucketName, objectName, StatObjectOptions{opts})
if err != nil {
resCh <- getResponse{
Error: err,
}
// Exit the goroutine.
return
}
// Send back the objectInfo.
resCh <- getResponse{
objectInfo: objectInfo,
}
} else {
// Offset changes fetch the new object at an Offset.
// Because the httpReader may not be set by the first
// request if it was a stat or seek it must be checked
// if the object has been read or not to only initialize
// new ones when they haven't been already.
// All readAt requests are new requests.
if req.DidOffsetChange || !req.beenRead {
if etag != "" {
opts.SetMatchETag(etag)
}
if httpReader != nil {
// Close previously opened http reader.
httpReader.Close()
}
// If this request is a readAt only get the specified range.
if req.isReadAt {
// Range is set with respect to the offset and length of the buffer requested.
opts.SetRange(req.Offset, req.Offset+int64(len(req.Buffer))-1)
} else if req.Offset > 0 { // Range is set with respect to the offset.
opts.SetRange(req.Offset, 0)
}
httpReader, objectInfo, err = c.getObject(ctx, bucketName, objectName, opts)
if err != nil {
resCh <- getResponse{
Error: err,
}
return
}
}
// Read at least req.Buffer bytes, if not we have
// reached our EOF.
size, err := io.ReadFull(httpReader, req.Buffer)
if err == io.ErrUnexpectedEOF {
// If an EOF happens after reading some but not
// all the bytes ReadFull returns ErrUnexpectedEOF
err = io.EOF
}
// Reply back how much was read.
resCh <- getResponse{
Size: int(size),
Error: err,
didRead: true,
objectInfo: objectInfo,
}
}
}
}
}()
// Create a newObject through the information sent back by reqCh.
return newObject(reqCh, resCh, doneCh), nil
}
// get request message container to communicate with internal
// go-routine.
type getRequest struct {
Buffer []byte
Offset int64 // readAt offset.
DidOffsetChange bool // Tracks the offset changes for Seek requests.
beenRead bool // Determines if this is the first time an object is being read.
isReadAt bool // Determines if this request is a request to a specific range
isReadOp bool // Determines if this request is a Read or Read/At request.
isFirstReq bool // Determines if this request is the first time an object is being accessed.
settingObjectInfo bool // Determines if this request is to set the objectInfo of an object.
}
// get response message container to reply back for the request.
type getResponse struct {
Size int
Error error
didRead bool // Lets subsequent calls know whether or not httpReader has been initiated.
objectInfo ObjectInfo // Used for the first request.
}
// Object represents an open object. It implements
// Reader, ReaderAt, Seeker, Closer for a HTTP stream.
type Object struct {
// Mutex.
mutex *sync.Mutex
// User allocated and defined.
reqCh chan<- getRequest
resCh <-chan getResponse
doneCh chan<- struct{}
currOffset int64
objectInfo ObjectInfo
// Ask lower level to initiate data fetching based on currOffset
seekData bool
// Keeps track of closed call.
isClosed bool
// Keeps track of if this is the first call.
isStarted bool
// Previous error saved for future calls.
prevErr error
// Keeps track of if this object has been read yet.
beenRead bool
// Keeps track of if objectInfo has been set yet.
objectInfoSet bool
}
// doGetRequest - sends and blocks on the firstReqCh and reqCh of an object.
// Returns back the size of the buffer read, if anything was read, as well
// as any error encountered. For all first requests sent on the object
// it is also responsible for sending back the objectInfo.
func (o *Object) doGetRequest(request getRequest) (getResponse, error) {
o.reqCh <- request
response := <-o.resCh
// Return any error to the top level.
if response.Error != nil {
return response, response.Error
}
// This was the first request.
if !o.isStarted {
// The object has been operated on.
o.isStarted = true
}
// Set the objectInfo if the request was not readAt
// and it hasn't been set before.
if !o.objectInfoSet && !request.isReadAt {
o.objectInfo = response.objectInfo
o.objectInfoSet = true
}
// Set beenRead only if it has not been set before.
if !o.beenRead {
o.beenRead = response.didRead
}
// Data are ready on the wire, no need to reinitiate connection in lower level
o.seekData = false
return response, nil
}
// setOffset - handles the setting of offsets for
// Read/ReadAt/Seek requests.
func (o *Object) setOffset(bytesRead int64) error {
// Update the currentOffset.
o.currOffset += bytesRead
if o.objectInfo.Size > -1 && o.currOffset >= o.objectInfo.Size {
return io.EOF
}
return nil
}
// Read reads up to len(b) bytes into b. It returns the number of
// bytes read (0 <= n <= len(b)) and any error encountered. Returns
// io.EOF upon end of file.
func (o *Object) Read(b []byte) (n int, err error) {
if o == nil {
return 0, ErrInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// prevErr is previous error saved from previous operation.
if o.prevErr != nil || o.isClosed {
return 0, o.prevErr
}
// Create a new request.
readReq := getRequest{
isReadOp: true,
beenRead: o.beenRead,
Buffer: b,
}
// Alert that this is the first request.
if !o.isStarted {
readReq.isFirstReq = true
}
// Ask to establish a new data fetch routine based on seekData flag
readReq.DidOffsetChange = o.seekData
readReq.Offset = o.currOffset
// Send and receive from the first request.
response, err := o.doGetRequest(readReq)
if err != nil && err != io.EOF {
// Save the error for future calls.
o.prevErr = err
return response.Size, err
}
// Bytes read.
bytesRead := int64(response.Size)
// Set the new offset.
oerr := o.setOffset(bytesRead)
if oerr != nil {
// Save the error for future calls.
o.prevErr = oerr
return response.Size, oerr
}
// Return the response.
return response.Size, err
}
// Stat returns the ObjectInfo structure describing Object.
func (o *Object) Stat() (ObjectInfo, error) {
if o == nil {
return ObjectInfo{}, ErrInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
if o.prevErr != nil && o.prevErr != io.EOF || o.isClosed {
return ObjectInfo{}, o.prevErr
}
// This is the first request.
if !o.isStarted || !o.objectInfoSet {
statReq := getRequest{
isFirstReq: !o.isStarted,
settingObjectInfo: !o.objectInfoSet,
}
// Send the request and get the response.
_, err := o.doGetRequest(statReq)
if err != nil {
o.prevErr = err
return ObjectInfo{}, err
}
}
return o.objectInfo, nil
}
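// Illustrative usage (a sketch): random access on an *Object obj
// returned by GetObject:
//
//	if _, err := obj.Seek(1024, 0); err != nil {
//		// handle seek error
//	}
//	buf := make([]byte, 4096)
//	n, err := obj.Read(buf) // reads up to 4096 bytes from offset 1024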
// ReadAt reads len(b) bytes from the File starting at byte offset
// off. It returns the number of bytes read and the error, if any.
// ReadAt always returns a non-nil error when n < len(b). At end of
// file, that error is io.EOF.
func (o *Object) ReadAt(b []byte, offset int64) (n int, err error) {
if o == nil {
return 0, ErrInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// prevErr is error which was saved in previous operation.
if o.prevErr != nil || o.isClosed {
return 0, o.prevErr
}
// Can only compare offsets to size when size has been set.
if o.objectInfoSet {
// If offset is negative then we return io.EOF.
// If offset is greater than or equal to object size we return io.EOF.
if (o.objectInfo.Size > -1 && offset >= o.objectInfo.Size) || offset < 0 {
return 0, io.EOF
}
}
// Create the new readAt request.
readAtReq := getRequest{
isReadOp: true,
isReadAt: true,
DidOffsetChange: true, // Offset always changes.
beenRead: o.beenRead, // Set if this is the first request to try and read.
Offset: offset, // Set the offset.
Buffer: b,
}
// Alert that this is the first request.
if !o.isStarted {
readAtReq.isFirstReq = true
}
// Send and receive from the first request.
response, err := o.doGetRequest(readAtReq)
if err != nil && err != io.EOF {
// Save the error.
o.prevErr = err
return response.Size, err
}
// Bytes read.
bytesRead := int64(response.Size)
// There is no valid objectInfo yet
// to compare against for EOF.
if !o.objectInfoSet {
// Update the currentOffset.
o.currOffset += bytesRead
} else {
// If this was not the first request update
// the offsets and compare against objectInfo
// for EOF.
oerr := o.setOffset(bytesRead)
if oerr != nil {
o.prevErr = oerr
return response.Size, oerr
}
}
return response.Size, err
}
// Seek sets the offset for the next Read or ReadAt to offset,
// interpreted according to whence: 0 means relative to the
// origin of the file, 1 means relative to the current offset,
// and 2 means relative to the end.
// Seek returns the new offset and an error, if any.
//
// Seeking to a negative offset is an error. Seeking to any positive
// offset is legal; subsequent io operations succeed as long as the
// underlying object is not closed.
func (o *Object) Seek(offset int64, whence int) (n int64, err error) {
if o == nil {
return 0, ErrInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
if o.prevErr != nil {
// Seeking at EOF is legal; allow only io.EOF and return any other error.
if o.prevErr != io.EOF {
return 0, o.prevErr
}
}
// Negative offset is valid for whence of '2'.
if offset < 0 && whence != 2 {
return 0, ErrInvalidArgument(fmt.Sprintf("Negative position not allowed for %d.", whence))
}
// This is the first request. So before anything else
// get the ObjectInfo.
if !o.isStarted || !o.objectInfoSet {
// Create the new Seek request.
seekReq := getRequest{
isReadOp: false,
Offset: offset,
isFirstReq: true,
}
// Send and receive from the seek request.
_, err := o.doGetRequest(seekReq)
if err != nil {
// Save the error.
o.prevErr = err
return 0, err
}
}
// Switch through whence.
switch whence {
default:
return 0, ErrInvalidArgument(fmt.Sprintf("Invalid whence %d", whence))
case 0:
if o.objectInfo.Size > -1 && offset > o.objectInfo.Size {
return 0, io.EOF
}
o.currOffset = offset
case 1:
if o.objectInfo.Size > -1 && o.currOffset+offset > o.objectInfo.Size {
return 0, io.EOF
}
o.currOffset += offset
case 2:
// If we don't know the object size return an error for io.SeekEnd
if o.objectInfo.Size < 0 {
return 0, ErrInvalidArgument("Whence END is not supported when the object size is unknown")
}
// Seeking to positive offset is valid for whence '2', but
// since we are backing a Reader we have reached 'EOF' if
// offset is positive.
if offset > 0 {
return 0, io.EOF
}
// Seeking to a negative position is not allowed for this whence.
if o.objectInfo.Size+offset < 0 {
return 0, ErrInvalidArgument(fmt.Sprintf("Seeking at negative offset not allowed for %d", whence))
}
o.currOffset = o.objectInfo.Size + offset
}
// Reset the saved error since the seek succeeded; let Read
// and ReadAt decide.
if o.prevErr == io.EOF {
o.prevErr = nil
}
// Ask lower level to fetch again from source
o.seekData = true
// Return the effective offset.
return o.currOffset, nil
}
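
For instance, whence 2 with a negative offset positions the reader relative to the end once the object size is known. A sketch under the same placeholder assumptions as the earlier examples:

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	obj, err := client.GetObject("mybucket", "myobject", minio.GetObjectOptions{})
	if err != nil {
		log.Fatalln(err)
	}
	defer obj.Close()
	// whence 2 (io.SeekEnd): position 16 bytes before the end. Per the
	// checks above, this requires the object size to be known.
	if _, err := obj.Seek(-16, 2); err != nil {
		log.Fatalln(err)
	}
	tail, err := ioutil.ReadAll(obj)
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Printf("last 16 bytes: %x\n", tail)
}
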
// Close - closes the object. After the first call, any
// subsequent Close() calls return an error.
func (o *Object) Close() (err error) {
if o == nil {
return ErrInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// if already closed return an error.
if o.isClosed {
return o.prevErr
}
// Close successfully.
close(o.doneCh)
// Save for future operations.
errMsg := "Object is already closed. Bad file descriptor."
o.prevErr = errors.New(errMsg)
// Save here that we closed done channel successfully.
o.isClosed = true
return nil
}
// newObject instantiates a new *minio.Object.
// ObjectInfo will be set by setObjectInfo
func newObject(reqCh chan<- getRequest, resCh <-chan getResponse, doneCh chan<- struct{}) *Object {
return &Object{
mutex: &sync.Mutex{},
reqCh: reqCh,
resCh: resCh,
doneCh: doneCh,
}
}
// getObject - retrieve object from Object Storage.
//
// This function also takes range arguments to download the specified
// byte range of an object. Setting offset and length = 0 will download the full object.
//
// For more information about the HTTP Range header, see
// http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.
func (c Client) getObject(ctx context.Context, bucketName, objectName string, opts GetObjectOptions) (io.ReadCloser, ObjectInfo, error) {
// Validate input arguments.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, ObjectInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, ObjectInfo{}, err
}
// Execute GET on objectName.
resp, err := c.executeMethod(ctx, "GET", requestMetadata{
bucketName: bucketName,
objectName: objectName,
customHeader: opts.Header(),
contentSHA256Hex: emptySHA256Hex,
})
if err != nil {
return nil, ObjectInfo{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
return nil, ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Trim off the odd double quotes from ETag in the beginning and end.
md5sum := strings.TrimPrefix(resp.Header.Get("ETag"), "\"")
md5sum = strings.TrimSuffix(md5sum, "\"")
// Parse the date.
date, err := time.Parse(http.TimeFormat, resp.Header.Get("Last-Modified"))
if err != nil {
msg := "Last-Modified time format not recognized. " + reportIssue
return nil, ObjectInfo{}, ErrorResponse{
Code: "InternalError",
Message: msg,
RequestID: resp.Header.Get("x-amz-request-id"),
HostID: resp.Header.Get("x-amz-id-2"),
Region: resp.Header.Get("x-amz-bucket-region"),
}
}
// Get content-type.
contentType := strings.TrimSpace(resp.Header.Get("Content-Type"))
if contentType == "" {
contentType = "application/octet-stream"
}
objectStat := ObjectInfo{
ETag: md5sum,
Key: objectName,
Size: resp.ContentLength,
LastModified: date,
ContentType: contentType,
// Extract only the relevant header keys describing the object.
// following function filters out a list of standard set of keys
// which are not part of object metadata.
Metadata: extractObjMetadata(resp.Header),
}
reader := resp.Body
if opts.Materials != nil {
err = opts.Materials.SetupDecryptMode(reader, objectStat.Metadata.Get(amzHeaderIV), objectStat.Metadata.Get(amzHeaderKey))
if err != nil {
return nil, ObjectInfo{}, err
}
reader = opts.Materials
}
// do not close body here, caller will close
return reader, objectStat, nil
}

126
vendor/github.com/minio/minio-go/api-get-options.go generated vendored Normal file
View File

@@ -0,0 +1,126 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"fmt"
"net/http"
"time"
"github.com/minio/minio-go/pkg/encrypt"
)
// GetObjectOptions are used to specify additional headers or options
// during GET requests.
type GetObjectOptions struct {
headers map[string]string
Materials encrypt.Materials
}
// StatObjectOptions are used to specify additional headers or options
// during GET info/stat requests.
type StatObjectOptions struct {
GetObjectOptions
}
// Header returns the http.Header representation of the GET options.
func (o GetObjectOptions) Header() http.Header {
headers := make(http.Header, len(o.headers))
for k, v := range o.headers {
headers.Set(k, v)
}
return headers
}
// Set adds a key value pair to the options. The
// key-value pair will be part of the HTTP GET request
// headers.
func (o *GetObjectOptions) Set(key, value string) {
if o.headers == nil {
o.headers = make(map[string]string)
}
o.headers[http.CanonicalHeaderKey(key)] = value
}
// SetMatchETag - set match etag.
func (o *GetObjectOptions) SetMatchETag(etag string) error {
if etag == "" {
return ErrInvalidArgument("ETag cannot be empty.")
}
o.Set("If-Match", "\""+etag+"\"")
return nil
}
// SetMatchETagExcept - set match etag except.
func (o *GetObjectOptions) SetMatchETagExcept(etag string) error {
if etag == "" {
return ErrInvalidArgument("ETag cannot be empty.")
}
o.Set("If-None-Match", "\""+etag+"\"")
return nil
}
// SetUnmodified - set unmodified time since.
func (o *GetObjectOptions) SetUnmodified(modTime time.Time) error {
if modTime.IsZero() {
return ErrInvalidArgument("Modified since cannot be empty.")
}
o.Set("If-Unmodified-Since", modTime.Format(http.TimeFormat))
return nil
}
// SetModified - set modified time since.
func (o *GetObjectOptions) SetModified(modTime time.Time) error {
if modTime.IsZero() {
return ErrInvalidArgument("Modified since cannot be empty.")
}
o.Set("If-Modified-Since", modTime.Format(http.TimeFormat))
return nil
}
// SetRange - set the start and end offset of the object to be read.
// See https://tools.ietf.org/html/rfc7233#section-3.1 for reference.
func (o *GetObjectOptions) SetRange(start, end int64) error {
switch {
case start == 0 && end < 0:
// Read last '-end' bytes. `bytes=-N`.
o.Set("Range", fmt.Sprintf("bytes=%d", end))
case 0 < start && end == 0:
// Read everything starting from offset
// 'start'. `bytes=N-`.
o.Set("Range", fmt.Sprintf("bytes=%d-", start))
case 0 <= start && start <= end:
// Read everything starting at 'start' till the
// 'end'. `bytes=N-M`
o.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
default:
// All other cases such as
// bytes=-3-
// bytes=5-3
// bytes=-2-4
// bytes=-3-0
// bytes=-3--2
// are invalid.
return ErrInvalidArgument(
fmt.Sprintf(
"Invalid range specified: start=%d end=%d",
start, end))
}
return nil
}
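
The three accepted shapes map directly onto the Range header forms named in the case comments. A self-contained sketch that exercises only this file's API and prints the header each call produces:

package main

import (
	"fmt"
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	cases := []struct{ start, end int64 }{
		{0, 99},   // "bytes=0-99": the first hundred bytes
		{100, 0},  // "bytes=100-": everything from offset 100
		{0, -100}, // "bytes=-100": the last hundred bytes
	}
	for _, c := range cases {
		opts := minio.GetObjectOptions{}
		if err := opts.SetRange(c.start, c.end); err != nil {
			log.Fatalln(err)
		}
		fmt.Println(opts.Header().Get("Range"))
	}
}
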

109
vendor/github.com/minio/minio-go/api-get-policy.go generated vendored Normal file
View File

@@ -0,0 +1,109 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"encoding/json"
"io/ioutil"
"net/http"
"net/url"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio-go/pkg/s3utils"
)
// GetBucketPolicy - get bucket policy at a given path.
func (c Client) GetBucketPolicy(bucketName, objectPrefix string) (bucketPolicy policy.BucketPolicy, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return policy.BucketPolicyNone, err
}
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return policy.BucketPolicyNone, err
}
policyInfo, err := c.getBucketPolicy(bucketName)
if err != nil {
errResponse := ToErrorResponse(err)
if errResponse.Code == "NoSuchBucketPolicy" {
return policy.BucketPolicyNone, nil
}
return policy.BucketPolicyNone, err
}
return policy.GetPolicy(policyInfo.Statements, bucketName, objectPrefix), nil
}
// ListBucketPolicies - list all policies for a given prefix and all its children.
func (c Client) ListBucketPolicies(bucketName, objectPrefix string) (bucketPolicies map[string]policy.BucketPolicy, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return map[string]policy.BucketPolicy{}, err
}
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return map[string]policy.BucketPolicy{}, err
}
policyInfo, err := c.getBucketPolicy(bucketName)
if err != nil {
errResponse := ToErrorResponse(err)
if errResponse.Code == "NoSuchBucketPolicy" {
return map[string]policy.BucketPolicy{}, nil
}
return map[string]policy.BucketPolicy{}, err
}
return policy.GetPolicies(policyInfo.Statements, bucketName), nil
}
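
A usage sketch for the two lookups above; minio.New is assumed from elsewhere in the library, and the endpoint, credentials, and names are placeholders:

package main

import (
	"fmt"
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	// Effective policy for everything under the "public/" prefix.
	bucketPolicy, err := client.GetBucketPolicy("mybucket", "public/")
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println("effective policy:", bucketPolicy) // e.g. "none", "readonly"
	// Per-prefix map of all policies on the bucket.
	policies, err := client.ListBucketPolicies("mybucket", "")
	if err != nil {
		log.Fatalln(err)
	}
	for prefix, pol := range policies {
		fmt.Println(prefix, "=>", pol)
	}
}
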
// Default empty bucket access policy.
var emptyBucketAccessPolicy = policy.BucketAccessPolicy{
Version: "2012-10-17",
}
// Request server for current bucket policy.
func (c Client) getBucketPolicy(bucketName string) (policy.BucketAccessPolicy, error) {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("policy", "")
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return emptyBucketAccessPolicy, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return emptyBucketAccessPolicy, httpRespToErrorResponse(resp, bucketName, "")
}
}
bucketPolicyBuf, err := ioutil.ReadAll(resp.Body)
if err != nil {
return emptyBucketAccessPolicy, err
}
bucketAccessPolicy := policy.BucketAccessPolicy{}
err = json.Unmarshal(bucketPolicyBuf, &bucketAccessPolicy)
return bucketAccessPolicy, err
}

717
vendor/github.com/minio/minio-go/api-list.go generated vendored Normal file
View File

@@ -0,0 +1,717 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"errors"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/minio/minio-go/pkg/s3utils"
)
// ListBuckets lists all buckets owned by this authenticated user.
//
// This call requires explicit authentication; no anonymous requests are
// allowed for listing buckets.
//
// api := client.New(....)
// for message := range api.ListBuckets() {
// fmt.Println(message)
// }
//
func (c Client) ListBuckets() ([]BucketInfo, error) {
// Execute GET on service.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{contentSHA256Hex: emptySHA256Hex})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, "", "")
}
}
listAllMyBucketsResult := listAllMyBucketsResult{}
err = xmlDecoder(resp.Body, &listAllMyBucketsResult)
if err != nil {
return nil, err
}
return listAllMyBucketsResult.Buckets.Bucket, nil
}
/// Bucket Read Operations.
// ListObjectsV2 lists all objects matching the objectPrefix from
// the specified bucket. If recursion is enabled it lists
// all subdirectories and all their contents.
//
// Your input parameters are just bucketName, objectPrefix, recursive
// and a done channel for pro-actively closing the internal
// goroutine. If you enable recursive as 'true' this function will
// return all the objects in a given bucket name and object
// prefix.
//
// api := client.New(....)
// // Create a done channel.
// doneCh := make(chan struct{})
// defer close(doneCh)
// // Recursively list all objects in 'mytestbucket'
// recursive := true
// for message := range api.ListObjectsV2("mytestbucket", "starthere", recursive, doneCh) {
// fmt.Println(message)
// }
//
func (c Client) ListObjectsV2(bucketName, objectPrefix string, recursive bool, doneCh <-chan struct{}) <-chan ObjectInfo {
// Allocate new list objects channel.
objectStatCh := make(chan ObjectInfo, 1)
// Default listing is delimited at "/"
delimiter := "/"
if recursive {
// If recursive we do not delimit.
delimiter = ""
}
// Return object owner information by default
fetchOwner := true
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Initiate list objects goroutine here.
go func(objectStatCh chan<- ObjectInfo) {
defer close(objectStatCh)
// Save continuationToken for next request.
var continuationToken string
for {
// Get list of objects a maximum of 1000 per request.
result, err := c.listObjectsV2Query(bucketName, objectPrefix, continuationToken, fetchOwner, delimiter, 1000)
if err != nil {
objectStatCh <- ObjectInfo{
Err: err,
}
return
}
// If contents are available loop through and send over channel.
for _, object := range result.Contents {
select {
// Send object content.
case objectStatCh <- object:
// If receives done from the caller, return here.
case <-doneCh:
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
select {
// Send object prefixes.
case objectStatCh <- ObjectInfo{
Key: obj.Prefix,
Size: 0,
}:
// If receives done from the caller, return here.
case <-doneCh:
return
}
}
// If continuation token present, save it for next request.
if result.NextContinuationToken != "" {
continuationToken = result.NextContinuationToken
}
// Listing ends when the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(objectStatCh)
return objectStatCh
}
// listObjectsV2Query - (List Objects V2) - List some or all (up to 1000) of the objects in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
// request parameters :-
// ---------
// ?continuation-token - Specifies the key to start with when listing objects in a bucket.
// ?delimiter - A delimiter is a character you use to group keys.
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?max-keys - Sets the maximum number of keys returned in the response body.
func (c Client) listObjectsV2Query(bucketName, objectPrefix, continuationToken string, fetchOwner bool, delimiter string, maxkeys int) (ListBucketV2Result, error) {
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ListBucketV2Result{}, err
}
// Validate object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return ListBucketV2Result{}, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
// Always set list-type in ListObjects V2
urlValues.Set("list-type", "2")
// Set object prefix.
if objectPrefix != "" {
urlValues.Set("prefix", objectPrefix)
}
// Set continuation token
if continuationToken != "" {
urlValues.Set("continuation-token", continuationToken)
}
// Set delimiter.
if delimiter != "" {
urlValues.Set("delimiter", delimiter)
}
// Fetch owner when listing
if fetchOwner {
urlValues.Set("fetch-owner", "true")
}
// maxkeys should default to 1000 or less.
if maxkeys == 0 || maxkeys > 1000 {
maxkeys = 1000
}
// Set max keys.
urlValues.Set("max-keys", fmt.Sprintf("%d", maxkeys))
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return ListBucketV2Result{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListBucketV2Result{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode listBuckets XML.
listBucketResult := ListBucketV2Result{}
if err = xmlDecoder(resp.Body, &listBucketResult); err != nil {
return listBucketResult, err
}
// This is an additional verification check to make
// sure proper responses are received.
if listBucketResult.IsTruncated && listBucketResult.NextContinuationToken == "" {
return listBucketResult, errors.New("Truncated response should have continuation token set")
}
// Success.
return listBucketResult, nil
}
// ListObjects - (List Objects) - List some objects or all recursively.
//
// ListObjects lists all objects matching the objectPrefix from
// the specified bucket. If recursion is enabled it lists
// all subdirectories and all their contents.
//
// Your input parameters are just bucketName, objectPrefix, recursive
// and a done channel for pro-actively closing the internal
// goroutine. If you enable recursive as 'true' this function will
// return all the objects in a given bucket name and object
// prefix.
//
// api := client.New(....)
// // Create a done channel.
// doneCh := make(chan struct{})
// defer close(doneCh)
// // Recursively list all objects in 'mytestbucket'
// recursive := true
// for message := range api.ListObjects("mytestbucket", "starthere", recursive, doneCh) {
// fmt.Println(message)
// }
//
func (c Client) ListObjects(bucketName, objectPrefix string, recursive bool, doneCh <-chan struct{}) <-chan ObjectInfo {
// Allocate new list objects channel.
objectStatCh := make(chan ObjectInfo, 1)
// Default listing is delimited at "/"
delimiter := "/"
if recursive {
// If recursive we do not delimit.
delimiter = ""
}
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Initiate list objects goroutine here.
go func(objectStatCh chan<- ObjectInfo) {
defer close(objectStatCh)
// Save marker for next request.
var marker string
for {
// Get list of objects a maximum of 1000 per request.
result, err := c.listObjectsQuery(bucketName, objectPrefix, marker, delimiter, 1000)
if err != nil {
objectStatCh <- ObjectInfo{
Err: err,
}
return
}
// If contents are available loop through and send over channel.
for _, object := range result.Contents {
// Save the marker.
marker = object.Key
select {
// Send object content.
case objectStatCh <- object:
// If receives done from the caller, return here.
case <-doneCh:
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
object := ObjectInfo{}
object.Key = obj.Prefix
object.Size = 0
select {
// Send object prefixes.
case objectStatCh <- object:
// If receives done from the caller, return here.
case <-doneCh:
return
}
}
// If next marker present, save it for next request.
if result.NextMarker != "" {
marker = result.NextMarker
}
// Listing ends when the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(objectStatCh)
return objectStatCh
}
// listObjectsQuery - (List Objects) - List some or all (up to 1000) of the objects in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
// request parameters :-
// ---------
// ?marker - Specifies the key to start with when listing objects in a bucket.
// ?delimiter - A delimiter is a character you use to group keys.
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?max-keys - Sets the maximum number of keys returned in the response body.
func (c Client) listObjectsQuery(bucketName, objectPrefix, objectMarker, delimiter string, maxkeys int) (ListBucketResult, error) {
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ListBucketResult{}, err
}
// Validate object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return ListBucketResult{}, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
// Set object prefix.
if objectPrefix != "" {
urlValues.Set("prefix", objectPrefix)
}
// Set object marker.
if objectMarker != "" {
urlValues.Set("marker", objectMarker)
}
// Set delimiter.
if delimiter != "" {
urlValues.Set("delimiter", delimiter)
}
// maxkeys should default to 1000 or less.
if maxkeys == 0 || maxkeys > 1000 {
maxkeys = 1000
}
// Set max keys.
urlValues.Set("max-keys", fmt.Sprintf("%d", maxkeys))
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return ListBucketResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListBucketResult{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode listBuckets XML.
listBucketResult := ListBucketResult{}
err = xmlDecoder(resp.Body, &listBucketResult)
if err != nil {
return listBucketResult, err
}
return listBucketResult, nil
}
// ListIncompleteUploads - List incompletely uploaded multipart objects.
//
// ListIncompleteUploads lists all incomplete objects matching the
// objectPrefix from the specified bucket. If recursion is enabled
// it lists all subdirectories and all their contents.
//
// Your input parameters are just bucketName, objectPrefix, recursive
// and a done channel to pro-actively close the internal goroutine.
// If you enable recursive as 'true' this function will return all
// the multipart objects in a given bucket name.
//
// api := client.New(....)
// // Create a done channel.
// doneCh := make(chan struct{})
// defer close(doneCh)
// // Recursively list all incomplete uploads in 'mytestbucket'
// recursive := true
// for message := range api.ListIncompleteUploads("mytestbucket", "starthere", recursive, doneCh) {
// fmt.Println(message)
// }
//
func (c Client) ListIncompleteUploads(bucketName, objectPrefix string, recursive bool, doneCh <-chan struct{}) <-chan ObjectMultipartInfo {
// Turn on size aggregation of individual parts.
isAggregateSize := true
return c.listIncompleteUploads(bucketName, objectPrefix, recursive, isAggregateSize, doneCh)
}
// listIncompleteUploads lists all incomplete uploads.
func (c Client) listIncompleteUploads(bucketName, objectPrefix string, recursive, aggregateSize bool, doneCh <-chan struct{}) <-chan ObjectMultipartInfo {
// Allocate channel for multipart uploads.
objectMultipartStatCh := make(chan ObjectMultipartInfo, 1)
// Delimiter is set to "/" by default.
delimiter := "/"
if recursive {
// If recursive do not delimit.
delimiter = ""
}
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(objectMultipartStatCh)
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
return objectMultipartStatCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
defer close(objectMultipartStatCh)
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
return objectMultipartStatCh
}
go func(objectMultipartStatCh chan<- ObjectMultipartInfo) {
defer close(objectMultipartStatCh)
// object and upload ID marker for future requests.
var objectMarker string
var uploadIDMarker string
for {
// list all multipart uploads.
result, err := c.listMultipartUploadsQuery(bucketName, objectMarker, uploadIDMarker, objectPrefix, delimiter, 1000)
if err != nil {
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
return
}
// Save objectMarker and uploadIDMarker for next request.
objectMarker = result.NextKeyMarker
uploadIDMarker = result.NextUploadIDMarker
// Send all multipart uploads.
for _, obj := range result.Uploads {
// Calculate total size of the uploaded parts if 'aggregateSize' is enabled.
if aggregateSize {
// Get total multipart size.
obj.Size, err = c.getTotalMultipartSize(bucketName, obj.Key, obj.UploadID)
if err != nil {
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
continue
}
}
select {
// Send individual uploads here.
case objectMultipartStatCh <- obj:
// If done channel return here.
case <-doneCh:
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
object := ObjectMultipartInfo{}
object.Key = obj.Prefix
object.Size = 0
select {
// Send delimited prefixes here.
case objectMultipartStatCh <- object:
// If done channel return here.
case <-doneCh:
return
}
}
// Listing ends when the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(objectMultipartStatCh)
// return.
return objectMultipartStatCh
}
// listMultipartUploadsQuery - (List Multipart Uploads).
// - Lists some or all (up to 1000) in-progress multipart uploads in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the uploads in a bucket.
// request parameters :-
// ---------
// ?key-marker - Specifies the multipart upload after which listing should begin.
// ?upload-id-marker - Together with key-marker specifies the multipart upload after which listing should begin.
// ?delimiter - A delimiter is a character you use to group keys.
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?max-uploads - Sets the maximum number of multipart uploads returned in the response body.
func (c Client) listMultipartUploadsQuery(bucketName, keyMarker, uploadIDMarker, prefix, delimiter string, maxUploads int) (ListMultipartUploadsResult, error) {
// Get resources properly escaped and lined up before using them in http request.
urlValues := make(url.Values)
// Set uploads.
urlValues.Set("uploads", "")
// Set object key marker.
if keyMarker != "" {
urlValues.Set("key-marker", keyMarker)
}
// Set upload id marker.
if uploadIDMarker != "" {
urlValues.Set("upload-id-marker", uploadIDMarker)
}
// Set prefix marker.
if prefix != "" {
urlValues.Set("prefix", prefix)
}
// Set delimiter.
if delimiter != "" {
urlValues.Set("delimiter", delimiter)
}
// maxUploads should be 1000 or less.
if maxUploads == 0 || maxUploads > 1000 {
maxUploads = 1000
}
// Set max-uploads.
urlValues.Set("max-uploads", fmt.Sprintf("%d", maxUploads))
// Execute GET on bucketName to list multipart uploads.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return ListMultipartUploadsResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListMultipartUploadsResult{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode response body.
listMultipartUploadsResult := ListMultipartUploadsResult{}
err = xmlDecoder(resp.Body, &listMultipartUploadsResult)
if err != nil {
return listMultipartUploadsResult, err
}
return listMultipartUploadsResult, nil
}
// listObjectParts lists all object parts recursively.
func (c Client) listObjectParts(bucketName, objectName, uploadID string) (partsInfo map[int]ObjectPart, err error) {
// Part number marker for the next batch of request.
var nextPartNumberMarker int
partsInfo = make(map[int]ObjectPart)
for {
// Get list of uploaded parts a maximum of 1000 per request.
listObjPartsResult, err := c.listObjectPartsQuery(bucketName, objectName, uploadID, nextPartNumberMarker, 1000)
if err != nil {
return nil, err
}
// Append to parts info.
for _, part := range listObjPartsResult.ObjectParts {
// Trim off the odd double quotes from ETag in the beginning and end.
part.ETag = strings.TrimPrefix(part.ETag, "\"")
part.ETag = strings.TrimSuffix(part.ETag, "\"")
partsInfo[part.PartNumber] = part
}
// Keep part number marker, for the next iteration.
nextPartNumberMarker = listObjPartsResult.NextPartNumberMarker
// Listing ends when the result is not truncated; return right here.
if !listObjPartsResult.IsTruncated {
break
}
}
// Return all the parts.
return partsInfo, nil
}
// findUploadID lists all incomplete uploads and finds the uploadID of the matching object name.
func (c Client) findUploadID(bucketName, objectName string) (uploadID string, err error) {
// Make list incomplete uploads recursive.
isRecursive := true
// Turn off size aggregation of individual parts, in this request.
isAggregateSize := false
// latestUpload to track the latest multipart info for objectName.
var latestUpload ObjectMultipartInfo
// Create done channel to cleanup the routine.
doneCh := make(chan struct{})
defer close(doneCh)
// List all incomplete uploads.
for mpUpload := range c.listIncompleteUploads(bucketName, objectName, isRecursive, isAggregateSize, doneCh) {
if mpUpload.Err != nil {
return "", mpUpload.Err
}
if objectName == mpUpload.Key {
if mpUpload.Initiated.Sub(latestUpload.Initiated) > 0 {
latestUpload = mpUpload
}
}
}
// Return the latest upload id.
return latestUpload.UploadID, nil
}
// getTotalMultipartSize - calculate the total uploaded size for a given multipart object.
func (c Client) getTotalMultipartSize(bucketName, objectName, uploadID string) (size int64, err error) {
// Iterate over all parts and aggregate the size.
partsInfo, err := c.listObjectParts(bucketName, objectName, uploadID)
if err != nil {
return 0, err
}
for _, partInfo := range partsInfo {
size += partInfo.Size
}
return size, nil
}
// listObjectPartsQuery (List Parts query)
// - lists some or all (up to 1000) parts that have been uploaded
// for a specific multipart upload
//
// You can use the request parameters as selection criteria to return
// a subset of the uploads in a bucket, request parameters :-
// ---------
// ?part-number-marker - Specifies the part after which listing should
// begin.
// ?max-parts - Maximum parts to be listed per request.
func (c Client) listObjectPartsQuery(bucketName, objectName, uploadID string, partNumberMarker, maxParts int) (ListObjectPartsResult, error) {
// Get resources properly escaped and lined up before using them in http request.
urlValues := make(url.Values)
// Set part number marker.
urlValues.Set("part-number-marker", fmt.Sprintf("%d", partNumberMarker))
// Set upload id.
urlValues.Set("uploadId", uploadID)
// maxParts should be 1000 or less.
if maxParts == 0 || maxParts > 1000 {
maxParts = 1000
}
// Set max parts.
urlValues.Set("max-parts", fmt.Sprintf("%d", maxParts))
// Execute GET on objectName to get list of parts.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return ListObjectPartsResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListObjectPartsResult{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Decode list object parts XML.
listObjectPartsResult := ListObjectPartsResult{}
err = xmlDecoder(resp.Body, &listObjectPartsResult)
if err != nil {
return listObjectPartsResult, err
}
return listObjectPartsResult, nil
}

230
vendor/github.com/minio/minio-go/api-notification.go generated vendored Normal file
View File

@@ -0,0 +1,230 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bufio"
"context"
"encoding/json"
"io"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/pkg/s3utils"
)
// GetBucketNotification - get bucket notification at a given path.
func (c Client) GetBucketNotification(bucketName string) (bucketNotification BucketNotification, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return BucketNotification{}, err
}
notification, err := c.getBucketNotification(bucketName)
if err != nil {
return BucketNotification{}, err
}
return notification, nil
}
// Request server for notification rules.
func (c Client) getBucketNotification(bucketName string) (BucketNotification, error) {
urlValues := make(url.Values)
urlValues.Set("notification", "")
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return BucketNotification{}, err
}
return processBucketNotificationResponse(bucketName, resp)
}
// processes the GetNotification http response from the server.
func processBucketNotificationResponse(bucketName string, resp *http.Response) (BucketNotification, error) {
if resp.StatusCode != http.StatusOK {
errResponse := httpRespToErrorResponse(resp, bucketName, "")
return BucketNotification{}, errResponse
}
var bucketNotification BucketNotification
err := xmlDecoder(resp.Body, &bucketNotification)
if err != nil {
return BucketNotification{}, err
}
return bucketNotification, nil
}
// identity represents the user ID; this is a compliance field.
type identity struct {
PrincipalID string `json:"principalId"`
}
// Notification event bucket metadata.
type bucketMeta struct {
Name string `json:"name"`
OwnerIdentity identity `json:"ownerIdentity"`
ARN string `json:"arn"`
}
// Notification event object metadata.
type objectMeta struct {
Key string `json:"key"`
Size int64 `json:"size,omitempty"`
ETag string `json:"eTag,omitempty"`
VersionID string `json:"versionId,omitempty"`
Sequencer string `json:"sequencer"`
}
// Notification event server specific metadata.
type eventMeta struct {
SchemaVersion string `json:"s3SchemaVersion"`
ConfigurationID string `json:"configurationId"`
Bucket bucketMeta `json:"bucket"`
Object objectMeta `json:"object"`
}
// sourceInfo represents information on the client that
// triggered the event notification.
type sourceInfo struct {
Host string `json:"host"`
Port string `json:"port"`
UserAgent string `json:"userAgent"`
}
// NotificationEvent represents an Amazon S3 bucket notification event.
type NotificationEvent struct {
EventVersion string `json:"eventVersion"`
EventSource string `json:"eventSource"`
AwsRegion string `json:"awsRegion"`
EventTime string `json:"eventTime"`
EventName string `json:"eventName"`
UserIdentity identity `json:"userIdentity"`
RequestParameters map[string]string `json:"requestParameters"`
ResponseElements map[string]string `json:"responseElements"`
S3 eventMeta `json:"s3"`
Source sourceInfo `json:"source"`
}
// NotificationInfo - represents the collection of notification events;
// it additionally reports errors, if any, while listening on bucket notifications.
type NotificationInfo struct {
Records []NotificationEvent
Err error
}
// ListenBucketNotification - listen on bucket notifications.
func (c Client) ListenBucketNotification(bucketName, prefix, suffix string, events []string, doneCh <-chan struct{}) <-chan NotificationInfo {
notificationInfoCh := make(chan NotificationInfo, 1)
// On success, start a routine that reads the response line by line.
go func(notificationInfoCh chan<- NotificationInfo) {
defer close(notificationInfoCh)
// Validate the bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
notificationInfoCh <- NotificationInfo{
Err: err,
}
return
}
// Listening for bucket notifications is not supported on AWS S3 or Google Cloud Storage endpoints.
if s3utils.IsAmazonEndpoint(c.endpointURL) || s3utils.IsGoogleEndpoint(c.endpointURL) {
notificationInfoCh <- NotificationInfo{
Err: ErrAPINotSupported("Listening for bucket notification is specific only to `minio` server endpoints"),
}
return
}
// Continuously run and listen on bucket notification.
// Create a done channel to control the retry goroutine.
retryDoneCh := make(chan struct{}, 1)
// Indicate to our routine to exit cleanly upon return.
defer close(retryDoneCh)
// Wait on the jitter retry loop.
for range c.newRetryTimerContinous(time.Second, time.Second*30, MaxJitter, retryDoneCh) {
urlValues := make(url.Values)
urlValues.Set("prefix", prefix)
urlValues.Set("suffix", suffix)
urlValues["events"] = events
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(context.Background(), "GET", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
if err != nil {
notificationInfoCh <- NotificationInfo{
Err: err,
}
return
}
// Validate http response, upon error return quickly.
if resp.StatusCode != http.StatusOK {
errResponse := httpRespToErrorResponse(resp, bucketName, "")
notificationInfoCh <- NotificationInfo{
Err: errResponse,
}
return
}
// Initialize a new bufio scanner, to read line by line.
bio := bufio.NewScanner(resp.Body)
// Close the response body.
defer resp.Body.Close()
// Unmarshal each line, returns marshalled values.
for bio.Scan() {
var notificationInfo NotificationInfo
if err = json.Unmarshal(bio.Bytes(), &notificationInfo); err != nil {
continue
}
// Send notifications on channel only if there are events received.
if len(notificationInfo.Records) > 0 {
select {
case notificationInfoCh <- notificationInfo:
case <-doneCh:
return
}
}
}
// Look for any underlying errors.
if err = bio.Err(); err != nil {
// For an unexpected connection drop from server, we close the body
// and re-connect.
if err == io.ErrUnexpectedEOF {
resp.Body.Close()
}
}
}
}(notificationInfoCh)
// Returns the notification info channel, for caller to start reading from.
return notificationInfoCh
}
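
A listening sketch; the endpoint check above restricts this to Minio server endpoints, so the endpoint below stands in for a Minio deployment, and the credentials and names are likewise placeholders:

package main

import (
	"fmt"
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("minio.example.com:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	doneCh := make(chan struct{})
	defer close(doneCh)
	// Events are plain strings; "s3:ObjectCreated:*" matches all creates.
	for info := range client.ListenBucketNotification("mybucket", "photos/", ".jpg", []string{
		"s3:ObjectCreated:*",
	}, doneCh) {
		if info.Err != nil {
			log.Fatalln(info.Err)
		}
		for _, event := range info.Records {
			fmt.Println("event:", event.EventName, "key:", event.S3.Object.Key)
		}
	}
}
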

213
vendor/github.com/minio/minio-go/api-presigned.go generated vendored Normal file
View File

@@ -0,0 +1,213 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"errors"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/pkg/s3signer"
"github.com/minio/minio-go/pkg/s3utils"
)
// presignURL - Returns a presigned URL for an input 'method'.
// The maximum expiry is 7 days (i.e. 604800 seconds) and the minimum is 1 second.
func (c Client) presignURL(method string, bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
// Input validation.
if method == "" {
return nil, ErrInvalidArgument("method cannot be empty.")
}
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
if err = isValidExpiry(expires); err != nil {
return nil, err
}
// Convert expires into seconds.
expireSeconds := int64(expires / time.Second)
reqMetadata := requestMetadata{
presignURL: true,
bucketName: bucketName,
objectName: objectName,
expires: expireSeconds,
queryValues: reqParams,
}
// Instantiate a new request.
// Since expires is set newRequest will presign the request.
var req *http.Request
if req, err = c.newRequest(method, reqMetadata); err != nil {
return nil, err
}
return req.URL, nil
}
// PresignedGetObject - Returns a presigned URL to access an object's
// data without credentials. The URL can have a maximum expiry of
// up to 7 days and a minimum of 1 second. Additionally you can override
// a set of response headers using the query parameters.
func (c Client) PresignedGetObject(bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
return c.presignURL("GET", bucketName, objectName, expires, reqParams)
}
// PresignedHeadObject - Returns a presigned URL to access object
// metadata without credentials. The URL can have a maximum expiry of
// up to 7 days and a minimum of 1 second. Additionally you can override
// a set of response headers using the query parameters.
func (c Client) PresignedHeadObject(bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
return c.presignURL("HEAD", bucketName, objectName, expires, reqParams)
}
// PresignedPutObject - Returns a presigned URL to upload an object
// without credentials. The URL can have a maximum expiry of up to 7 days
// and a minimum of 1 second.
func (c Client) PresignedPutObject(bucketName string, objectName string, expires time.Duration) (u *url.URL, err error) {
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
return c.presignURL("PUT", bucketName, objectName, expires, nil)
}
// Presign - returns a presigned URL for any http method of your choice
// along with custom request params. The URL can have a maximum expiry of
// up to 7 days and a minimum of 1 second.
func (c Client) Presign(method string, bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
return c.presignURL(method, bucketName, objectName, expires, reqParams)
}
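
A sketch generating a presigned download URL with an overridden response header, under the usual placeholder assumptions (minio.New, endpoint, credentials, names):

package main

import (
	"fmt"
	"log"
	"net/url"
	"time"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	// Override the download filename via a response header parameter.
	reqParams := make(url.Values)
	reqParams.Set("response-content-disposition", `attachment; filename="report.csv"`)
	// Valid for 24 hours, inside the 1s..7d window validated above.
	presignedURL, err := client.PresignedGetObject("mybucket", "myobject", 24*time.Hour, reqParams)
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println(presignedURL)
}
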
// PresignedPostPolicy - Returns the POST URL and form data to upload an object.
func (c Client) PresignedPostPolicy(p *PostPolicy) (u *url.URL, formData map[string]string, err error) {
// Validate input arguments.
if p.expiration.IsZero() {
return nil, nil, errors.New("Expiration time must be specified")
}
if _, ok := p.formData["key"]; !ok {
return nil, nil, errors.New("object key must be specified")
}
if _, ok := p.formData["bucket"]; !ok {
return nil, nil, errors.New("bucket name must be specified")
}
bucketName := p.formData["bucket"]
// Fetch the bucket location.
location, err := c.getBucketLocation(bucketName)
if err != nil {
return nil, nil, err
}
u, err = c.makeTargetURL(bucketName, "", location, nil)
if err != nil {
return nil, nil, err
}
// Get credentials from the configured credentials provider.
credValues, err := c.credsProvider.Get()
if err != nil {
return nil, nil, err
}
var (
signerType = credValues.SignerType
sessionToken = credValues.SessionToken
accessKeyID = credValues.AccessKeyID
secretAccessKey = credValues.SecretAccessKey
)
if signerType.IsAnonymous() {
return nil, nil, ErrInvalidArgument("Presigned operations are not supported for anonymous credentials")
}
// Keep time.
t := time.Now().UTC()
// For signature version '2' handle here.
if signerType.IsV2() {
policyBase64 := p.base64()
p.formData["policy"] = policyBase64
// For Google endpoint set this value to be 'GoogleAccessId'.
if s3utils.IsGoogleEndpoint(c.endpointURL) {
p.formData["GoogleAccessId"] = accessKeyID
} else {
// For all other endpoints set this value to be 'AWSAccessKeyId'.
p.formData["AWSAccessKeyId"] = accessKeyID
}
// Sign the policy.
p.formData["signature"] = s3signer.PostPresignSignatureV2(policyBase64, secretAccessKey)
return u, p.formData, nil
}
// Add date policy.
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-date",
value: t.Format(iso8601DateFormat),
}); err != nil {
return nil, nil, err
}
// Add algorithm policy.
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-algorithm",
value: signV4Algorithm,
}); err != nil {
return nil, nil, err
}
// Add a credential policy.
credential := s3signer.GetCredential(accessKeyID, location, t)
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-credential",
value: credential,
}); err != nil {
return nil, nil, err
}
if sessionToken != "" {
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-security-token",
value: sessionToken,
}); err != nil {
return nil, nil, err
}
}
// Get base64 encoded policy.
policyBase64 := p.base64()
// Fill in the form data.
p.formData["policy"] = policyBase64
p.formData["x-amz-algorithm"] = signV4Algorithm
p.formData["x-amz-credential"] = credential
p.formData["x-amz-date"] = t.Format(iso8601DateFormat)
if sessionToken != "" {
p.formData["x-amz-security-token"] = sessionToken
}
p.formData["x-amz-signature"] = s3signer.PostPresignSignatureV4(policyBase64, t, secretAccessKey, location)
return u, p.formData, nil
}
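
A browser-upload sketch, assuming this library version's PostPolicy helpers (minio.NewPostPolicy, SetBucket, SetKey, SetExpires), which are defined outside this hunk; endpoint, credentials, and names are placeholders:

package main

import (
	"fmt"
	"log"
	"time"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	// Bucket and key are mandatory, as validated above; the setters'
	// error returns are elided here for brevity.
	p := minio.NewPostPolicy()
	p.SetBucket("mybucket")
	p.SetKey("uploads/photo.jpg")
	p.SetExpires(time.Now().UTC().Add(15 * time.Minute))
	u, formData, err := client.PresignedPostPolicy(p)
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println("POST to:", u)
	for k, v := range formData {
		fmt.Printf("form field %s=%s\n", k, v)
	}
}
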

255
vendor/github.com/minio/minio-go/api-put-bucket.go generated vendored Normal file
View File

@@ -0,0 +1,255 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/json"
"encoding/xml"
"fmt"
"net/http"
"net/url"
"github.com/minio/minio-go/pkg/policy"
"github.com/minio/minio-go/pkg/s3utils"
)
/// Bucket operations
// MakeBucket creates a new bucket with bucketName.
//
// Location is an optional argument, by default all buckets are
// created in US Standard Region.
//
// For more supported regions on Amazon S3, see http://docs.aws.amazon.com/general/latest/gr/rande.html
// For more supported regions on Google Cloud Storage, see https://cloud.google.com/storage/docs/bucket-locations
func (c Client) MakeBucket(bucketName string, location string) (err error) {
defer func() {
// Save the location into cache on a successful makeBucket response.
if err == nil {
c.bucketLocCache.Set(bucketName, location)
}
}()
// Validate the input arguments.
if err := s3utils.CheckValidBucketNameStrict(bucketName); err != nil {
return err
}
// If location is empty, treat it as the default region 'us-east-1'.
if location == "" {
location = "us-east-1"
// For custom-region clients, default
// to the custom region instead of 'us-east-1'.
if c.region != "" {
location = c.region
}
}
// PUT bucket request metadata.
reqMetadata := requestMetadata{
bucketName: bucketName,
bucketLocation: location,
}
// If location is not 'us-east-1' create bucket location config.
if location != "us-east-1" && location != "" {
createBucketConfig := createBucketConfiguration{}
createBucketConfig.Location = location
var createBucketConfigBytes []byte
createBucketConfigBytes, err = xml.Marshal(createBucketConfig)
if err != nil {
return err
}
reqMetadata.contentMD5Base64 = sumMD5Base64(createBucketConfigBytes)
reqMetadata.contentSHA256Hex = sum256Hex(createBucketConfigBytes)
reqMetadata.contentBody = bytes.NewReader(createBucketConfigBytes)
reqMetadata.contentLength = int64(len(createBucketConfigBytes))
}
// Execute PUT to create a new bucket.
resp, err := c.executeMethod(context.Background(), "PUT", reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
// Success.
return nil
}
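
A usage sketch with a placeholder endpoint, credentials, and bucket name (minio.New is assumed from elsewhere in the library):

package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	// An empty location would default to "us-east-1" (or the client's
	// custom region), per the logic above.
	if err := client.MakeBucket("mybucket", "eu-west-1"); err != nil {
		log.Fatalln(err)
	}
	log.Println("bucket created in eu-west-1")
}
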
// SetBucketPolicy sets the access permissions on an existing bucket.
//
// For example
//
// none - owner gets full access [default].
// readonly - anonymous get access for everyone at a given object prefix.
// readwrite - anonymous list/put/delete access to a given object prefix.
// writeonly - anonymous put/delete access to a given object prefix.
func (c Client) SetBucketPolicy(bucketName string, objectPrefix string, bucketPolicy policy.BucketPolicy) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return err
}
if !bucketPolicy.IsValidBucketPolicy() {
return ErrInvalidArgument(fmt.Sprintf("Invalid bucket policy provided. %s", bucketPolicy))
}
policyInfo, err := c.getBucketPolicy(bucketName)
errResponse := ToErrorResponse(err)
if err != nil && errResponse.Code != "NoSuchBucketPolicy" {
return err
}
if bucketPolicy == policy.BucketPolicyNone && policyInfo.Statements == nil {
// As the request is for removing policy and the bucket
// has empty policy statements, just return success.
return nil
}
policyInfo.Statements = policy.SetPolicy(policyInfo.Statements, bucketPolicy, bucketName, objectPrefix)
// Save the updated policies.
return c.putBucketPolicy(bucketName, policyInfo)
}
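
A sketch granting anonymous read access under a prefix, assuming the policy.BucketPolicyReadOnly constant from the pkg/policy package imported above (defined outside this hunk); the other names are placeholders:

package main

import (
	"log"

	minio "github.com/minio/minio-go"
	"github.com/minio/minio-go/pkg/policy"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true) // placeholders
	if err != nil {
		log.Fatalln(err)
	}
	// Anonymous GET access for everything under "public/".
	if err := client.SetBucketPolicy("mybucket", "public/", policy.BucketPolicyReadOnly); err != nil {
		log.Fatalln(err)
	}
}
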
// Saves a new bucket policy.
func (c Client) putBucketPolicy(bucketName string, policyInfo policy.BucketAccessPolicy) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// If there are no policy statements, we should remove the entire policy.
if len(policyInfo.Statements) == 0 {
return c.removeBucketPolicy(bucketName)
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("policy", "")
policyBytes, err := json.Marshal(&policyInfo)
if err != nil {
return err
}
policyBuffer := bytes.NewReader(policyBytes)
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: policyBuffer,
contentLength: int64(len(policyBytes)),
contentMD5Base64: sumMD5Base64(policyBytes),
contentSHA256Hex: sum256Hex(policyBytes),
}
// Execute PUT to upload a new bucket policy.
resp, err := c.executeMethod(context.Background(), "PUT", reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// Removes all policies on a bucket.
func (c Client) removeBucketPolicy(bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("policy", "")
// Execute DELETE on objectName.
resp, err := c.executeMethod(context.Background(), "DELETE", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
return nil
}
// SetBucketNotification saves a new bucket notification.
func (c Client) SetBucketNotification(bucketName string, bucketNotification BucketNotification) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("notification", "")
notifBytes, err := xml.Marshal(bucketNotification)
if err != nil {
return err
}
notifBuffer := bytes.NewReader(notifBytes)
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: notifBuffer,
contentLength: int64(len(notifBytes)),
contentMD5Base64: sumMD5Base64(notifBytes),
contentSHA256Hex: sum256Hex(notifBytes),
}
// Execute PUT to upload a new bucket notification.
resp, err := c.executeMethod(context.Background(), "PUT", reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// RemoveAllBucketNotification - removes all previously configured bucket notifications.
func (c Client) RemoveAllBucketNotification(bucketName string) error {
return c.SetBucketNotification(bucketName, BucketNotification{})
}

View File

@@ -0,0 +1,111 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"math"
"os"
"github.com/minio/minio-go/pkg/s3utils"
)
// Verify if reader is *minio.Object
func isObject(reader io.Reader) (ok bool) {
_, ok = reader.(*Object)
return
}
// Verify if reader is a generic ReaderAt
func isReadAt(reader io.Reader) (ok bool) {
_, ok = reader.(io.ReaderAt)
if ok {
var v *os.File
v, ok = reader.(*os.File)
if ok {
// Stdin, Stdout and Stderr all have *os.File type,
// which happens to also be io.ReaderAt compatible.
// We need a special condition so that they are
// ignored by this function.
for _, f := range []string{
"/dev/stdin",
"/dev/stdout",
"/dev/stderr",
} {
if f == v.Name() {
ok = false
break
}
}
}
}
return
}
// optimalPartInfo - calculate the optimal part info for a given
// object size.
//
// NOTE: The assumption here is that for any object uploaded to any S3 compatible
// object storage, the following parameters are constants.
//
// maxPartsCount - 10000
// minPartSize - 64MiB
// maxMultipartPutObjectSize - 5TiB
//
func optimalPartInfo(objectSize int64) (totalPartsCount int, partSize int64, lastPartSize int64, err error) {
// If object size is '-1', set it to the 5TiB maximum.
if objectSize == -1 {
objectSize = maxMultipartPutObjectSize
}
// Error out if object size is larger than the supported maximum.
if objectSize > maxMultipartPutObjectSize {
err = ErrEntityTooLarge(objectSize, maxMultipartPutObjectSize, "", "")
return
}
// Use floats for part size for all calculations to avoid
// overflows during float64 to int64 conversions.
// Divide as floats; integer division here would truncate before Ceil.
partSizeFlt := math.Ceil(float64(objectSize) / float64(maxPartsCount))
partSizeFlt = math.Ceil(partSizeFlt/minPartSize) * minPartSize
// Total parts count.
totalPartsCount = int(math.Ceil(float64(objectSize) / partSizeFlt))
// Part size.
partSize = int64(partSizeFlt)
// Last part size.
lastPartSize = objectSize - int64(totalPartsCount-1)*partSize
return totalPartsCount, partSize, lastPartSize, nil
}
// newUploadID - initiate a new multipart upload for an object
// and return the fresh upload id.
func (c Client) newUploadID(ctx context.Context, bucketName, objectName string, opts PutObjectOptions) (uploadID string, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return "", err
}
// Initiate multipart upload for an object.
initMultipartUploadResult, err := c.initiateMultipartUpload(ctx, bucketName, objectName, opts)
if err != nil {
return "", err
}
return initMultipartUploadResult.UploadID, nil
}
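The part-sizing arithmetic above is worth a worked example. A self-contained sketch, restating the unexported constants locally (10000 parts, 64MiB minimum part size, per the NOTE above): a 10GiB object yields a 64MiB part size and 160 parts.
package main

import (
	"fmt"
	"math"
)

const (
	maxPartsCount = 10000            // restated locally; unexported upstream
	minPartSize   = 64 * 1024 * 1024 // 64MiB
)

func main() {
	objectSize := int64(10) << 30 // 10GiB
	// Smallest part size that fits within 10000 parts...
	partSize := math.Ceil(float64(objectSize) / float64(maxPartsCount))
	// ...rounded up to a multiple of the minimum part size.
	partSize = math.Ceil(partSize/minPartSize) * minPartSize
	totalParts := int(math.Ceil(float64(objectSize) / partSize))
	lastPart := objectSize - int64(totalParts-1)*int64(partSize)
	fmt.Println(totalParts, int64(partSize), lastPart) // 160 67108864 67108864
}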


@@ -0,0 +1,39 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
)
// PutObjectWithContext - Identical to PutObject call, but accepts context to facilitate request cancellation.
func (c Client) PutObjectWithContext(ctx context.Context, bucketName, objectName string, reader io.Reader, objectSize int64,
opts PutObjectOptions) (n int64, err error) {
err = opts.validate()
if err != nil {
return 0, err
}
if opts.EncryptMaterials != nil {
if err = opts.EncryptMaterials.SetupEncryptMode(reader); err != nil {
return 0, err
}
return c.putObjectMultipartStreamNoLength(ctx, bucketName, objectName, opts.EncryptMaterials, opts)
}
return c.putObjectCommon(ctx, bucketName, objectName, reader, objectSize, opts)
}


@@ -0,0 +1,23 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
// CopyObject - copy a source object into a new object
func (c Client) CopyObject(dst DestinationInfo, src SourceInfo) error {
return c.ComposeObject(dst, []SourceInfo{src})
}


@@ -0,0 +1,44 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"github.com/minio/minio-go/pkg/encrypt"
)
// PutEncryptedObject - Encrypt and store object.
func (c Client) PutEncryptedObject(bucketName, objectName string, reader io.Reader, encryptMaterials encrypt.Materials) (n int64, err error) {
if encryptMaterials == nil {
return 0, ErrInvalidArgument("Unable to recognize empty encryption properties")
}
if err := encryptMaterials.SetupEncryptMode(reader); err != nil {
return 0, err
}
return c.PutObjectWithContext(context.Background(), bucketName, objectName, reader, -1, PutObjectOptions{EncryptMaterials: encryptMaterials})
}
// FPutEncryptedObject - Encrypt and store an object with contents from file at filePath.
func (c Client) FPutEncryptedObject(bucketName, objectName, filePath string, encryptMaterials encrypt.Materials) (n int64, err error) {
return c.FPutObjectWithContext(context.Background(), bucketName, objectName, filePath, PutObjectOptions{EncryptMaterials: encryptMaterials})
}


@@ -0,0 +1,64 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"mime"
"os"
"path/filepath"
"github.com/minio/minio-go/pkg/s3utils"
)
// FPutObjectWithContext - Create an object in a bucket, with contents from file at filePath. Allows request cancellation.
func (c Client) FPutObjectWithContext(ctx context.Context, bucketName, objectName, filePath string, opts PutObjectOptions) (n int64, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return 0, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return 0, err
}
// Open the referenced file.
fileReader, err := os.Open(filePath)
// If any error fail quickly here.
if err != nil {
return 0, err
}
defer fileReader.Close()
// Save the file stat.
fileStat, err := fileReader.Stat()
if err != nil {
return 0, err
}
// Save the file size.
fileSize := fileStat.Size()
// Set contentType based on filepath extension if not given or default
// value of "application/octet-stream" if the extension has no associated type.
if opts.ContentType == "" {
if opts.ContentType = mime.TypeByExtension(filepath.Ext(filePath)); opts.ContentType == "" {
opts.ContentType = "application/octet-stream"
}
}
return c.PutObjectWithContext(ctx, bucketName, objectName, fileReader, fileSize, opts)
}


@@ -0,0 +1,27 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
)
// FPutObject - Create an object in a bucket, with contents from the file at filePath.
func (c Client) FPutObject(bucketName, objectName, filePath string, opts PutObjectOptions) (n int64, err error) {
return c.FPutObjectWithContext(context.Background(), bucketName, objectName, filePath, opts)
}


@@ -0,0 +1,373 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"runtime/debug"
"sort"
"strconv"
"strings"
"github.com/minio/minio-go/pkg/s3utils"
)
func (c Client) putObjectMultipart(ctx context.Context, bucketName, objectName string, reader io.Reader, size int64,
opts PutObjectOptions) (n int64, err error) {
n, err = c.putObjectMultipartNoStream(ctx, bucketName, objectName, reader, opts)
if err != nil {
errResp := ToErrorResponse(err)
// If multipart functionality is not available, fall
// back to a single PutObject operation.
if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
// Verify if size of reader is greater than '5GiB'.
if size > maxSinglePutObjectSize {
return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
}
// Fall back to uploading as single PutObject operation.
return c.putObjectNoChecksum(ctx, bucketName, objectName, reader, size, opts)
}
}
return n, err
}
func (c Client) putObjectMultipartNoStream(ctx context.Context, bucketName, objectName string, reader io.Reader, opts PutObjectOptions) (n int64, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return 0, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return 0, err
}
// Total data read and written to server; should be equal to
// 'size' at the end of the call.
var totalUploadedSize int64
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, _, err := optimalPartInfo(-1)
if err != nil {
return 0, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return 0, err
}
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Part number always starts with '1'.
partNumber := 1
// Initialize parts uploaded map.
partsInfo := make(map[int]ObjectPart)
// Create a buffer.
buf := make([]byte, partSize)
defer debug.FreeOSMemory()
for partNumber <= totalPartsCount {
// Choose hash algorithms to be calculated by hashCopyN,
// avoiding sha256 with non-v4 signature requests or on
// HTTPS connections.
hashAlgos, hashSums := c.hashMaterials()
length, rErr := io.ReadFull(reader, buf)
if rErr == io.EOF {
break
}
if rErr != nil && rErr != io.ErrUnexpectedEOF {
return 0, rErr
}
// Calculate hash sums while writing the part data into the hashers.
for k, v := range hashAlgos {
v.Write(buf[:length])
hashSums[k] = v.Sum(nil)
}
// Update progress reader appropriately to the latest offset
// as we read from the source.
rd := newHook(bytes.NewReader(buf[:length]), opts.Progress)
// Checksums.
var (
md5Base64 string
sha256Hex string
)
if hashSums["md5"] != nil {
md5Base64 = base64.StdEncoding.EncodeToString(hashSums["md5"])
}
if hashSums["sha256"] != nil {
sha256Hex = hex.EncodeToString(hashSums["sha256"])
}
// Proceed to upload the part.
var objPart ObjectPart
objPart, err = c.uploadPart(ctx, bucketName, objectName, uploadID, rd, partNumber,
md5Base64, sha256Hex, int64(length), opts.UserMetadata)
if err != nil {
return totalUploadedSize, err
}
// Save successfully uploaded part metadata.
partsInfo[partNumber] = objPart
// Save successfully uploaded size.
totalUploadedSize += int64(length)
// Increment part number.
partNumber++
// For unknown size, break on EOF; we do not have
// to upload up to totalPartsCount.
if rErr == io.EOF {
break
}
}
// Loop over total uploaded parts to save them in
// Parts array before completing the multipart request.
for i := 1; i < partNumber; i++ {
part, ok := partsInfo[i]
if !ok {
return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", i))
}
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
if _, err = c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload); err != nil {
return totalUploadedSize, err
}
// Return final size.
return totalUploadedSize, nil
}
// initiateMultipartUpload - Initiates a multipart upload and returns an upload ID.
func (c Client) initiateMultipartUpload(ctx context.Context, bucketName, objectName string, opts PutObjectOptions) (initiateMultipartUploadResult, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return initiateMultipartUploadResult{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return initiateMultipartUploadResult{}, err
}
// Initialize url queries.
urlValues := make(url.Values)
urlValues.Set("uploads", "")
// Set ContentType header.
customHeader := opts.Header()
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
customHeader: customHeader,
}
// Execute POST on an objectName to initiate multipart upload.
resp, err := c.executeMethod(ctx, "POST", reqMetadata)
defer closeResponse(resp)
if err != nil {
return initiateMultipartUploadResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return initiateMultipartUploadResult{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Decode xml for new multipart upload.
initiateMultipartUploadResult := initiateMultipartUploadResult{}
err = xmlDecoder(resp.Body, &initiateMultipartUploadResult)
if err != nil {
return initiateMultipartUploadResult, err
}
return initiateMultipartUploadResult, nil
}
const serverEncryptionKeyPrefix = "x-amz-server-side-encryption"
// uploadPart - Uploads a part in a multipart upload.
func (c Client) uploadPart(ctx context.Context, bucketName, objectName, uploadID string, reader io.Reader,
partNumber int, md5Base64, sha256Hex string, size int64, metadata map[string]string) (ObjectPart, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ObjectPart{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return ObjectPart{}, err
}
if size > maxPartSize {
return ObjectPart{}, ErrEntityTooLarge(size, maxPartSize, bucketName, objectName)
}
if size <= -1 {
return ObjectPart{}, ErrEntityTooSmall(size, bucketName, objectName)
}
if partNumber <= 0 {
return ObjectPart{}, ErrInvalidArgument("Part number cannot be negative or equal to zero.")
}
if uploadID == "" {
return ObjectPart{}, ErrInvalidArgument("UploadID cannot be empty.")
}
// Get resources properly escaped and lined up before using them in http request.
urlValues := make(url.Values)
// Set part number.
urlValues.Set("partNumber", strconv.Itoa(partNumber))
// Set upload id.
urlValues.Set("uploadId", uploadID)
// Set encryption headers, if any.
customHeader := make(http.Header)
for k, v := range metadata {
if len(v) > 0 {
if strings.HasPrefix(strings.ToLower(k), serverEncryptionKeyPrefix) {
customHeader.Set(k, v)
}
}
}
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
customHeader: customHeader,
contentBody: reader,
contentLength: size,
contentMD5Base64: md5Base64,
contentSHA256Hex: sha256Hex,
}
// Execute PUT on each part.
resp, err := c.executeMethod(ctx, "PUT", reqMetadata)
defer closeResponse(resp)
if err != nil {
return ObjectPart{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ObjectPart{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Once successfully uploaded, return completed part.
objPart := ObjectPart{}
objPart.Size = size
objPart.PartNumber = partNumber
// Trim off the odd double quotes from ETag in the beginning and end.
objPart.ETag = strings.TrimPrefix(resp.Header.Get("ETag"), "\"")
objPart.ETag = strings.TrimSuffix(objPart.ETag, "\"")
return objPart, nil
}
// completeMultipartUpload - Completes a multipart upload by assembling previously uploaded parts.
func (c Client) completeMultipartUpload(ctx context.Context, bucketName, objectName, uploadID string,
complete completeMultipartUpload) (completeMultipartUploadResult, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return completeMultipartUploadResult{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return completeMultipartUploadResult{}, err
}
// Initialize url queries.
urlValues := make(url.Values)
urlValues.Set("uploadId", uploadID)
// Marshal complete multipart body.
completeMultipartUploadBytes, err := xml.Marshal(complete)
if err != nil {
return completeMultipartUploadResult{}, err
}
// Instantiate all the complete multipart buffer.
completeMultipartUploadBuffer := bytes.NewReader(completeMultipartUploadBytes)
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentBody: completeMultipartUploadBuffer,
contentLength: int64(len(completeMultipartUploadBytes)),
contentSHA256Hex: sum256Hex(completeMultipartUploadBytes),
}
// Execute POST to complete multipart upload for an objectName.
resp, err := c.executeMethod(ctx, "POST", reqMetadata)
defer closeResponse(resp)
if err != nil {
return completeMultipartUploadResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return completeMultipartUploadResult{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Read resp.Body into a []byte to parse for Error response inside the body
var b []byte
b, err = ioutil.ReadAll(resp.Body)
if err != nil {
return completeMultipartUploadResult{}, err
}
// Decode completed multipart upload response on success.
completeMultipartUploadResult := completeMultipartUploadResult{}
err = xmlDecoder(bytes.NewReader(b), &completeMultipartUploadResult)
if err != nil {
// xml parsing failure due to the presence of an ill-formed xml fragment
return completeMultipartUploadResult, err
} else if completeMultipartUploadResult.Bucket == "" {
// xml's Decode method ignores well-formed xml that doesn't apply to the type of value supplied.
// In this case, it would leave completeMultipartUploadResult with the corresponding zero-values
// of its members.
// Decode completed multipart upload response on failure
completeMultipartUploadErr := ErrorResponse{}
err = xmlDecoder(bytes.NewReader(b), &completeMultipartUploadErr)
if err != nil {
// xml parsing failure due to the presence of an ill-formed xml fragment
return completeMultipartUploadResult, err
}
return completeMultipartUploadResult, completeMultipartUploadErr
}
return completeMultipartUploadResult, nil
}
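The per-part checksum bookkeeping in putObjectMultipartNoStream reduces to writing each part's bytes through a set of hash.Hash instances and encoding the sums (md5 as base64 for Content-MD5, sha256 as hex for the payload header). A standalone sketch of that pattern:
package main

import (
	"crypto/md5"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"hash"
)

func main() {
	buf := []byte("example part payload")
	// Mirrors the hashAlgos/hashSums maps used above.
	hashAlgos := map[string]hash.Hash{
		"md5":    md5.New(),
		"sha256": sha256.New(),
	}
	hashSums := make(map[string][]byte)
	for k, v := range hashAlgos {
		v.Write(buf)
		hashSums[k] = v.Sum(nil)
	}
	fmt.Println("Content-MD5:", base64.StdEncoding.EncodeToString(hashSums["md5"]))
	fmt.Println("x-amz-content-sha256:", hex.EncodeToString(hashSums["sha256"]))
}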


@@ -0,0 +1,417 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"fmt"
"io"
"net/http"
"sort"
"strings"
"github.com/minio/minio-go/pkg/s3utils"
)
// putObjectMultipartStream - upload a large object using
// multipart upload and streaming signature for signing payload.
// Comprehensive put object operation involving multipart uploads.
//
// Following code handles these types of readers.
//
// - *minio.Object
// - Any reader which has a method 'ReadAt()'
//
func (c Client) putObjectMultipartStream(ctx context.Context, bucketName, objectName string,
reader io.Reader, size int64, opts PutObjectOptions) (n int64, err error) {
if !isObject(reader) && isReadAt(reader) {
// Verify if the reader implements ReadAt and it is not a *minio.Object then we will use parallel uploader.
n, err = c.putObjectMultipartStreamFromReadAt(ctx, bucketName, objectName, reader.(io.ReaderAt), size, opts)
} else {
n, err = c.putObjectMultipartStreamNoChecksum(ctx, bucketName, objectName, reader, size, opts)
}
if err != nil {
errResp := ToErrorResponse(err)
// If multipart functionality is not available, fall
// back to a single PutObject operation.
if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
// Verify if size of reader is greater than '5GiB'.
if size > maxSinglePutObjectSize {
return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
}
// Fall back to uploading as single PutObject operation.
return c.putObjectNoChecksum(ctx, bucketName, objectName, reader, size, opts)
}
}
return n, err
}
// uploadedPartRes - the response received from a part upload.
type uploadedPartRes struct {
Error error // Any error encountered while uploading the part.
PartNum int // Number of the part uploaded.
Size int64 // Size of the part uploaded.
Part *ObjectPart
}
type uploadPartReq struct {
PartNum int // Number of the part to upload.
Part *ObjectPart // Upload result, filled in once the part is uploaded.
}
// putObjectMultipartStreamFromReadAt - Uploads files bigger than 64MiB.
// Supports all readers which implement the io.ReaderAt interface
// (ReadAt method).
//
// NOTE: This function is meant to be used for all readers which
// implement io.ReaderAt, which allows uploading parts by reading at
// an offset and thus avoids re-reading data that was already
// uploaded. Parts are uploaded in parallel; if any part fails, the
// whole multipart upload is aborted to relinquish storage space.
func (c Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketName, objectName string,
reader io.ReaderAt, size int64, opts PutObjectOptions) (n int64, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return 0, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return 0, err
}
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
if err != nil {
return 0, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return 0, err
}
// Aborts the multipart upload in progress, if the
// function returns any error, since we do not resume
// we should purge the parts which have been uploaded
// to relinquish storage space.
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Total data read and written to server; should be equal to 'size' at the end of the call.
var totalUploadedSize int64
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Declare a channel that sends the next part number to be uploaded.
// Buffered to 10000 because that's the maximum number of parts allowed
// by S3.
uploadPartsCh := make(chan uploadPartReq, 10000)
// Declare a channel that sends back the response of a part upload.
// Buffered to 10000 because that's the maximum number of parts allowed
// by S3.
uploadedPartsCh := make(chan uploadedPartRes, 10000)
// Used for readability, lastPartNumber is always totalPartsCount.
lastPartNumber := totalPartsCount
// Send each part number to the channel to be processed.
for p := 1; p <= totalPartsCount; p++ {
uploadPartsCh <- uploadPartReq{PartNum: p, Part: nil}
}
close(uploadPartsCh)
// Receive each part number from the channel, allowing opts.getNumThreads() parallel uploads.
for w := 1; w <= opts.getNumThreads(); w++ {
go func(partSize int64) {
// Each worker will draw from the part channel and upload in parallel.
for uploadReq := range uploadPartsCh {
// Calculate this part's read offset as a multiple of partSize.
readOffset := int64(uploadReq.PartNum-1) * partSize
// As a special case, if partNumber is lastPartNumber, the
// offset and size are derived from the last part size.
if uploadReq.PartNum == lastPartNumber {
readOffset = (size - lastPartSize)
partSize = lastPartSize
}
// Get a section reader on a particular offset.
sectionReader := newHook(io.NewSectionReader(reader, readOffset, partSize), opts.Progress)
// Proceed to upload the part.
var objPart ObjectPart
objPart, err = c.uploadPart(ctx, bucketName, objectName, uploadID,
sectionReader, uploadReq.PartNum,
"", "", partSize, opts.UserMetadata)
if err != nil {
uploadedPartsCh <- uploadedPartRes{
Size: 0,
Error: err,
}
// Exit the goroutine.
return
}
// Save successfully uploaded part metadata.
uploadReq.Part = &objPart
// Send successful part info through the channel.
uploadedPartsCh <- uploadedPartRes{
Size: objPart.Size,
PartNum: uploadReq.PartNum,
Part: uploadReq.Part,
Error: nil,
}
}
}(partSize)
}
// Gather the responses as they occur and update any
// progress bar.
for u := 1; u <= totalPartsCount; u++ {
uploadRes := <-uploadedPartsCh
if uploadRes.Error != nil {
return totalUploadedSize, uploadRes.Error
}
// Retrieve each uploaded part and store it to be completed.
part := uploadRes.Part
if part == nil {
return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", uploadRes.PartNum))
}
// Update the totalUploadedSize.
totalUploadedSize += uploadRes.Size
// Store the parts to be completed in order.
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Verify if we uploaded all the data.
if totalUploadedSize != size {
return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
_, err = c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload)
if err != nil {
return totalUploadedSize, err
}
// Return final size.
return totalUploadedSize, nil
}
func (c Client) putObjectMultipartStreamNoChecksum(ctx context.Context, bucketName, objectName string,
reader io.Reader, size int64, opts PutObjectOptions) (n int64, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return 0, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return 0, err
}
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
if err != nil {
return 0, err
}
// Initiates a new multipart request
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return 0, err
}
// Aborts the multipart upload if the function returns
// any error, since we do not resume we should purge
// the parts which have been uploaded to relinquish
// storage space.
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Total data read and written to server; should be equal to 'size' at the end of the call.
var totalUploadedSize int64
// Initialize parts uploaded map.
partsInfo := make(map[int]ObjectPart)
// Part number always starts with '1'.
var partNumber int
for partNumber = 1; partNumber <= totalPartsCount; partNumber++ {
// Update progress reader appropriately to the latest offset
// as we read from the source.
hookReader := newHook(reader, opts.Progress)
// Proceed to upload the part.
if partNumber == totalPartsCount {
partSize = lastPartSize
}
var objPart ObjectPart
objPart, err = c.uploadPart(ctx, bucketName, objectName, uploadID,
io.LimitReader(hookReader, partSize),
partNumber, "", "", partSize, opts.UserMetadata)
if err != nil {
return totalUploadedSize, err
}
// Save successfully uploaded part metadata.
partsInfo[partNumber] = objPart
// Save successfully uploaded size.
totalUploadedSize += partSize
}
// Verify if we uploaded all the data.
if size > 0 {
if totalUploadedSize != size {
return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
}
}
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Loop over total uploaded parts to save them in
// Parts array before completing the multipart request.
for i := 1; i < partNumber; i++ {
part, ok := partsInfo[i]
if !ok {
return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", i))
}
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
_, err = c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload)
if err != nil {
return totalUploadedSize, err
}
// Return final size.
return totalUploadedSize, nil
}
// putObjectNoChecksum - special function used for Google Cloud Storage.
// This path exists because Google's multipart API is not S3 compatible.
func (c Client) putObjectNoChecksum(ctx context.Context, bucketName, objectName string, reader io.Reader, size int64, opts PutObjectOptions) (n int64, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return 0, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return 0, err
}
// Size -1 is only supported on Google Cloud Storage; we error
// out in all other situations.
if size < 0 && !s3utils.IsGoogleEndpoint(c.endpointURL) {
return 0, ErrEntityTooSmall(size, bucketName, objectName)
}
if size > 0 {
if isReadAt(reader) && !isObject(reader) {
// Guard the type assertion: an io.ReaderAt is not
// necessarily an io.Seeker.
if seeker, ok := reader.(io.Seeker); ok {
offset, err := seeker.Seek(0, io.SeekCurrent)
if err != nil {
return 0, ErrInvalidArgument(err.Error())
}
reader = io.NewSectionReader(reader.(io.ReaderAt), offset, size)
}
}
}
// Update progress reader appropriately to the latest offset as we
// read from the source.
readSeeker := newHook(reader, opts.Progress)
// This function does not calculate sha256 and md5sum for payload.
// Execute put object.
st, err := c.putObjectDo(ctx, bucketName, objectName, readSeeker, "", "", size, opts)
if err != nil {
return 0, err
}
if st.Size != size {
return 0, ErrUnexpectedEOF(st.Size, size, bucketName, objectName)
}
return size, nil
}
// putObjectDo - executes the put object http operation.
// NOTE: You must have WRITE permissions on a bucket to add an object to it.
func (c Client) putObjectDo(ctx context.Context, bucketName, objectName string, reader io.Reader, md5Base64, sha256Hex string, size int64, opts PutObjectOptions) (ObjectInfo, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ObjectInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return ObjectInfo{}, err
}
// Set headers.
customHeader := opts.Header()
// Populate request metadata.
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
customHeader: customHeader,
contentBody: reader,
contentLength: size,
contentMD5Base64: md5Base64,
contentSHA256Hex: sha256Hex,
}
// Execute PUT on an objectName.
resp, err := c.executeMethod(ctx, "PUT", reqMetadata)
defer closeResponse(resp)
if err != nil {
return ObjectInfo{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
var objInfo ObjectInfo
// Trim off the odd double quotes from ETag in the beginning and end.
objInfo.ETag = strings.TrimPrefix(resp.Header.Get("ETag"), "\"")
objInfo.ETag = strings.TrimSuffix(objInfo.ETag, "\"")
// A success here means data was written to server successfully.
objInfo.Size = size
// Return here.
return objInfo, nil
}
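The parallel uploader above is a conventional channel-based worker pool: part numbers are queued on a buffered channel, a fixed set of goroutines drains it, and results flow back on a second channel. A stripped-down sketch of the same pattern (all names hypothetical):
package main

import "fmt"

type partReq struct{ num int }

type partRes struct {
	num int
	err error
}

func main() {
	const totalParts, workers = 8, 3
	reqs := make(chan partReq, totalParts)
	results := make(chan partRes, totalParts)
	// Queue all part numbers up front, then close the channel.
	for p := 1; p <= totalParts; p++ {
		reqs <- partReq{num: p}
	}
	close(reqs)
	// Workers drain the queue in parallel.
	for w := 0; w < workers; w++ {
		go func() {
			for r := range reqs {
				// A real worker would call uploadPart here and
				// forward its error; we just report success.
				results <- partRes{num: r.num}
			}
		}()
	}
	// Gather exactly one result per part, failing fast on error.
	for i := 0; i < totalParts; i++ {
		res := <-results
		if res.err != nil {
			fmt.Println("part", res.num, "failed:", res.err)
			return
		}
	}
	fmt.Println("all", totalParts, "parts uploaded")
}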

vendor/github.com/minio/minio-go/api-put-object.go generated vendored Normal file

@@ -0,0 +1,258 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"runtime/debug"
"sort"
"github.com/minio/minio-go/pkg/encrypt"
"github.com/minio/minio-go/pkg/s3utils"
)
// PutObjectOptions represents options specified by user for PutObject call
type PutObjectOptions struct {
UserMetadata map[string]string
Progress io.Reader
ContentType string
ContentEncoding string
ContentDisposition string
CacheControl string
EncryptMaterials encrypt.Materials
NumThreads uint
StorageClass string
}
// getNumThreads - gets the number of threads to be used in the multipart
// put object operation
func (opts PutObjectOptions) getNumThreads() (numThreads int) {
if opts.NumThreads > 0 {
numThreads = int(opts.NumThreads)
} else {
numThreads = totalWorkers
}
return
}
// Header - constructs the headers from metadata entered by user in
// PutObjectOptions struct
func (opts PutObjectOptions) Header() (header http.Header) {
header = make(http.Header)
if opts.ContentType != "" {
header["Content-Type"] = []string{opts.ContentType}
} else {
header["Content-Type"] = []string{"application/octet-stream"}
}
if opts.ContentEncoding != "" {
header["Content-Encoding"] = []string{opts.ContentEncoding}
}
if opts.ContentDisposition != "" {
header["Content-Disposition"] = []string{opts.ContentDisposition}
}
if opts.CacheControl != "" {
header["Cache-Control"] = []string{opts.CacheControl}
}
if opts.EncryptMaterials != nil {
header[amzHeaderIV] = []string{opts.EncryptMaterials.GetIV()}
header[amzHeaderKey] = []string{opts.EncryptMaterials.GetKey()}
header[amzHeaderMatDesc] = []string{opts.EncryptMaterials.GetDesc()}
}
if opts.StorageClass != "" {
header[amzStorageClass] = []string{opts.StorageClass}
}
for k, v := range opts.UserMetadata {
if !isAmzHeader(k) && !isStandardHeader(k) && !isSSEHeader(k) && !isStorageClassHeader(k) {
header["X-Amz-Meta-"+k] = []string{v}
} else {
header[k] = []string{v}
}
}
return
}
// validate() checks if the UserMetadata map has standard headers or client side
// encryption headers and returns an error if so.
func (opts PutObjectOptions) validate() (err error) {
for k := range opts.UserMetadata {
if isStandardHeader(k) || isCSEHeader(k) || isStorageClassHeader(k) {
return ErrInvalidArgument(k + " unsupported request parameter for user defined metadata from minio-go")
}
}
return nil
}
// completedParts is a collection of parts sortable by their part numbers.
// used for sorting the uploaded parts before completing the multipart request.
type completedParts []CompletePart
func (a completedParts) Len() int { return len(a) }
func (a completedParts) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a completedParts) Less(i, j int) bool { return a[i].PartNumber < a[j].PartNumber }
// PutObject creates an object in a bucket.
//
// You must have WRITE permissions on a bucket to create an object.
//
// - For size smaller than 64MiB PutObject automatically does a
// single atomic Put operation.
// - For size larger than 64MiB PutObject automatically does a
// multipart Put operation.
// - For size input as -1 PutObject does a multipart Put operation
// until input stream reaches EOF. Maximum object size that can
// be uploaded through this operation will be 5TiB.
func (c Client) PutObject(bucketName, objectName string, reader io.Reader, objectSize int64,
opts PutObjectOptions) (n int64, err error) {
return c.PutObjectWithContext(context.Background(), bucketName, objectName, reader, objectSize, opts)
}
func (c Client) putObjectCommon(ctx context.Context, bucketName, objectName string, reader io.Reader, size int64, opts PutObjectOptions) (n int64, err error) {
// Check for largest object size allowed.
if size > int64(maxMultipartPutObjectSize) {
return 0, ErrEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName)
}
// NOTE: Streaming signature is not supported by GCS.
if s3utils.IsGoogleEndpoint(c.endpointURL) {
// Do not compute MD5 for Google Cloud Storage.
return c.putObjectNoChecksum(ctx, bucketName, objectName, reader, size, opts)
}
if c.overrideSignerType.IsV2() {
if size >= 0 && size < minPartSize {
return c.putObjectNoChecksum(ctx, bucketName, objectName, reader, size, opts)
}
return c.putObjectMultipart(ctx, bucketName, objectName, reader, size, opts)
}
if size < 0 {
return c.putObjectMultipartStreamNoLength(ctx, bucketName, objectName, reader, opts)
}
if size < minPartSize {
return c.putObjectNoChecksum(ctx, bucketName, objectName, reader, size, opts)
}
// For all sizes greater than 64MiB do multipart.
return c.putObjectMultipartStream(ctx, bucketName, objectName, reader, size, opts)
}
func (c Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketName, objectName string, reader io.Reader, opts PutObjectOptions) (n int64, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return 0, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return 0, err
}
// Total data read and written to server; should be equal to
// 'size' at the end of the call.
var totalUploadedSize int64
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, _, err := optimalPartInfo(-1)
if err != nil {
return 0, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return 0, err
}
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Part number always starts with '1'.
partNumber := 1
// Initialize parts uploaded map.
partsInfo := make(map[int]ObjectPart)
// Create a buffer.
buf := make([]byte, partSize)
defer debug.FreeOSMemory()
for partNumber <= totalPartsCount {
length, rErr := io.ReadFull(reader, buf)
if rErr == io.EOF && partNumber > 1 {
break
}
if rErr != nil && rErr != io.ErrUnexpectedEOF {
return 0, rErr
}
// Update progress reader appropriately to the latest offset
// as we read from the source.
rd := newHook(bytes.NewReader(buf[:length]), opts.Progress)
// Proceed to upload the part.
var objPart ObjectPart
objPart, err = c.uploadPart(ctx, bucketName, objectName, uploadID, rd, partNumber,
"", "", int64(length), opts.UserMetadata)
if err != nil {
return totalUploadedSize, err
}
// Save successfully uploaded part metadata.
partsInfo[partNumber] = objPart
// Save successfully uploaded size.
totalUploadedSize += int64(length)
// Increment part number.
partNumber++
// For unknown size, break on EOF; we do not have
// to upload up to totalPartsCount.
if rErr == io.EOF {
break
}
}
// Loop over total uploaded parts to save them in
// Parts array before completing the multipart request.
for i := 1; i < partNumber; i++ {
part, ok := partsInfo[i]
if !ok {
return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", i))
}
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
if _, err = c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload); err != nil {
return totalUploadedSize, err
}
// Return final size.
return totalUploadedSize, nil
}
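A usage sketch tying the dispatch logic above together; constructor and credentials are placeholders. An *os.File both seeks and reads at offsets, so sizes of 64MiB and up take the parallel ReaderAt path.
package main

import (
	"log"
	"os"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS", "SECRET", true)
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Open("backup.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	st, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}
	// Sizes of 64MiB and up take the multipart path; NumThreads bounds
	// the parallel part uploads in the ReaderAt-based uploader.
	n, err := client.PutObject("mybucket", "backup.tar.gz", f, st.Size(),
		minio.PutObjectOptions{ContentType: "application/gzip", NumThreads: 4})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded", n, "bytes")
}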

vendor/github.com/minio/minio-go/api-remove.go generated vendored Normal file

@@ -0,0 +1,290 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"io"
"net/http"
"net/url"
"github.com/minio/minio-go/pkg/s3utils"
)
// RemoveBucket deletes the bucket name.
//
// All objects (including all object versions and delete markers)
// in the bucket must be deleted before successfully attempting this request.
func (c Client) RemoveBucket(bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Execute DELETE on bucket.
resp, err := c.executeMethod(context.Background(), "DELETE", requestMetadata{
bucketName: bucketName,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
// Remove the location from cache on a successful delete.
c.bucketLocCache.Delete(bucketName)
return nil
}
// RemoveObject removes an object from a bucket.
func (c Client) RemoveObject(bucketName, objectName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Execute DELETE on objectName.
resp, err := c.executeMethod(context.Background(), "DELETE", requestMetadata{
bucketName: bucketName,
objectName: objectName,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
// if some unexpected error happened and max retry is reached, we want to let client know
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// DeleteObject always responds with http '204' even for
// objects which do not exist. So no need to handle them
// specifically.
return nil
}
// RemoveObjectError - container of Multi Delete S3 API error
type RemoveObjectError struct {
ObjectName string
Err error
}
// generateRemoveMultiObjectsRequest - generate the XML body for a Multi-Object Delete request
func generateRemoveMultiObjectsRequest(objects []string) []byte {
rmObjects := []deleteObject{}
for _, obj := range objects {
rmObjects = append(rmObjects, deleteObject{Key: obj})
}
xmlBytes, _ := xml.Marshal(deleteMultiObjects{Objects: rmObjects, Quiet: true})
return xmlBytes
}
// processRemoveMultiObjectsResponse - parse the Multi-Object Delete XML response
// and send a RemoveObjectError on errorCh for each object that failed to delete
func processRemoveMultiObjectsResponse(body io.Reader, objects []string, errorCh chan<- RemoveObjectError) {
// Parse multi delete XML response
rmResult := &deleteMultiObjectsResult{}
err := xmlDecoder(body, rmResult)
if err != nil {
errorCh <- RemoveObjectError{ObjectName: "", Err: err}
return
}
// Fill deletion that returned an error.
for _, obj := range rmResult.UnDeletedObjects {
errorCh <- RemoveObjectError{
ObjectName: obj.Key,
Err: ErrorResponse{
Code: obj.Code,
Message: obj.Message,
},
}
}
}
// RemoveObjects removes multiple objects from a bucket.
// The list of objects to remove is received from objectsCh.
// Remove failures are sent back via the error channel.
func (c Client) RemoveObjects(bucketName string, objectsCh <-chan string) <-chan RemoveObjectError {
errorCh := make(chan RemoveObjectError, 1)
// Validate if bucket name is valid.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(errorCh)
errorCh <- RemoveObjectError{
Err: err,
}
return errorCh
}
// Validate objects channel to be properly allocated.
if objectsCh == nil {
defer close(errorCh)
errorCh <- RemoveObjectError{
Err: ErrInvalidArgument("Objects channel cannot be nil"),
}
return errorCh
}
// Generate and call MultiDelete S3 requests based on entries received from objectsCh
go func(errorCh chan<- RemoveObjectError) {
maxEntries := 1000
finish := false
urlValues := make(url.Values)
urlValues.Set("delete", "")
// Close error channel when Multi delete finishes.
defer close(errorCh)
// Loop over entries by 1000 and call MultiDelete requests
for {
if finish {
break
}
count := 0
var batch []string
// Try to gather 1000 entries
for object := range objectsCh {
batch = append(batch, object)
if count++; count >= maxEntries {
break
}
}
if count == 0 {
// Multi Objects Delete API doesn't accept empty object list, quit immediately
break
}
if count < maxEntries {
// We didn't have 1000 entries, so this is the last batch
finish = true
}
// Generate remove multi objects XML request
removeBytes := generateRemoveMultiObjectsRequest(batch)
// Execute POST on bucket to perform the Multi-Object Delete.
resp, err := c.executeMethod(context.Background(), "POST", requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(removeBytes),
contentLength: int64(len(removeBytes)),
contentMD5Base64: sumMD5Base64(removeBytes),
contentSHA256Hex: sum256Hex(removeBytes),
})
if err != nil {
for _, b := range batch {
errorCh <- RemoveObjectError{ObjectName: b, Err: err}
}
continue
}
// Process multiobjects remove xml response
processRemoveMultiObjectsResponse(resp.Body, batch, errorCh)
closeResponse(resp)
}
}(errorCh)
return errorCh
}
// RemoveIncompleteUpload aborts a partially uploaded object.
func (c Client) RemoveIncompleteUpload(bucketName, objectName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Find multipart upload id of the object to be aborted.
uploadID, err := c.findUploadID(bucketName, objectName)
if err != nil {
return err
}
if uploadID != "" {
// Upload id found, abort the incomplete multipart upload.
err := c.abortMultipartUpload(context.Background(), bucketName, objectName, uploadID)
if err != nil {
return err
}
}
return nil
}
// abortMultipartUpload aborts a multipart upload for the given
// uploadID; all previously uploaded parts are deleted.
func (c Client) abortMultipartUpload(ctx context.Context, bucketName, objectName, uploadID string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Initialize url queries.
urlValues := make(url.Values)
urlValues.Set("uploadId", uploadID)
// Execute DELETE on multipart upload.
resp, err := c.executeMethod(ctx, "DELETE", requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent {
// Abort has no response body, handle it for any errors.
var errorResponse ErrorResponse
switch resp.StatusCode {
case http.StatusNotFound:
// This is needed specifically for abort and it cannot
// be folded into the default case.
errorResponse = ErrorResponse{
Code: "NoSuchUpload",
Message: "The specified multipart upload does not exist.",
BucketName: bucketName,
Key: objectName,
RequestID: resp.Header.Get("x-amz-request-id"),
HostID: resp.Header.Get("x-amz-id-2"),
Region: resp.Header.Get("x-amz-bucket-region"),
}
default:
return httpRespToErrorResponse(resp, bucketName, objectName)
}
return errorResponse
}
}
return nil
}
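A usage sketch for the batched delete; note that the caller owns closing objectsCh, while the library closes the returned error channel. Constructor and credentials are placeholders.
package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	client, err := minio.New("play.minio.io:9000", "ACCESS", "SECRET", true)
	if err != nil {
		log.Fatal(err)
	}
	objectsCh := make(chan string)
	go func() {
		defer close(objectsCh) // the caller must close objectsCh
		for _, obj := range []string{"a.txt", "b.txt", "c.txt"} {
			objectsCh <- obj
		}
	}()
	// Only failures are reported; the error channel is closed when done.
	for rErr := range client.RemoveObjects("mybucket", objectsCh) {
		log.Println("failed to remove", rErr.ObjectName, rErr.Err)
	}
}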

vendor/github.com/minio/minio-go/api-s3-datatypes.go generated vendored Normal file

@@ -0,0 +1,245 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"encoding/xml"
"time"
)
// listAllMyBucketsResult container for listBuckets response.
type listAllMyBucketsResult struct {
// Container for one or more buckets.
Buckets struct {
Bucket []BucketInfo
}
Owner owner
}
// owner container for bucket owner information.
type owner struct {
DisplayName string
ID string
}
// CommonPrefix container for prefix response.
type CommonPrefix struct {
Prefix string
}
// ListBucketV2Result container for listObjects response version 2.
type ListBucketV2Result struct {
// A response can contain CommonPrefixes only if you have
// specified a delimiter.
CommonPrefixes []CommonPrefix
// Metadata about each object returned.
Contents []ObjectInfo
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
MaxKeys int64
Name string
// Hold the token that will be sent in the next request to fetch the next group of keys
NextContinuationToken string
ContinuationToken string
Prefix string
// FetchOwner and StartAfter are currently not used
FetchOwner string
StartAfter string
}
// ListBucketResult container for listObjects response.
type ListBucketResult struct {
// A response can contain CommonPrefixes only if you have
// specified a delimiter.
CommonPrefixes []CommonPrefix
// Metadata about each object returned.
Contents []ObjectInfo
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
Marker string
MaxKeys int64
Name string
// When response is truncated (the IsTruncated element value in
// the response is true), you can use the key name in this field
// as marker in the subsequent request to get next set of objects.
// Object storage lists objects in alphabetical order. Note: this
// element is returned only if you have the delimiter request
// parameter specified. If the response does not include the NextMarker
// and it is truncated, you can use the value of the last Key in
// the response as the marker in the subsequent request to get the
// next set of object keys.
NextMarker string
Prefix string
}
// ListMultipartUploadsResult container for ListMultipartUploads response
type ListMultipartUploadsResult struct {
Bucket string
KeyMarker string
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
EncodingType string
MaxUploads int64
IsTruncated bool
Uploads []ObjectMultipartInfo `xml:"Upload"`
Prefix string
Delimiter string
// A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes []CommonPrefix
}
// initiator container for who initiated multipart upload.
type initiator struct {
ID string
DisplayName string
}
// copyObjectResult container for copy object response.
type copyObjectResult struct {
ETag string
LastModified time.Time // time string format "2006-01-02T15:04:05.000Z"
}
// ObjectPart container for particular part of an object.
type ObjectPart struct {
// Part number identifies the part.
PartNumber int
// Date and time the part was uploaded.
LastModified time.Time
// Entity tag returned when the part was uploaded, usually md5sum
// of the part.
ETag string
// Size of the uploaded part data.
Size int64
}
// ListObjectPartsResult container for ListObjectParts response.
type ListObjectPartsResult struct {
Bucket string
Key string
UploadID string `xml:"UploadId"`
Initiator initiator
Owner owner
StorageClass string
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
// Indicates whether the returned list of parts is truncated.
IsTruncated bool
ObjectParts []ObjectPart `xml:"Part"`
EncodingType string
}
// initiateMultipartUploadResult container for InitiateMultiPartUpload
// response.
type initiateMultipartUploadResult struct {
Bucket string
Key string
UploadID string `xml:"UploadId"`
}
// completeMultipartUploadResult container for completed multipart
// upload response.
type completeMultipartUploadResult struct {
Location string
Bucket string
Key string
ETag string
}
// CompletePart sub container lists individual part numbers and their
// md5sum, part of completeMultipartUpload.
type CompletePart struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Part" json:"-"`
// Part number identifies the part.
PartNumber int
ETag string
}
// completeMultipartUpload container for completing multipart upload.
type completeMultipartUpload struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUpload" json:"-"`
Parts []CompletePart `xml:"Part"`
}
// createBucketConfiguration container for bucket configuration.
type createBucketConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CreateBucketConfiguration" json:"-"`
Location string `xml:"LocationConstraint"`
}
// deleteObject container for Delete element in MultiObjects Delete XML request
type deleteObject struct {
Key string
VersionID string `xml:"VersionId,omitempty"`
}
// deletedObject container for Deleted element in MultiObjects Delete XML response
type deletedObject struct {
Key string
VersionID string `xml:"VersionId,omitempty"`
// These fields are ignored.
DeleteMarker bool
DeleteMarkerVersionID string
}
// nonDeletedObject container for Error element (failed deletion) in MultiObjects Delete XML response
type nonDeletedObject struct {
Key string
Code string
Message string
}
// deleteMultiObjects container for the MultiObjects Delete XML request
type deleteMultiObjects struct {
XMLName xml.Name `xml:"Delete"`
Quiet bool
Objects []deleteObject `xml:"Object"`
}
// deleteMultiObjectsResult container for the MultiObjects Delete XML response
type deleteMultiObjectsResult struct {
XMLName xml.Name `xml:"DeleteResult"`
DeletedObjects []deletedObject `xml:"Deleted"`
UnDeletedObjects []nonDeletedObject `xml:"Error"`
}
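// Illustrative sketch (not part of the vendored file): a deleteMultiObjects
// value marshals into the MultiObjects Delete request body, for example:
//
//   body, _ := xml.Marshal(deleteMultiObjects{
//       Quiet:   true,
//       Objects: []deleteObject{{Key: "a.txt"}, {Key: "b.txt"}},
//   })
//   // <Delete><Quiet>true</Quiet><Object><Key>a.txt</Key></Object>...</Delete>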

vendor/github.com/minio/minio-go/api-stat.go generated vendored Normal file

@@ -0,0 +1,178 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"net/http"
"strconv"
"strings"
"time"
"github.com/minio/minio-go/pkg/s3utils"
)
// BucketExists verifies if a bucket exists and you have permission to access it.
func (c Client) BucketExists(bucketName string) (bool, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return false, err
}
// Execute HEAD on bucketName.
resp, err := c.executeMethod(context.Background(), "HEAD", requestMetadata{
bucketName: bucketName,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
if ToErrorResponse(err).Code == "NoSuchBucket" {
return false, nil
}
return false, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return false, httpRespToErrorResponse(resp, bucketName, "")
}
}
return true, nil
}
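// Usage sketch (illustrative, not part of the vendored file); "clnt" and the
// bucket name are placeholders. BucketExists distinguishes a missing bucket
// (false, nil) from a transport or permission error (false, err):
//
//   found, err := clnt.BucketExists("mybucket")
//   if err != nil {
//       log.Fatalln(err)
//   }
//   if !found {
//       log.Println("bucket does not exist")
//   }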
// List of header keys to be filtered, usually
// from all S3 API http responses.
var defaultFilterKeys = []string{
"Connection",
"Transfer-Encoding",
"Accept-Ranges",
"Date",
"Server",
"Vary",
"x-amz-bucket-region",
"x-amz-request-id",
"x-amz-id-2",
// Add new headers to be ignored.
}
// Extract only necessary metadata header key/values by
// filtering them out with a list of custom header keys.
func extractObjMetadata(header http.Header) http.Header {
filterKeys := append([]string{
"ETag",
"Content-Length",
"Last-Modified",
"Content-Type",
}, defaultFilterKeys...)
return filterHeader(header, filterKeys)
}
// StatObject verifies if an object exists and you have permission to access it.
func (c Client) StatObject(bucketName, objectName string, opts StatObjectOptions) (ObjectInfo, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ObjectInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return ObjectInfo{}, err
}
return c.statObject(context.Background(), bucketName, objectName, opts)
}
// Lower level API for statObject supporting pre-conditions and range headers.
func (c Client) statObject(ctx context.Context, bucketName, objectName string, opts StatObjectOptions) (ObjectInfo, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ObjectInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return ObjectInfo{}, err
}
// Execute HEAD on objectName.
resp, err := c.executeMethod(ctx, "HEAD", requestMetadata{
bucketName: bucketName,
objectName: objectName,
contentSHA256Hex: emptySHA256Hex,
customHeader: opts.Header(),
})
defer closeResponse(resp)
if err != nil {
return ObjectInfo{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Trim off the odd double quotes from ETag in the beginning and end.
md5sum := strings.TrimPrefix(resp.Header.Get("ETag"), "\"")
md5sum = strings.TrimSuffix(md5sum, "\"")
// Parse content length if it exists.
var size int64 = -1
contentLengthStr := resp.Header.Get("Content-Length")
if contentLengthStr != "" {
size, err = strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
// Content-Length is not valid
return ObjectInfo{}, ErrorResponse{
Code: "InternalError",
Message: "Content-Length is invalid. " + reportIssue,
BucketName: bucketName,
Key: objectName,
RequestID: resp.Header.Get("x-amz-request-id"),
HostID: resp.Header.Get("x-amz-id-2"),
Region: resp.Header.Get("x-amz-bucket-region"),
}
}
}
// Parse Last-Modified, which has http time format.
date, err := time.Parse(http.TimeFormat, resp.Header.Get("Last-Modified"))
if err != nil {
return ObjectInfo{}, ErrorResponse{
Code: "InternalError",
Message: "Last-Modified time format is invalid. " + reportIssue,
BucketName: bucketName,
Key: objectName,
RequestID: resp.Header.Get("x-amz-request-id"),
HostID: resp.Header.Get("x-amz-id-2"),
Region: resp.Header.Get("x-amz-bucket-region"),
}
}
// Fetch content type if any present.
contentType := strings.TrimSpace(resp.Header.Get("Content-Type"))
if contentType == "" {
contentType = "application/octet-stream"
}
// Save object metadata info.
return ObjectInfo{
ETag: md5sum,
Key: objectName,
Size: size,
LastModified: date,
ContentType: contentType,
// Extract only the relevant header keys describing the object.
// The following function filters out a standard set of keys
// which are not part of object metadata.
Metadata: extractObjMetadata(resp.Header),
}, nil
}
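// Usage sketch (illustrative, not part of the vendored file), from a
// consumer's point of view; "clnt" and the bucket/object names are
// placeholders:
//
//   info, err := clnt.StatObject("mybucket", "myobject", minio.StatObjectOptions{})
//   if err != nil {
//       log.Fatalln(err)
//   }
//   log.Printf("%s is %d bytes, ETag %s", info.Key, info.Size, info.ETag)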

vendor/github.com/minio/minio-go/api.go generated vendored Normal file

@@ -0,0 +1,832 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"crypto/md5"
"crypto/sha256"
"errors"
"fmt"
"hash"
"io"
"io/ioutil"
"math/rand"
"net"
"net/http"
"net/http/httputil"
"net/url"
"os"
"runtime"
"strings"
"sync"
"time"
"github.com/minio/minio-go/pkg/credentials"
"github.com/minio/minio-go/pkg/s3signer"
"github.com/minio/minio-go/pkg/s3utils"
)
// Client implements Amazon S3 compatible methods.
type Client struct {
/// Standard options.
// Parsed endpoint url provided by the user.
endpointURL url.URL
// Holds various credential providers.
credsProvider *credentials.Credentials
// Custom signerType value overrides all credentials.
overrideSignerType credentials.SignatureType
// User supplied.
appInfo struct {
appName string
appVersion string
}
// Indicate whether we are using https or not
secure bool
// Needs allocation.
httpClient *http.Client
bucketLocCache *bucketLocationCache
// Advanced functionality.
isTraceEnabled bool
traceOutput io.Writer
// S3 specific accelerated endpoint.
s3AccelerateEndpoint string
// Region endpoint
region string
// Random seed.
random *rand.Rand
}
// Global constants.
const (
libraryName = "minio-go"
libraryVersion = "4.0.6"
)
// User Agent should always follow the below style.
// Please open an issue to discuss any new changes here.
//
// Minio (OS; ARCH) LIB/VER APP/VER
const (
libraryUserAgentPrefix = "Minio (" + runtime.GOOS + "; " + runtime.GOARCH + ") "
libraryUserAgent = libraryUserAgentPrefix + libraryName + "/" + libraryVersion
)
// NewV2 - instantiate minio client with Amazon S3 signature version
// '2' compatibility.
func NewV2(endpoint string, accessKeyID, secretAccessKey string, secure bool) (*Client, error) {
creds := credentials.NewStaticV2(accessKeyID, secretAccessKey, "")
clnt, err := privateNew(endpoint, creds, secure, "")
if err != nil {
return nil, err
}
clnt.overrideSignerType = credentials.SignatureV2
return clnt, nil
}
// NewV4 - instantiate minio client with Amazon S3 signature version
// '4' compatibility.
func NewV4(endpoint string, accessKeyID, secretAccessKey string, secure bool) (*Client, error) {
creds := credentials.NewStaticV4(accessKeyID, secretAccessKey, "")
clnt, err := privateNew(endpoint, creds, secure, "")
if err != nil {
return nil, err
}
clnt.overrideSignerType = credentials.SignatureV4
return clnt, nil
}
// New - instantiate minio client, adds automatic verification of signature.
func New(endpoint, accessKeyID, secretAccessKey string, secure bool) (*Client, error) {
creds := credentials.NewStaticV4(accessKeyID, secretAccessKey, "")
clnt, err := privateNew(endpoint, creds, secure, "")
if err != nil {
return nil, err
}
// Google cloud storage should be set to signature V2, force it if not.
if s3utils.IsGoogleEndpoint(clnt.endpointURL) {
clnt.overrideSignerType = credentials.SignatureV2
}
// If Amazon S3 set to signature v4.
if s3utils.IsAmazonEndpoint(clnt.endpointURL) {
clnt.overrideSignerType = credentials.SignatureV4
}
return clnt, nil
}
// NewWithCredentials - instantiate minio client with credentials provider
// for retrieving credentials from various credentials provider such as
// IAM, File, Env etc.
func NewWithCredentials(endpoint string, creds *credentials.Credentials, secure bool, region string) (*Client, error) {
return privateNew(endpoint, creds, secure, region)
}
// NewWithRegion - instantiate minio client, with region configured. Unlike New(),
// NewWithRegion avoids bucket-location lookup operations and it is slightly faster.
// Use this function if your application deals with a single region.
func NewWithRegion(endpoint, accessKeyID, secretAccessKey string, secure bool, region string) (*Client, error) {
creds := credentials.NewStaticV4(accessKeyID, secretAccessKey, "")
return privateNew(endpoint, creds, secure, region)
}
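// Usage sketch (illustrative, not part of the vendored file): the constructors
// differ only in how the signature version and region are chosen; endpoint and
// credentials below are placeholders.
//
//   clnt, err := minio.New("play.minio.io:9000", "YOUR-ACCESS", "YOUR-SECRET", true)
//   // or, pinning the region to avoid bucket-location lookups:
//   clnt, err = minio.NewWithRegion("s3.amazonaws.com", "YOUR-ACCESS", "YOUR-SECRET", true, "us-east-1")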
// lockedRandSource provides protected rand source, implements rand.Source interface.
type lockedRandSource struct {
lk sync.Mutex
src rand.Source
}
// Int63 returns a non-negative pseudo-random 63-bit integer as an int64.
func (r *lockedRandSource) Int63() (n int64) {
r.lk.Lock()
n = r.src.Int63()
r.lk.Unlock()
return
}
// Seed uses the provided seed value to initialize the generator to a
// deterministic state.
func (r *lockedRandSource) Seed(seed int64) {
r.lk.Lock()
r.src.Seed(seed)
r.lk.Unlock()
}
// getRegionFromURL - parse region from URL if present.
func getRegionFromURL(u url.URL) (region string) {
region = ""
if s3utils.IsGoogleEndpoint(u) {
return
} else if s3utils.IsAmazonChinaEndpoint(u) {
// For china specifically we need to set everything to
// cn-north-1 for now, there is no easier way until AWS S3
// provides a cleaner compatible API across "us-east-1" and
// China region.
return "cn-north-1"
} else if s3utils.IsAmazonGovCloudEndpoint(u) {
// For us-gov specifically we need to set everything to
// us-gov-west-1 for now, there is no easier way until AWS S3
// provides a cleaner compatible API across "us-east-1" and
// Gov cloud region.
return "us-gov-west-1"
}
parts := s3utils.AmazonS3Host.FindStringSubmatch(u.Host)
if len(parts) > 1 {
region = parts[1]
}
return region
}
func privateNew(endpoint string, creds *credentials.Credentials, secure bool, region string) (*Client, error) {
// construct endpoint.
endpointURL, err := getEndpointURL(endpoint, secure)
if err != nil {
return nil, err
}
// instantiate new Client.
clnt := new(Client)
// Save the credentials.
clnt.credsProvider = creds
// Remember whether we are using https or not
clnt.secure = secure
// Save endpoint URL, user agent for future uses.
clnt.endpointURL = *endpointURL
// Instantiate http client and bucket location cache.
clnt.httpClient = &http.Client{
Transport: defaultMinioTransport,
}
// Sets custom region, if region is empty bucket location cache is used automatically.
if region == "" {
region = getRegionFromURL(clnt.endpointURL)
}
clnt.region = region
// Instantiate bucket location cache.
clnt.bucketLocCache = newBucketLocationCache()
// Introduce a new locked random seed.
clnt.random = rand.New(&lockedRandSource{src: rand.NewSource(time.Now().UTC().UnixNano())})
// Return.
return clnt, nil
}
// SetAppInfo - add application details to user agent.
func (c *Client) SetAppInfo(appName string, appVersion string) {
// If app name and version are not set, we do not set a new user agent.
if appName != "" && appVersion != "" {
c.appInfo = struct {
appName string
appVersion string
}{}
c.appInfo.appName = appName
c.appInfo.appVersion = appVersion
}
}
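// Usage sketch (illustrative, not part of the vendored file):
//
//   clnt.SetAppInfo("myApp", "1.0.0")
//   // User-Agent becomes "Minio (<os>; <arch>) minio-go/4.0.6 myApp/1.0.0"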
// SetCustomTransport - set new custom transport.
func (c *Client) SetCustomTransport(customHTTPTransport http.RoundTripper) {
// Set this to override default transport
// ``http.DefaultTransport``.
//
// This transport is usually needed for debugging OR to add your
// own custom TLS certificates on the client transport, for custom
// CAs and certs which are not part of a standard certificate
// authority, follow this example:
//
// tr := &http.Transport{
// TLSClientConfig: &tls.Config{RootCAs: pool},
// DisableCompression: true,
// }
// api.SetTransport(tr)
//
if c.httpClient != nil {
c.httpClient.Transport = customHTTPTransport
}
}
// TraceOn - enable HTTP tracing.
func (c *Client) TraceOn(outputStream io.Writer) {
// if outputStream is nil then default to os.Stdout.
if outputStream == nil {
outputStream = os.Stdout
}
// Sets a new output stream.
c.traceOutput = outputStream
// Enable tracing.
c.isTraceEnabled = true
}
// TraceOff - disable HTTP tracing.
func (c *Client) TraceOff() {
// Disable tracing.
c.isTraceEnabled = false
}
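// Usage sketch (illustrative, not part of the vendored file): trace a single
// call to stderr, then disable tracing again.
//
//   clnt.TraceOn(os.Stderr)
//   _, err := clnt.BucketExists("mybucket")
//   clnt.TraceOff()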
// SetS3TransferAccelerate - turns s3 accelerated endpoint on or off for all your
// requests. This feature is specific to S3; for all other endpoints this
// function does nothing. To read further details on s3 transfer acceleration
// please visit -
// http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
func (c *Client) SetS3TransferAccelerate(accelerateEndpoint string) {
if s3utils.IsAmazonEndpoint(c.endpointURL) {
c.s3AccelerateEndpoint = accelerateEndpoint
}
}
// Hash materials provides relevant initialized hash algo writers
// based on the expected signature type.
//
// - For signature v4 request if the connection is insecure compute only sha256.
// - For signature v4 request if the connection is secure compute only md5.
// - For anonymous request compute md5.
func (c *Client) hashMaterials() (hashAlgos map[string]hash.Hash, hashSums map[string][]byte) {
hashSums = make(map[string][]byte)
hashAlgos = make(map[string]hash.Hash)
if c.overrideSignerType.IsV4() {
if c.secure {
hashAlgos["md5"] = md5.New()
} else {
hashAlgos["sha256"] = sha256.New()
}
} else {
if c.overrideSignerType.IsAnonymous() {
hashAlgos["md5"] = md5.New()
}
}
return hashAlgos, hashSums
}
// requestMetadata - is a container for all the values to make a request.
type requestMetadata struct {
// If set newRequest presigns the URL.
presignURL bool
// User supplied.
bucketName string
objectName string
queryValues url.Values
customHeader http.Header
expires int64
// Generated by our internal code.
bucketLocation string
contentBody io.Reader
contentLength int64
contentMD5Base64 string // carries base64 encoded md5sum
contentSHA256Hex string // carries hex encoded sha256sum
}
// dumpHTTP - dump HTTP request and response.
func (c Client) dumpHTTP(req *http.Request, resp *http.Response) error {
// Starts http dump.
_, err := fmt.Fprintln(c.traceOutput, "---------START-HTTP---------")
if err != nil {
return err
}
// Filter out Signature field from Authorization header.
origAuth := req.Header.Get("Authorization")
if origAuth != "" {
req.Header.Set("Authorization", redactSignature(origAuth))
}
// Only display request header.
reqTrace, err := httputil.DumpRequestOut(req, false)
if err != nil {
return err
}
// Write request to trace output.
_, err = fmt.Fprint(c.traceOutput, string(reqTrace))
if err != nil {
return err
}
// Only display response header.
var respTrace []byte
// For errors we make sure to dump response body as well.
if resp.StatusCode != http.StatusOK &&
resp.StatusCode != http.StatusPartialContent &&
resp.StatusCode != http.StatusNoContent {
respTrace, err = httputil.DumpResponse(resp, true)
if err != nil {
return err
}
} else {
// WORKAROUND for https://github.com/golang/go/issues/13942.
// httputil.DumpResponse does not print response headers for
// all successful calls which have response ContentLength set
// to zero. Keep this workaround until the above bug is fixed.
if resp.ContentLength == 0 {
var buffer bytes.Buffer
if err = resp.Header.Write(&buffer); err != nil {
return err
}
respTrace = buffer.Bytes()
respTrace = append(respTrace, []byte("\r\n")...)
} else {
respTrace, err = httputil.DumpResponse(resp, false)
if err != nil {
return err
}
}
}
// Write response to trace output.
_, err = fmt.Fprint(c.traceOutput, strings.TrimSuffix(string(respTrace), "\r\n"))
if err != nil {
return err
}
// Ends the http dump.
_, err = fmt.Fprintln(c.traceOutput, "---------END-HTTP---------")
if err != nil {
return err
}
// Returns success.
return nil
}
// do - execute http request.
func (c Client) do(req *http.Request) (*http.Response, error) {
var resp *http.Response
var err error
// Do the request in a loop in case an HTTP 307 is returned, since golang still
// doesn't handle this situation properly (https://github.com/golang/go/issues/7912)
for {
resp, err = c.httpClient.Do(req)
if err != nil {
// Handle this specifically for now until future Golang
// versions fix this issue properly.
urlErr, ok := err.(*url.Error)
if ok && strings.Contains(urlErr.Err.Error(), "EOF") {
return nil, &url.Error{
Op: urlErr.Op,
URL: urlErr.URL,
Err: errors.New("Connection closed by foreign host " + urlErr.URL + ". Retry again."),
}
}
return nil, err
}
// Redo the request with the new redirect url if http 307 is returned, quit the loop otherwise
if resp != nil && resp.StatusCode == http.StatusTemporaryRedirect {
newURL, err := url.Parse(resp.Header.Get("Location"))
if err != nil {
break
}
req.URL = newURL
} else {
break
}
}
// Response should never be nil here; report if that is the case.
if resp == nil {
msg := "Response is empty. " + reportIssue
return nil, ErrInvalidArgument(msg)
}
// If trace is enabled, dump http request and response.
if c.isTraceEnabled {
err = c.dumpHTTP(req, resp)
if err != nil {
return nil, err
}
}
return resp, nil
}
// List of success status.
var successStatus = []int{
http.StatusOK,
http.StatusNoContent,
http.StatusPartialContent,
}
// executeMethod - instantiates a given method, and retries the
// request upon any error up to maxRetries attempts in a binomially
// delayed manner using a standard back off algorithm.
func (c Client) executeMethod(ctx context.Context, method string, metadata requestMetadata) (res *http.Response, err error) {
var isRetryable bool // Indicates if request can be retried.
var bodySeeker io.Seeker // Extracted seeker from io.Reader.
var reqRetry = MaxRetry // Indicates how many times we can retry the request
if metadata.contentBody != nil {
// Check if body is seekable then it is retryable.
bodySeeker, isRetryable = metadata.contentBody.(io.Seeker)
switch bodySeeker {
case os.Stdin, os.Stdout, os.Stderr:
isRetryable = false
}
// Retry only when reader is seekable
if !isRetryable {
reqRetry = 1
}
// Figure out if the body can be closed - if yes
// we will definitely close it upon the function
// return.
bodyCloser, ok := metadata.contentBody.(io.Closer)
if ok {
defer bodyCloser.Close()
}
}
// Create a done channel to control 'newRetryTimer' go routine.
doneCh := make(chan struct{}, 1)
// Indicate to our routine to exit cleanly upon return.
defer close(doneCh)
// Blank identifier is kept here on purpose since 'range' without
// blank identifiers is only supported since go1.4
// https://golang.org/doc/go1.4#forrange.
for range c.newRetryTimer(reqRetry, DefaultRetryUnit, DefaultRetryCap, MaxJitter, doneCh) {
// Retry executes the following function body if the request has an
// error until maxRetries have been exhausted; retry attempts are
// performed after waiting for a given period of time in a
// binomial fashion.
if isRetryable {
// Seek back to beginning for each attempt.
if _, err = bodySeeker.Seek(0, 0); err != nil {
// If seek failed, no need to retry.
return nil, err
}
}
// Instantiate a new request.
var req *http.Request
req, err = c.newRequest(method, metadata)
if err != nil {
errResponse := ToErrorResponse(err)
if isS3CodeRetryable(errResponse.Code) {
continue // Retry.
}
return nil, err
}
// Add context to request
req = req.WithContext(ctx)
// Initiate the request.
res, err = c.do(req)
if err != nil {
// Verify if the network error is retryable.
if isNetErrorRetryable(err) {
continue // Retry.
}
// For other errors, return here; no need to retry.
return nil, err
}
// For any known successful http status, return quickly.
for _, httpStatus := range successStatus {
if httpStatus == res.StatusCode {
return res, nil
}
}
// Read the body to be saved later.
errBodyBytes, err := ioutil.ReadAll(res.Body)
// res.Body should be closed
closeResponse(res)
if err != nil {
return nil, err
}
// Save the body.
errBodySeeker := bytes.NewReader(errBodyBytes)
res.Body = ioutil.NopCloser(errBodySeeker)
// For errors, verify if it is retryable, otherwise fail quickly.
errResponse := ToErrorResponse(httpRespToErrorResponse(res, metadata.bucketName, metadata.objectName))
// Save the body back again.
errBodySeeker.Seek(0, 0) // Seek back to starting point.
res.Body = ioutil.NopCloser(errBodySeeker)
// If the bucket region is set in the error response and the error
// code dictates an invalid region, we can retry the request
// with the new region.
//
// Additionally we should only retry if bucketLocation and custom
// region are empty.
if metadata.bucketLocation == "" && c.region == "" {
if errResponse.Code == "AuthorizationHeaderMalformed" || errResponse.Code == "InvalidRegion" {
if metadata.bucketName != "" && errResponse.Region != "" {
// Gather Cached location only if bucketName is present.
if _, ok := c.bucketLocCache.Get(metadata.bucketName); ok {
c.bucketLocCache.Set(metadata.bucketName, errResponse.Region)
continue // Retry.
}
}
}
}
// Verify if error response code is retryable.
if isS3CodeRetryable(errResponse.Code) {
continue // Retry.
}
// Verify if http status code is retryable.
if isHTTPStatusRetryable(res.StatusCode) {
continue // Retry.
}
// For all other cases break out of the retry loop.
break
}
return res, err
}
// newRequest - instantiate a new HTTP request for a given method.
func (c Client) newRequest(method string, metadata requestMetadata) (req *http.Request, err error) {
// If no method is supplied default to 'POST'.
if method == "" {
method = "POST"
}
location := metadata.bucketLocation
if location == "" {
if metadata.bucketName != "" {
// Gather location only if bucketName is present.
location, err = c.getBucketLocation(metadata.bucketName)
if err != nil {
if ToErrorResponse(err).Code != "AccessDenied" {
return nil, err
}
}
// Upon AccessDenied error on fetching bucket location, default
// to possible locations based on endpoint URL. This can usually
// happen when GetBucketLocation() is disabled using IAM policies.
}
if location == "" {
location = getDefaultLocation(c.endpointURL, c.region)
}
}
// Construct a new target URL.
targetURL, err := c.makeTargetURL(metadata.bucketName, metadata.objectName, location, metadata.queryValues)
if err != nil {
return nil, err
}
// Initialize a new HTTP request for the method.
req, err = http.NewRequest(method, targetURL.String(), nil)
if err != nil {
return nil, err
}
// Get credentials from the configured credentials provider.
value, err := c.credsProvider.Get()
if err != nil {
return nil, err
}
var (
signerType = value.SignerType
accessKeyID = value.AccessKeyID
secretAccessKey = value.SecretAccessKey
sessionToken = value.SessionToken
)
// If a custom signer is set then override the behavior.
if c.overrideSignerType != credentials.SignatureDefault {
signerType = c.overrideSignerType
}
// If signerType returned by credentials helper is anonymous,
// then do not sign regardless of signerType override.
if value.SignerType == credentials.SignatureAnonymous {
signerType = credentials.SignatureAnonymous
}
// Generate presign url if needed, return right here.
if metadata.expires != 0 && metadata.presignURL {
if signerType.IsAnonymous() {
return nil, ErrInvalidArgument("Presigned URLs cannot be generated with anonymous credentials.")
}
if signerType.IsV2() {
// Presign URL with signature v2.
req = s3signer.PreSignV2(*req, accessKeyID, secretAccessKey, metadata.expires)
} else if signerType.IsV4() {
// Presign URL with signature v4.
req = s3signer.PreSignV4(*req, accessKeyID, secretAccessKey, sessionToken, location, metadata.expires)
}
return req, nil
}
// Set 'User-Agent' header for the request.
c.setUserAgent(req)
// Set all headers.
for k, v := range metadata.customHeader {
req.Header.Set(k, v[0])
}
// Go net/http notoriously closes the request body.
// - The request Body, if non-nil, will be closed by the underlying Transport, even on errors.
// This can cause underlying *os.File seekers to fail, avoid that
// by making sure to wrap the closer as a nop.
if metadata.contentLength == 0 {
req.Body = nil
} else {
req.Body = ioutil.NopCloser(metadata.contentBody)
}
// Set incoming content-length.
req.ContentLength = metadata.contentLength
if req.ContentLength <= -1 {
// For unknown content length, we upload using transfer-encoding: chunked.
req.TransferEncoding = []string{"chunked"}
}
// set md5Sum for content protection.
if len(metadata.contentMD5Base64) > 0 {
req.Header.Set("Content-Md5", metadata.contentMD5Base64)
}
// For anonymous requests just return.
if signerType.IsAnonymous() {
return req, nil
}
switch {
case signerType.IsV2():
// Add signature version '2' authorization header.
req = s3signer.SignV2(*req, accessKeyID, secretAccessKey)
case metadata.objectName != "" && method == "PUT" && metadata.customHeader.Get("X-Amz-Copy-Source") == "" && !c.secure:
// Streaming signature is used by default for a PUT object request.
// Additionally we check whether the initialized client is secure; if it is,
// we don't need to perform streaming signature.
req = s3signer.StreamingSignV4(req, accessKeyID,
secretAccessKey, sessionToken, location, metadata.contentLength, time.Now().UTC())
default:
// Set sha256 sum for signature calculation only with signature version '4'.
shaHeader := unsignedPayload
if metadata.contentSHA256Hex != "" {
shaHeader = metadata.contentSHA256Hex
}
req.Header.Set("X-Amz-Content-Sha256", shaHeader)
// Add signature version '4' authorization header.
req = s3signer.SignV4(*req, accessKeyID, secretAccessKey, sessionToken, location)
}
// Return request.
return req, nil
}
// set User agent.
func (c Client) setUserAgent(req *http.Request) {
req.Header.Set("User-Agent", libraryUserAgent)
if c.appInfo.appName != "" && c.appInfo.appVersion != "" {
req.Header.Set("User-Agent", libraryUserAgent+" "+c.appInfo.appName+"/"+c.appInfo.appVersion)
}
}
// makeTargetURL makes a new target url.
func (c Client) makeTargetURL(bucketName, objectName, bucketLocation string, queryValues url.Values) (*url.URL, error) {
host := c.endpointURL.Host
// For Amazon S3 endpoint, try to fetch location based endpoint.
if s3utils.IsAmazonEndpoint(c.endpointURL) {
if c.s3AccelerateEndpoint != "" && bucketName != "" {
// http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
// Disable transfer acceleration for non-compliant bucket names.
if strings.Contains(bucketName, ".") {
return nil, ErrTransferAccelerationBucket(bucketName)
}
// If transfer acceleration is requested set new host.
// For more details about enabling transfer acceleration read here.
// http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
host = c.s3AccelerateEndpoint
} else {
// Do not change the host if the endpoint URL is a FIPS S3 endpoint.
if !s3utils.IsAmazonFIPSGovCloudEndpoint(c.endpointURL) {
// Fetch new host based on the bucket location.
host = getS3Endpoint(bucketLocation)
}
}
}
// Save scheme.
scheme := c.endpointURL.Scheme
// Strip port 80 and 443 so we won't send these ports in Host header.
// The reason is that browsers and curl automatically remove :80 and :443
// from generated presigned urls, which then causes a signature mismatch error.
if h, p, err := net.SplitHostPort(host); err == nil {
if scheme == "http" && p == "80" || scheme == "https" && p == "443" {
host = h
}
}
urlStr := scheme + "://" + host + "/"
// Make URL only if bucketName is available, otherwise use the
// endpoint URL.
if bucketName != "" {
// Determine if the target url will have buckets which support virtual host.
isVirtualHostStyle := s3utils.IsVirtualHostSupported(c.endpointURL, bucketName)
// If endpoint supports virtual host style use that always.
// Currently only S3 and Google Cloud Storage would support
// virtual host style.
if isVirtualHostStyle {
urlStr = scheme + "://" + bucketName + "." + host + "/"
if objectName != "" {
urlStr = urlStr + s3utils.EncodePath(objectName)
}
} else {
// If not fall back to using path style.
urlStr = urlStr + bucketName + "/"
if objectName != "" {
urlStr = urlStr + s3utils.EncodePath(objectName)
}
}
}
// If there are any query values, add them to the end.
if len(queryValues) > 0 {
urlStr = urlStr + "?" + s3utils.QueryEncode(queryValues)
}
u, err := url.Parse(urlStr)
if err != nil {
return nil, err
}
return u, nil
}

vendor/github.com/minio/minio-go/bucket-cache.go generated vendored Normal file

@@ -0,0 +1,219 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"net/http"
"net/url"
"path"
"sync"
"github.com/minio/minio-go/pkg/credentials"
"github.com/minio/minio-go/pkg/s3signer"
"github.com/minio/minio-go/pkg/s3utils"
)
// bucketLocationCache - Provides simple mechanism to hold bucket
// locations in memory.
type bucketLocationCache struct {
// mutex is used for handling the concurrent
// read/write requests for cache.
sync.RWMutex
// items holds the cached bucket locations.
items map[string]string
}
// newBucketLocationCache - Provides a new bucket location cache to be
// used internally with the client object.
func newBucketLocationCache() *bucketLocationCache {
return &bucketLocationCache{
items: make(map[string]string),
}
}
// Get - Returns a value of a given key if it exists.
func (r *bucketLocationCache) Get(bucketName string) (location string, ok bool) {
r.RLock()
defer r.RUnlock()
location, ok = r.items[bucketName]
return
}
// Set - Will persist a value into cache.
func (r *bucketLocationCache) Set(bucketName string, location string) {
r.Lock()
defer r.Unlock()
r.items[bucketName] = location
}
// Delete - Deletes a bucket name from cache.
func (r *bucketLocationCache) Delete(bucketName string) {
r.Lock()
defer r.Unlock()
delete(r.items, bucketName)
}
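// Illustrative sketch (not part of the vendored file): the cache is a plain
// mutex-guarded map, so it is safe to use from concurrent goroutines.
//
//   cache := newBucketLocationCache()
//   cache.Set("mybucket", "us-west-2")
//   if location, ok := cache.Get("mybucket"); ok {
//       _ = location // "us-west-2"
//   }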
// GetBucketLocation - get location for the bucket name from the location
// cache; if not found, fetch it freshly by making a new request.
func (c Client) GetBucketLocation(bucketName string) (string, error) {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
return c.getBucketLocation(bucketName)
}
// getBucketLocation - Get location for the bucketName from the location map
// cache; if not found, fetch it freshly by making a new request.
func (c Client) getBucketLocation(bucketName string) (string, error) {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
// If a region is set then there is no need to fetch the bucket location.
if c.region != "" {
return c.region, nil
}
if location, ok := c.bucketLocCache.Get(bucketName); ok {
return location, nil
}
// Initialize a new request.
req, err := c.getBucketLocationRequest(bucketName)
if err != nil {
return "", err
}
// Initiate the request.
resp, err := c.do(req)
defer closeResponse(resp)
if err != nil {
return "", err
}
location, err := processBucketLocationResponse(resp, bucketName)
if err != nil {
return "", err
}
c.bucketLocCache.Set(bucketName, location)
return location, nil
}
// processes the getBucketLocation http response from the server.
func processBucketLocationResponse(resp *http.Response, bucketName string) (bucketLocation string, err error) {
if resp != nil {
if resp.StatusCode != http.StatusOK {
err = httpRespToErrorResponse(resp, bucketName, "")
errResp := ToErrorResponse(err)
// For access denied error, it could be an anonymous
// request. Move forward and let the top level callers
// succeed if possible based on their policy.
if errResp.Code == "AccessDenied" {
return "us-east-1", nil
}
return "", err
}
}
// Extract location.
var locationConstraint string
err = xmlDecoder(resp.Body, &locationConstraint)
if err != nil {
return "", err
}
location := locationConstraint
// If location is empty it will be 'us-east-1'.
if location == "" {
location = "us-east-1"
}
// Location can be 'EU' convert it to meaningful 'eu-west-1'.
if location == "EU" {
location = "eu-west-1"
}
// Return the location; the caller saves it into the cache.
return location, nil
}
// getBucketLocationRequest - Wrapper creates a new getBucketLocation request.
func (c Client) getBucketLocationRequest(bucketName string) (*http.Request, error) {
// Set location query.
urlValues := make(url.Values)
urlValues.Set("location", "")
// Set get bucket location always as path style.
targetURL := c.endpointURL
targetURL.Path = path.Join(bucketName, "") + "/"
targetURL.RawQuery = urlValues.Encode()
// Get a new HTTP request for the method.
req, err := http.NewRequest("GET", targetURL.String(), nil)
if err != nil {
return nil, err
}
// Set UserAgent for the request.
c.setUserAgent(req)
// Get credentials from the configured credentials provider.
value, err := c.credsProvider.Get()
if err != nil {
return nil, err
}
var (
signerType = value.SignerType
accessKeyID = value.AccessKeyID
secretAccessKey = value.SecretAccessKey
sessionToken = value.SessionToken
)
// If a custom signer is set then override the behavior.
if c.overrideSignerType != credentials.SignatureDefault {
signerType = c.overrideSignerType
}
// If signerType returned by credentials helper is anonymous,
// then do not sign regardless of signerType override.
if value.SignerType == credentials.SignatureAnonymous {
signerType = credentials.SignatureAnonymous
}
if signerType.IsAnonymous() {
return req, nil
}
if signerType.IsV2() {
req = s3signer.SignV2(*req, accessKeyID, secretAccessKey)
return req, nil
}
// Set sha256 sum for signature calculation only with signature version '4'.
contentSha256 := emptySHA256Hex
if c.secure {
contentSha256 = unsignedPayload
}
req.Header.Set("X-Amz-Content-Sha256", contentSha256)
req = s3signer.SignV4(*req, accessKeyID, secretAccessKey, sessionToken, "us-east-1")
return req, nil
}

vendor/github.com/minio/minio-go/bucket-notification.go generated vendored Normal file

@@ -0,0 +1,232 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"encoding/xml"
"reflect"
)
// NotificationEventType is an S3 notification event associated with the bucket notification configuration
type NotificationEventType string
// The role of all event types is described in:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-event-types-and-destinations
const (
ObjectCreatedAll NotificationEventType = "s3:ObjectCreated:*"
ObjectCreatedPut = "s3:ObjectCreated:Put"
ObjectCreatedPost = "s3:ObjectCreated:Post"
ObjectCreatedCopy = "s3:ObjectCreated:Copy"
ObjectCreatedCompleteMultipartUpload = "s3:ObjectCreated:CompleteMultipartUpload"
ObjectAccessedGet = "s3:ObjectAccessed:Get"
ObjectAccessedHead = "s3:ObjectAccessed:Head"
ObjectAccessedAll = "s3:ObjectAccessed:*"
ObjectRemovedAll = "s3:ObjectRemoved:*"
ObjectRemovedDelete = "s3:ObjectRemoved:Delete"
ObjectRemovedDeleteMarkerCreated = "s3:ObjectRemoved:DeleteMarkerCreated"
ObjectReducedRedundancyLostObject = "s3:ReducedRedundancyLostObject"
)
// FilterRule - child of S3Key, a tag in the notification xml which
// carries suffix/prefix filters
type FilterRule struct {
Name string `xml:"Name"`
Value string `xml:"Value"`
}
// S3Key - child of Filter, a tag in the notification xml which
// carries suffix/prefix filters
type S3Key struct {
FilterRules []FilterRule `xml:"FilterRule,omitempty"`
}
// Filter - a tag in the notification xml structure which carries
// suffix/prefix filters
type Filter struct {
S3Key S3Key `xml:"S3Key,omitempty"`
}
// Arn - holds ARN information that will be sent to the web service,
// ARN description can be found in http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
type Arn struct {
Partition string
Service string
Region string
AccountID string
Resource string
}
// NewArn creates new ARN based on the given partition, service, region, account id and resource
func NewArn(partition, service, region, accountID, resource string) Arn {
return Arn{Partition: partition,
Service: service,
Region: region,
AccountID: accountID,
Resource: resource}
}
// String returns the string format of the ARN
func (arn Arn) String() string {
return "arn:" + arn.Partition + ":" + arn.Service + ":" + arn.Region + ":" + arn.AccountID + ":" + arn.Resource
}
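// Illustrative sketch (not part of the vendored file); the field values are
// placeholders:
//
//   arn := NewArn("aws", "sns", "us-east-1", "123456789012", "mytopic")
//   // arn.String() == "arn:aws:sns:us-east-1:123456789012:mytopic"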
// NotificationConfig - represents one single notification configuration
// such as topic, queue or lambda configuration.
type NotificationConfig struct {
ID string `xml:"Id,omitempty"`
Arn Arn `xml:"-"`
Events []NotificationEventType `xml:"Event"`
Filter *Filter `xml:"Filter,omitempty"`
}
// NewNotificationConfig creates one notification config and sets the given ARN
func NewNotificationConfig(arn Arn) NotificationConfig {
return NotificationConfig{Arn: arn}
}
// AddEvents adds one event to the current notification config
func (t *NotificationConfig) AddEvents(events ...NotificationEventType) {
t.Events = append(t.Events, events...)
}
// AddFilterSuffix sets the suffix configuration to the current notification config
func (t *NotificationConfig) AddFilterSuffix(suffix string) {
if t.Filter == nil {
t.Filter = &Filter{}
}
newFilterRule := FilterRule{Name: "suffix", Value: suffix}
// Replace any existing suffix rule, otherwise add it to the list
for index := range t.Filter.S3Key.FilterRules {
if t.Filter.S3Key.FilterRules[index].Name == "suffix" {
t.Filter.S3Key.FilterRules[index] = newFilterRule
return
}
}
t.Filter.S3Key.FilterRules = append(t.Filter.S3Key.FilterRules, newFilterRule)
}
// AddFilterPrefix sets the prefix configuration to the current notification config
func (t *NotificationConfig) AddFilterPrefix(prefix string) {
if t.Filter == nil {
t.Filter = &Filter{}
}
newFilterRule := FilterRule{Name: "prefix", Value: prefix}
// Replace any existing prefix rule, otherwise add it to the list
for index := range t.Filter.S3Key.FilterRules {
if t.Filter.S3Key.FilterRules[index].Name == "prefix" {
t.Filter.S3Key.FilterRules[index] = newFilterRule
return
}
}
t.Filter.S3Key.FilterRules = append(t.Filter.S3Key.FilterRules, newFilterRule)
}
// TopicConfig carries one single topic notification configuration
type TopicConfig struct {
NotificationConfig
Topic string `xml:"Topic"`
}
// QueueConfig carries one single queue notification configuration
type QueueConfig struct {
NotificationConfig
Queue string `xml:"Queue"`
}
// LambdaConfig carries one single cloudfunction notification configuration
type LambdaConfig struct {
NotificationConfig
Lambda string `xml:"CloudFunction"`
}
// BucketNotification - the struct that represents the whole XML to be sent to the web service
type BucketNotification struct {
XMLName xml.Name `xml:"NotificationConfiguration"`
LambdaConfigs []LambdaConfig `xml:"CloudFunctionConfiguration"`
TopicConfigs []TopicConfig `xml:"TopicConfiguration"`
QueueConfigs []QueueConfig `xml:"QueueConfiguration"`
}
// AddTopic adds a given topic config to the general bucket notification config
func (b *BucketNotification) AddTopic(topicConfig NotificationConfig) {
newTopicConfig := TopicConfig{NotificationConfig: topicConfig, Topic: topicConfig.Arn.String()}
for _, n := range b.TopicConfigs {
if reflect.DeepEqual(n, newTopicConfig) {
// Avoid adding duplicated entry
return
}
}
b.TopicConfigs = append(b.TopicConfigs, newTopicConfig)
}
// AddQueue adds a given queue config to the general bucket notification config
func (b *BucketNotification) AddQueue(queueConfig NotificationConfig) {
newQueueConfig := QueueConfig{NotificationConfig: queueConfig, Queue: queueConfig.Arn.String()}
for _, n := range b.QueueConfigs {
if reflect.DeepEqual(n, newQueueConfig) {
// Avoid adding duplicated entry
return
}
}
b.QueueConfigs = append(b.QueueConfigs, newQueueConfig)
}
// AddLambda adds a given lambda config to the general bucket notification config
func (b *BucketNotification) AddLambda(lambdaConfig NotificationConfig) {
newLambdaConfig := LambdaConfig{NotificationConfig: lambdaConfig, Lambda: lambdaConfig.Arn.String()}
for _, n := range b.LambdaConfigs {
if reflect.DeepEqual(n, newLambdaConfig) {
// Avoid adding duplicated entry
return
}
}
b.LambdaConfigs = append(b.LambdaConfigs, newLambdaConfig)
}
// RemoveTopicByArn removes all topic configurations that match the exact specified ARN
func (b *BucketNotification) RemoveTopicByArn(arn Arn) {
var topics []TopicConfig
for _, topic := range b.TopicConfigs {
if topic.Topic != arn.String() {
topics = append(topics, topic)
}
}
b.TopicConfigs = topics
}
// RemoveQueueByArn removes all queue configurations that match the exact specified ARN
func (b *BucketNotification) RemoveQueueByArn(arn Arn) {
var queues []QueueConfig
for _, queue := range b.QueueConfigs {
if queue.Queue != arn.String() {
queues = append(queues, queue)
}
}
b.QueueConfigs = queues
}
// RemoveLambdaByArn removes all lambda configurations that match the exact specified ARN
func (b *BucketNotification) RemoveLambdaByArn(arn Arn) {
var lambdas []LambdaConfig
for _, lambda := range b.LambdaConfigs {
if lambda.Lambda != arn.String() {
lambdas = append(lambdas, lambda)
}
}
b.LambdaConfigs = lambdas
}
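// Illustrative sketch (not part of the vendored file): composing a bucket
// notification that sends all ObjectCreated events for ".jpg" keys to a
// queue; the ARN fields are placeholders.
//
//   arn := NewArn("aws", "sqs", "us-east-1", "123456789012", "myqueue")
//   cfg := NewNotificationConfig(arn)
//   cfg.AddEvents(ObjectCreatedAll)
//   cfg.AddFilterSuffix(".jpg")
//   var bn BucketNotification
//   bn.AddQueue(cfg)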

vendor/github.com/minio/minio-go/constants.go generated vendored Normal file

@@ -0,0 +1,70 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
/// Multipart upload defaults.
// absMinPartSize - absolute minimum part size (5 MiB) below which
// a part in a multipart upload may not be uploaded.
const absMinPartSize = 1024 * 1024 * 5
// minPartSize - minimum part size 64MiB per object after which
// putObject behaves internally as multipart.
const minPartSize = 1024 * 1024 * 64
// copyPartSize - default (and maximum) part size to copy in a
// copy-object request (5GiB)
const copyPartSize = 1024 * 1024 * 1024 * 5
// maxPartsCount - maximum number of parts for a single multipart session.
const maxPartsCount = 10000
// maxPartSize - maximum part size 5GiB for a single multipart upload
// operation.
const maxPartSize = 1024 * 1024 * 1024 * 5
// maxSinglePutObjectSize - maximum size 5GiB of object per PUT
// operation.
const maxSinglePutObjectSize = 1024 * 1024 * 1024 * 5
// maxMultipartPutObjectSize - maximum size 5TiB of object for
// Multipart operation.
const maxMultipartPutObjectSize = 1024 * 1024 * 1024 * 1024 * 5
// unsignedPayload - value to be set to X-Amz-Content-Sha256 header when
// we don't want to sign the request payload
const unsignedPayload = "UNSIGNED-PAYLOAD"
// Total number of parallel workers used for multipart operation.
const totalWorkers = 4
// Signature related constants.
const (
signV4Algorithm = "AWS4-HMAC-SHA256"
iso8601DateFormat = "20060102T150405Z"
)
// Encryption headers stored along with the object.
const (
amzHeaderIV = "X-Amz-Meta-X-Amz-Iv"
amzHeaderKey = "X-Amz-Meta-X-Amz-Key"
amzHeaderMatDesc = "X-Amz-Meta-X-Amz-Matdesc"
)
// Storage class header constant.
const amzStorageClass = "X-Amz-Storage-Class"
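// Worked example (illustrative): at the maxMultipartPutObjectSize limit of
// 5 TiB, splitting an object across all maxPartsCount (10000) parts requires
// parts of at least 5,497,558,138,880 / 10000 ≈ 550 MB each, well above the
// 64 MiB minPartSize, so part size must scale up with object size.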

vendor/github.com/minio/minio-go/core.go generated vendored Normal file

@@ -0,0 +1,154 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"strings"
"github.com/minio/minio-go/pkg/policy"
)
// Core - Inherits Client and adds new methods to expose the low level S3 APIs.
type Core struct {
*Client
}
// NewCore - Returns a new initialized Core client. This Core client should
// only be used under special conditions, such as needing access to lower-level
// primitives in order to write your own wrappers.
func NewCore(endpoint string, accessKeyID, secretAccessKey string, secure bool) (*Core, error) {
var s3Client Core
client, err := NewV4(endpoint, accessKeyID, secretAccessKey, secure)
if err != nil {
return nil, err
}
s3Client.Client = client
return &s3Client, nil
}
// ListObjects - List all the objects at a prefix; optionally, with marker and
// delimiter you can further filter the results.
func (c Core) ListObjects(bucket, prefix, marker, delimiter string, maxKeys int) (result ListBucketResult, err error) {
return c.listObjectsQuery(bucket, prefix, marker, delimiter, maxKeys)
}
// ListObjectsV2 - Lists all the objects at a prefix, similar to ListObjects() but uses
// continuationToken instead of marker to further filter the results.
func (c Core) ListObjectsV2(bucketName, objectPrefix, continuationToken string, fetchOwner bool, delimiter string, maxkeys int) (ListBucketV2Result, error) {
return c.listObjectsV2Query(bucketName, objectPrefix, continuationToken, fetchOwner, delimiter, maxkeys)
}
// CopyObject - copies an object from source object to destination object on server side.
func (c Core) CopyObject(sourceBucket, sourceObject, destBucket, destObject string, metadata map[string]string) (ObjectInfo, error) {
return c.copyObjectDo(context.Background(), sourceBucket, sourceObject, destBucket, destObject, metadata)
}
// CopyObjectPart - creates a part in a multipart upload by copying (a
// part of) an existing object.
func (c Core) CopyObjectPart(srcBucket, srcObject, destBucket, destObject string, uploadID string,
partID int, startOffset, length int64, metadata map[string]string) (p CompletePart, err error) {
return c.copyObjectPartDo(context.Background(), srcBucket, srcObject, destBucket, destObject, uploadID,
partID, startOffset, length, metadata)
}
// PutObject - Upload object. Uploads using single PUT call.
func (c Core) PutObject(bucket, object string, data io.Reader, size int64, md5Base64, sha256Hex string, metadata map[string]string) (ObjectInfo, error) {
opts := PutObjectOptions{}
m := make(map[string]string)
for k, v := range metadata {
if strings.ToLower(k) == "content-encoding" {
opts.ContentEncoding = v
} else if strings.ToLower(k) == "content-disposition" {
opts.ContentDisposition = v
} else if strings.ToLower(k) == "content-type" {
opts.ContentType = v
} else if strings.ToLower(k) == "cache-control" {
opts.CacheControl = v
} else {
m[k] = metadata[k]
}
}
opts.UserMetadata = m
return c.putObjectDo(context.Background(), bucket, object, data, md5Base64, sha256Hex, size, opts)
}
// NewMultipartUpload - Initiates new multipart upload and returns the new uploadID.
func (c Core) NewMultipartUpload(bucket, object string, opts PutObjectOptions) (uploadID string, err error) {
result, err := c.initiateMultipartUpload(context.Background(), bucket, object, opts)
return result.UploadID, err
}
// ListMultipartUploads - List incomplete uploads.
func (c Core) ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, delimiter string, maxUploads int) (result ListMultipartUploadsResult, err error) {
return c.listMultipartUploadsQuery(bucket, keyMarker, uploadIDMarker, prefix, delimiter, maxUploads)
}
// PutObjectPart - Upload an object part.
func (c Core) PutObjectPart(bucket, object, uploadID string, partID int, data io.Reader, size int64, md5Base64, sha256Hex string) (ObjectPart, error) {
return c.PutObjectPartWithMetadata(bucket, object, uploadID, partID, data, size, md5Base64, sha256Hex, nil)
}
// PutObjectPartWithMetadata - upload an object part with additional request metadata.
func (c Core) PutObjectPartWithMetadata(bucket, object, uploadID string, partID int, data io.Reader,
size int64, md5Base64, sha256Hex string, metadata map[string]string) (ObjectPart, error) {
return c.uploadPart(context.Background(), bucket, object, uploadID, data, partID, md5Base64, sha256Hex, size, metadata)
}
// ListObjectParts - List uploaded parts of an incomplete upload.
func (c Core) ListObjectParts(bucket, object, uploadID string, partNumberMarker int, maxParts int) (result ListObjectPartsResult, err error) {
return c.listObjectPartsQuery(bucket, object, uploadID, partNumberMarker, maxParts)
}
// CompleteMultipartUpload - Concatenate uploaded parts and commit to an object.
func (c Core) CompleteMultipartUpload(bucket, object, uploadID string, parts []CompletePart) error {
_, err := c.completeMultipartUpload(context.Background(), bucket, object, uploadID, completeMultipartUpload{
Parts: parts,
})
return err
}
// AbortMultipartUpload - Abort an incomplete upload.
func (c Core) AbortMultipartUpload(bucket, object, uploadID string) error {
return c.abortMultipartUpload(context.Background(), bucket, object, uploadID)
}
// GetBucketPolicy - fetches bucket access policy for a given bucket.
func (c Core) GetBucketPolicy(bucket string) (policy.BucketAccessPolicy, error) {
return c.getBucketPolicy(bucket)
}
// PutBucketPolicy - applies a new bucket access policy for a given bucket.
func (c Core) PutBucketPolicy(bucket string, bucketPolicy policy.BucketAccessPolicy) error {
return c.putBucketPolicy(bucket, bucketPolicy)
}
// GetObject is a lower level API implemented to support reading
// partial objects and also downloading objects with special conditions
// matching etag, modtime etc.
func (c Core) GetObject(bucketName, objectName string, opts GetObjectOptions) (io.ReadCloser, ObjectInfo, error) {
return c.getObject(context.Background(), bucketName, objectName, opts)
}
// StatObject is a lower level API implemented to support special
// conditions matching etag, modtime on a request.
func (c Core) StatObject(bucketName, objectName string, opts StatObjectOptions) (ObjectInfo, error) {
return c.statObject(context.Background(), bucketName, objectName, opts)
}
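// Usage sketch (illustrative, not part of the vendored file): a manual
// multipart upload driven through Core; the names, reader and size are
// placeholders.
//
//   core, _ := NewCore("play.minio.io:9000", "YOUR-ACCESS", "YOUR-SECRET", true)
//   uploadID, _ := core.NewMultipartUpload("mybucket", "big.bin", PutObjectOptions{})
//   part, _ := core.PutObjectPart("mybucket", "big.bin", uploadID, 1, reader, size, "", "")
//   err := core.CompleteMultipartUpload("mybucket", "big.bin", uploadID, []CompletePart{
//       {PartNumber: part.PartNumber, ETag: part.ETag},
//   })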

vendor/github.com/minio/minio-go/docs/validator.go generated vendored Normal file

@@ -0,0 +1,227 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"text/template"
"github.com/a8m/mark"
"github.com/gernest/wow"
"github.com/gernest/wow/spin"
"github.com/minio/cli"
)
func init() {
// Validate go binary.
if _, err := exec.LookPath("go"); err != nil {
panic(err)
}
}
var globalFlags = []cli.Flag{
cli.StringFlag{
Name: "m",
Value: "API.md",
Usage: "Path to markdown api documentation.",
},
cli.StringFlag{
Name: "t",
Value: "checker.go.template",
Usage: "Template used for generating the programs.",
},
cli.IntFlag{
Name: "skip",
Value: 2,
Usage: "Skip entries before validating the code.",
},
}
func runGofmt(path string) (msg string, err error) {
cmdArgs := []string{"-s", "-w", "-l", path}
cmd := exec.Command("gofmt", cmdArgs...)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return "", err
}
return string(stdoutStderr), nil
}
func runGoImports(path string) (msg string, err error) {
cmdArgs := []string{"-w", path}
cmd := exec.Command("goimports", cmdArgs...)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return string(stdoutStderr), err
}
return string(stdoutStderr), nil
}
func runGoBuild(path string) (msg string, err error) {
// Go build the path.
cmdArgs := []string{"build", "-o", "/dev/null", path}
cmd := exec.Command("go", cmdArgs...)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return string(stdoutStderr), err
}
return string(stdoutStderr), nil
}
func validatorAction(ctx *cli.Context) error {
if !ctx.IsSet("m") || !ctx.IsSet("t") {
return nil
}
docPath := ctx.String("m")
var err error
docPath, err = filepath.Abs(docPath)
if err != nil {
return err
}
data, err := ioutil.ReadFile(docPath)
if err != nil {
return err
}
templatePath := ctx.String("t")
templatePath, err = filepath.Abs(templatePath)
if err != nil {
return err
}
skipEntries := ctx.Int("skip")
m := mark.New(string(data), &mark.Options{
Gfm: true, // Github markdown support is enabled by default.
})
t, err := template.ParseFiles(templatePath)
if err != nil {
return err
}
tmpDir, err := ioutil.TempDir("", "md-verifier")
if err != nil {
return err
}
defer os.RemoveAll(tmpDir)
entryN := 1
for i := mark.NodeText; i < mark.NodeCheckbox; i++ {
if mark.NodeCode != mark.NodeType(i) {
m.AddRenderFn(mark.NodeType(i), func(node mark.Node) (s string) {
return ""
})
continue
}
m.AddRenderFn(mark.NodeCode, func(node mark.Node) (s string) {
p, ok := node.(*mark.CodeNode)
if !ok {
return
}
p.Text = strings.NewReplacer("&lt;", "<", "&gt;", ">", "&quot;", `"`, "&amp;", "&").Replace(p.Text)
if skipEntries > 0 {
skipEntries--
return
}
testFilePath := filepath.Join(tmpDir, "example.go")
w, werr := os.Create(testFilePath)
if werr != nil {
panic(werr)
}
t.Execute(w, p)
w.Sync()
w.Close()
entryN++
msg, err := runGofmt(testFilePath)
if err != nil {
fmt.Printf("Failed running gofmt on %s, with (%s):(%s)\n", testFilePath, msg, err)
os.Exit(-1)
}
msg, err = runGoImports(testFilePath)
if err != nil {
fmt.Printf("Failed running gofmt on %s, with (%s):(%s)\n", testFilePath, msg, err)
os.Exit(-1)
}
msg, err = runGoBuild(testFilePath)
if err != nil {
fmt.Printf("Failed running gobuild on %s, with (%s):(%s)\n", testFilePath, msg, err)
fmt.Printf("Code with possible issue in %s:\n%s", docPath, p.Text)
fmt.Printf("To test `go build %s`\n", testFilePath)
os.Exit(-1)
}
// Once successfully built remove the test file
os.Remove(testFilePath)
return
})
}
w := wow.New(os.Stdout, spin.Get(spin.Moon), fmt.Sprintf(" Running validation tests in %s", tmpDir))
w.Start()
// Rendering the markdown executes our checker on each code block.
_ = m.Render()
w.PersistWith(spin.Get(spin.Runner), " Successfully finished tests")
w.Stop()
return nil
}
func main() {
app := cli.NewApp()
app.Action = validatorAction
app.HideVersion = true
app.HideHelpCommand = true
app.Usage = "Validates code block sections inside API.md"
app.Author = "Minio.io"
app.Flags = globalFlags
// Help template for validator
app.CustomAppHelpTemplate = `NAME:
{{.Name}} - {{.Usage}}
USAGE:
{{.Name}} {{if .VisibleFlags}}[FLAGS] {{end}}COMMAND{{if .VisibleFlags}} [COMMAND FLAGS | -h]{{end}} [ARGUMENTS...]
COMMANDS:
{{range .VisibleCommands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
{{end}}{{if .VisibleFlags}}
FLAGS:
{{range .VisibleFlags}}{{.}}
{{end}}{{end}}
TEMPLATE:
Validator uses Go's 'text/template' formatting, so ensure your template
is formatted correctly; see 'docs/checker.go.template' for an example.
EXAMPLE:
go run docs/validator.go -m docs/API.md -t /tmp/mycode.go.template
`
app.Run(os.Args)
}

View File

@@ -0,0 +1,61 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESS, YOUR-SECRET and YOUR-BUCKET are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
minioClient, err := minio.New("play.minio.io:9000", "YOUR-ACCESS", "YOUR-SECRET", true)
if err != nil {
log.Fatalln(err)
}
// minioClient.TraceOn(os.Stderr)
// Create a done channel to control 'ListenBucketNotification' go routine.
doneCh := make(chan struct{})
// Indicate to our routine to exit cleanly upon return.
defer close(doneCh)
// Listen for bucket notifications on "YOUR-BUCKET" filtered by prefix, suffix and events.
for notificationInfo := range minioClient.ListenBucketNotification("YOUR-BUCKET", "PREFIX", "SUFFIX", []string{
"s3:ObjectCreated:*",
"s3:ObjectAccessed:*",
"s3:ObjectRemoved:*",
}, doneCh) {
if notificationInfo.Err != nil {
log.Fatalln(notificationInfo.Err)
}
log.Println(notificationInfo)
}
}

View File

@@ -0,0 +1,52 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
found, err := s3Client.BucketExists("my-bucketname")
if err != nil {
log.Fatalln(err)
}
if found {
log.Println("Bucket found.")
} else {
log.Println("Bucket not found.")
}
}

View File

@@ -0,0 +1,78 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
minio "github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Enable trace.
// s3Client.TraceOn(os.Stderr)
// Prepare the source decryption key (here we assume the same key
// decrypts all source objects).
decKey := minio.NewSSEInfo([]byte{1, 2, 3}, "")
// Source objects to concatenate. We also specify a decryption
// key for each.
src1 := minio.NewSourceInfo("bucket1", "object1", &decKey)
src1.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a")
src2 := minio.NewSourceInfo("bucket2", "object2", &decKey)
src2.SetMatchETagCond("f8ef9c385918b653a31624deb84149d2")
src3 := minio.NewSourceInfo("bucket3", "object3", &decKey)
src3.SetMatchETagCond("5918b653a31624deb84149d2f8ef9c38")
// Create slice of sources.
srcs := []minio.SourceInfo{src1, src2, src3}
// Prepare destination encryption key
encKey := minio.NewSSEInfo([]byte{8, 9, 0}, "")
// Create destination info
dst, err := minio.NewDestinationInfo("bucket", "object", &encKey, nil)
if err != nil {
log.Fatalln(err)
}
err = s3Client.ComposeObject(dst, srcs)
if err != nil {
log.Fatalln(err)
}
log.Println("Composed object successfully.")
}

View File

@@ -0,0 +1,75 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"time"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Enable trace.
// s3Client.TraceOn(os.Stderr)
// Source object
src := minio.NewSourceInfo("my-sourcebucketname", "my-sourceobjectname", nil)
// All following conditions are allowed and can be combined together.
// Set modified condition, copy object modified since 2014 April.
src.SetModifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
// Set unmodified condition, copy object unmodified since 2014 April.
// src.SetUnmodifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
// Set matching ETag condition, copy object which matches the following ETag.
// src.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a")
// Set matching ETag except condition, copy object which does not match the following ETag.
// src.SetMatchETagExceptCond("31624deb84149d2f8ef9c385918b653a")
// Destination object
dst, err := minio.NewDestinationInfo("my-bucketname", "my-objectname", nil, nil)
if err != nil {
log.Fatalln(err)
}
// Initiate copy object.
err = s3Client.CopyObject(dst, src)
if err != nil {
log.Fatalln(err)
}
log.Println("Copied source object /my-sourcebucketname/my-sourceobjectname to destination /my-bucketname/my-objectname successfully.")
}

View File

@@ -0,0 +1,54 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"time"
"context"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname
// and my-filename.csv are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
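// Bound the download with a ten minute timeout via the request context.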
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
if err := s3Client.FGetObjectWithContext(ctx, "my-bucketname", "my-objectname", "my-filename.csv", minio.GetObjectOptions{}); err != nil {
log.Fatalln(err)
}
log.Println("Successfully saved my-filename.csv")
}

View File

@@ -0,0 +1,46 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname
// and my-filename.csv are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
if err := s3Client.FGetObject("my-bucketname", "my-objectname", "my-filename.csv", minio.GetObjectOptions{}); err != nil {
log.Fatalln(err)
}
log.Println("Successfully saved my-filename.csv")
}

View File

@@ -0,0 +1,80 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
"github.com/minio/minio-go/pkg/encrypt"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Specify a local file that we will upload
filePath := "my-testfile"
//// Build an asymmetric key from private and public files
//
// privateKey, err := ioutil.ReadFile("private.key")
// if err != nil {
// t.Fatal(err)
// }
//
// publicKey, err := ioutil.ReadFile("public.key")
// if err != nil {
// t.Fatal(err)
// }
//
// asymmetricKey, err := NewAsymmetricKey(privateKey, publicKey)
// if err != nil {
// t.Fatal(err)
// }
////
// Build a symmetric key
symmetricKey := encrypt.NewSymmetricKey([]byte("my-secret-key-00"))
// Build encryption materials which will encrypt uploaded data
cbcMaterials, err := encrypt.NewCBCSecureMaterials(symmetricKey)
if err != nil {
log.Fatalln(err)
}
// Encrypt file content and upload to the server
n, err := s3Client.FPutEncryptedObject("my-bucketname", "my-objectname", filePath, cbcMaterials)
if err != nil {
log.Fatalln(err)
}
log.Println("Successfully uploaded my-objectname of size", n)
}

View File

@@ -0,0 +1,53 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"time"
"context"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname
// and my-filename.csv are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
if _, err := s3Client.FPutObjectWithContext(ctx, "my-bucketname", "my-objectname", "my-filename.csv", minio.PutObjectOptions{ContentType: "application/csv"}); err != nil {
log.Fatalln(err)
}
log.Println("Successfully uploaded my-filename.csv")
}

View File

@@ -0,0 +1,48 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname
// and my-filename.csv are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
if _, err := s3Client.FPutObject("my-bucketname", "my-objectname", "my-filename.csv", minio.PutObjectOptions{
ContentType: "application/csv",
}); err != nil {
log.Fatalln(err)
}
log.Println("Successfully uploaded my-filename.csv")
}

View File

@@ -0,0 +1,89 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"io"
"log"
"os"
"github.com/minio/minio-go"
"github.com/minio/minio-go/pkg/encrypt"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname and
// my-testfile are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESS-KEY-HERE", "YOUR-SECRET-KEY-HERE", true)
if err != nil {
log.Fatalln(err)
}
//// Build an asymmetric key from private and public files
//
// privateKey, err := ioutil.ReadFile("private.key")
// if err != nil {
// t.Fatal(err)
// }
//
// publicKey, err := ioutil.ReadFile("public.key")
// if err != nil {
// t.Fatal(err)
// }
//
// asymmetricKey, err := NewAsymmetricKey(privateKey, publicKey)
// if err != nil {
// t.Fatal(err)
// }
////
// Build a symmetric key
symmetricKey := encrypt.NewSymmetricKey([]byte("my-secret-key-00"))
// Build encryption materials which will encrypt uploaded data
cbcMaterials, err := encrypt.NewCBCSecureMaterials(symmetricKey)
if err != nil {
log.Fatalln(err)
}
// Get deciphered data from the server; deciphering is handled by cbcMaterials.
reader, err := s3Client.GetEncryptedObject("my-bucketname", "my-objectname", cbcMaterials)
if err != nil {
log.Fatalln(err)
}
defer reader.Close()
// Local file which holds plain data
localFile, err := os.Create("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer localFile.Close()
if _, err := io.Copy(localFile, reader); err != nil {
log.Fatalln(err)
}
}

View File

@@ -0,0 +1,56 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// s3Client.TraceOn(os.Stderr)
notifications, err := s3Client.GetBucketNotification("my-bucketname")
if err != nil {
log.Fatalln(err)
}
log.Println("Bucket notifications are successfully retrieved.")
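// Print every event that is enabled on each configured SNS topic.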
for _, topicConfig := range notifications.TopicConfigs {
for _, e := range topicConfig.Events {
log.Println(e + " event is enabled.")
}
}
}

View File

@@ -0,0 +1,56 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// s3Client.TraceOn(os.Stderr)
// Fetch the policy at 'my-objectprefix'.
policy, err := s3Client.GetBucketPolicy("my-bucketname", "my-objectprefix")
if err != nil {
log.Fatalln(err)
}
// Description of policy output.
// "none" - The specified bucket does not have a bucket policy.
// "readonly" - Read only operations are allowed.
// "writeonly" - Write only operations are allowed.
// "readwrite" - both read and write operations are allowed, the bucket is public.
log.Println("Success - ", policy)
}

View File

@@ -0,0 +1,73 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"io"
"log"
"os"
"time"
"context"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname and
// my-testfile are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESS-KEY-HERE", "YOUR-SECRET-KEY-HERE", true)
if err != nil {
log.Fatalln(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
opts := minio.GetObjectOptions{}
opts.SetModified(time.Now().Round(10 * time.Minute)) // get the object only if it was modified within the last 10 minutes
reader, err := s3Client.GetObjectWithContext(ctx, "my-bucketname", "my-objectname", opts)
if err != nil {
log.Fatalln(err)
}
defer reader.Close()
localFile, err := os.Create("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer localFile.Close()
stat, err := reader.Stat()
if err != nil {
log.Fatalln(err)
}
if _, err := io.CopyN(localFile, reader, stat.Size); err != nil {
log.Fatalln(err)
}
}

View File

@@ -0,0 +1,64 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"io"
"log"
"os"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname, my-objectname and
// my-testfile are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESS-KEY-HERE", "YOUR-SECRET-KEY-HERE", true)
if err != nil {
log.Fatalln(err)
}
reader, err := s3Client.GetObject("my-bucketname", "my-objectname", minio.GetObjectOptions{})
if err != nil {
log.Fatalln(err)
}
defer reader.Close()
localFile, err := os.Create("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer localFile.Close()
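// Stat the object to learn its size, then copy exactly that many bytes to the local file.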
stat, err := reader.Stat()
if err != nil {
log.Fatalln(err)
}
if _, err := io.CopyN(localFile, reader, stat.Size); err != nil {
log.Fatalln(err)
}
}

View File

@@ -0,0 +1,57 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// s3Client.TraceOn(os.Stderr)
// Fetch all policies at 'my-objectprefix'.
policies, err := s3Client.ListBucketPolicies("my-bucketname", "my-objectprefix")
if err != nil {
log.Fatalln(err)
}
// ListBucketPolicies returns a map of object policy rules to their associated permissions
// e.g. mybucket/downloadfolder/* => readonly
// mybucket/shared/* => readwrite
for resource, permission := range policies {
log.Println(resource, " => ", permission)
}
}

View File

@@ -0,0 +1,49 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID and YOUR-SECRETACCESSKEY are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
buckets, err := s3Client.ListBuckets()
if err != nil {
log.Fatalln(err)
}
for _, bucket := range buckets {
log.Println(bucket)
}
}

View File

@@ -0,0 +1,58 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"fmt"
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-prefixname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Create a done channel to control 'ListObjects' go routine.
doneCh := make(chan struct{})
// Indicate to our routine to exit cleanly upon return.
defer close(doneCh)
// List all multipart uploads from a bucket-name with a matching prefix.
for multipartObject := range s3Client.ListIncompleteUploads("my-bucketname", "my-prefixname", true, doneCh) {
if multipartObject.Err != nil {
fmt.Println(multipartObject.Err)
return
}
fmt.Println(multipartObject)
}
}

View File

@@ -0,0 +1,77 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"fmt"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-prefixname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
fmt.Println(err)
return
}
// List 'N' number of objects from a bucket-name with a matching prefix.
listObjectsN := func(bucket, prefix string, recursive bool, N int) (objsInfo []minio.ObjectInfo, err error) {
// Create a done channel to control 'ListObjects' go routine.
doneCh := make(chan struct{}, 1)
// Free the channel upon return.
defer close(doneCh)
i := 1
for object := range s3Client.ListObjects(bucket, prefix, recursive, doneCh) {
if object.Err != nil {
return nil, object.Err
}
i++
// Check whether we have collected N objects.
if i == N {
// Indicate ListObjects go-routine to exit and stop
// feeding the objectInfo channel.
doneCh <- struct{}{}
}
objsInfo = append(objsInfo, object)
}
return objsInfo, nil
}
// List recursively first 100 entries for prefix 'my-prefixname'.
recursive := true
objsInfo, err := listObjectsN("my-bucketname", "my-prefixname", recursive, 100)
if err != nil {
fmt.Println(err)
}
// Print all the entries.
fmt.Println(objsInfo)
}

View File

@@ -0,0 +1,58 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"fmt"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-prefixname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
fmt.Println(err)
return
}
// Create a done channel to control 'ListObjects' go routine.
doneCh := make(chan struct{})
// Indicate to our routine to exit cleanly upon return.
defer close(doneCh)
// List all objects from a bucket-name with a matching prefix.
for object := range s3Client.ListObjects("my-bucketname", "my-prefixname", true, doneCh) {
if object.Err != nil {
fmt.Println(object.Err)
return
}
fmt.Println(object)
}
}

View File

@@ -0,0 +1,58 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"fmt"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-prefixname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
fmt.Println(err)
return
}
// Create a done channel to control 'ListObjects' go routine.
doneCh := make(chan struct{})
// Indicate to our routine to exit cleanly upon return.
defer close(doneCh)
// List all objects from a bucket-name with a matching prefix.
for object := range s3Client.ListObjectsV2("my-bucketname", "my-prefixname", true, doneCh) {
if object.Err != nil {
fmt.Println(object.Err)
return
}
fmt.Println(object)
}
}

View File

@@ -0,0 +1,47 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
err = s3Client.MakeBucket("my-bucketname", "us-east-1")
if err != nil {
log.Fatalln(err)
}
log.Println("Success")
}

View File

@@ -0,0 +1,54 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"net/url"
"time"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Set request parameters
reqParams := make(url.Values)
reqParams.Set("response-content-disposition", "attachment; filename=\"your-filename.txt\"")
// Generate presigned GET object URL.
presignedURL, err := s3Client.PresignedGetObject("my-bucketname", "my-objectname", time.Duration(1000)*time.Second, reqParams)
if err != nil {
log.Fatalln(err)
}
log.Println(presignedURL)
}

View File

@@ -0,0 +1,54 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"net/url"
"time"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Set request parameters
reqParams := make(url.Values)
reqParams.Set("response-content-disposition", "attachment; filename=\"your-filename.txt\"")
// Generate presigned HEAD object URL.
presignedURL, err := s3Client.PresignedHeadObject("my-bucketname", "my-objectname", time.Duration(1000)*time.Second, reqParams)
if err != nil {
log.Fatalln(err)
}
log.Println(presignedURL)
}

View File

@@ -0,0 +1,60 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"fmt"
"log"
"time"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
policy := minio.NewPostPolicy()
policy.SetBucket("my-bucketname")
policy.SetKey("my-objectname")
// Expires in 10 days.
policy.SetExpires(time.Now().UTC().AddDate(0, 0, 10))
// Returns form data for POST form request.
url, formData, err := s3Client.PresignedPostPolicy(policy)
if err != nil {
log.Fatalln(err)
}
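// Emit a ready-to-run curl command that performs the POST policy upload.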
fmt.Printf("curl ")
for k, v := range formData {
fmt.Printf("-F %s=%s ", k, v)
}
fmt.Printf("-F file=@/etc/bash.bashrc ")
fmt.Printf("%s\n", url)
}

View File

@@ -0,0 +1,48 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"time"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
presignedURL, err := s3Client.PresignedPutObject("my-bucketname", "my-objectname", time.Duration(1000)*time.Second)
if err != nil {
log.Fatalln(err)
}
log.Println(presignedURL)
}

View File

@@ -0,0 +1,85 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"os"
"github.com/minio/minio-go"
"github.com/minio/minio-go/pkg/encrypt"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Open a local file that we will upload
file, err := os.Open("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer file.Close()
//// Build an asymmetric key from private and public files
//
// privateKey, err := ioutil.ReadFile("private.key")
// if err != nil {
// t.Fatal(err)
// }
//
// publicKey, err := ioutil.ReadFile("public.key")
// if err != nil {
// t.Fatal(err)
// }
//
// asymmetricKey, err := NewAsymmetricKey(privateKey, publicKey)
// if err != nil {
// t.Fatal(err)
// }
////
// Build a symmetric key
symmetricKey := encrypt.NewSymmetricKey([]byte("my-secret-key-00"))
// Build encryption materials which will encrypt uploaded data
cbcMaterials, err := encrypt.NewCBCSecureMaterials(symmetricKey)
if err != nil {
log.Fatalln(err)
}
// Encrypt file content and upload to the server
n, err := s3Client.PutEncryptedObject("my-bucketname", "my-objectname", file, cbcMaterials)
if err != nil {
log.Fatalln(err)
}
log.Println("Successfully uploaded my-objectname of size", n)
}

View File

@@ -0,0 +1,68 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"os"
"time"
"context"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
defer cancel()
object, err := os.Open("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer object.Close()
objectStat, err := object.Stat()
if err != nil {
log.Fatalln(err)
}
n, err := s3Client.PutObjectWithContext(ctx, "my-bucketname", "my-objectname", object, objectStat.Size(), minio.PutObjectOptions{
ContentType: "application/octet-stream",
})
if err != nil {
log.Fatalln(err)
}
log.Println("Successfully uploaded my-objectname of size", n)
}

View File

@@ -0,0 +1,87 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"bytes"
"crypto/md5"
"encoding/base64"
"io/ioutil"
"log"
minio "github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
minioClient, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
content := bytes.NewReader([]byte("Hello again"))
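// SSE-C with AES256 requires exactly 32 bytes of key material.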
key := []byte("32byteslongsecretkeymustprovided")
h := md5.New()
h.Write(key)
encryptionKey := base64.StdEncoding.EncodeToString(key)
encryptionKeyMD5 := base64.StdEncoding.EncodeToString(h.Sum(nil))
// Amazon S3 does not store the encryption key you provide.
// Instead S3 stores a randomly salted HMAC value of the
// encryption key in order to validate future requests.
// The salted HMAC value cannot be used to derive the value
// of the encryption key or to decrypt the contents of the
// encrypted object. That means, if you lose the encryption
// key, you lose the object.
var metadata = map[string]string{
"x-amz-server-side-encryption-customer-algorithm": "AES256",
"x-amz-server-side-encryption-customer-key": encryptionKey,
"x-amz-server-side-encryption-customer-key-MD5": encryptionKeyMD5,
}
// minioClient.TraceOn(os.Stderr) // Enable to debug.
_, err = minioClient.PutObject("mybucket", "my-encrypted-object.txt", content, 11, minio.PutObjectOptions{UserMetadata: metadata})
if err != nil {
log.Fatalln(err)
}
opts := minio.GetObjectOptions{}
for k, v := range metadata {
opts.Set(k, v)
}
coreClient := minio.Core{Client: minioClient}
reader, _, err := coreClient.GetObject("mybucket", "my-encrypted-object.txt", opts)
if err != nil {
log.Fatalln(err)
}
defer reader.Close()
decBytes, err := ioutil.ReadAll(reader)
if err != nil {
log.Fatalln(err)
}
if !bytes.Equal(decBytes, []byte("Hello again")) {
log.Fatalf("Expected %q, got %q", "Hello again", string(decBytes))
}
}

View File

@@ -0,0 +1,64 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/cheggaaa/pb"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
reader, err := s3Client.GetObject("my-bucketname", "my-objectname", minio.GetObjectOptions{})
if err != nil {
log.Fatalln(err)
}
defer reader.Close()
objectInfo, err := reader.Stat()
if err != nil {
log.Fatalln(err)
}
// Progress reader is notified as PutObject makes progress with
// the Reads inside.
progress := pb.New64(objectInfo.Size)
progress.Start()
n, err := s3Client.PutObject("my-bucketname", "my-objectname-progress", reader, objectInfo.Size, minio.PutObjectOptions{ContentType: "application/octet-stream", Progress: progress})
if err != nil {
log.Fatalln(err)
}
log.Println("Successfully uploaded my-objectname-progress of size", n)
}

View File

@@ -0,0 +1,62 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"os"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// Enable S3 transfer accelerate endpoint.
s3Client.SetS3TransferAccelerate("s3-accelerate.amazonaws.com")
object, err := os.Open("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer object.Close()
objectStat, err := object.Stat()
if err != nil {
log.Fatalln(err)
}
n, err := s3Client.PutObject("my-bucketname", "my-objectname", object, objectStat.Size(), minio.PutObjectOptions{ContentType: "application/octet-stream"})
if err != nil {
log.Fatalln(err)
}
log.Println("Uploaded", "my-objectname", " of size: ", n, "Successfully.")
}

View File

@@ -0,0 +1,55 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"os"
minio "github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
object, err := os.Open("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer object.Close()
n, err := s3Client.PutObject("my-bucketname", "my-objectname", object, -1, minio.PutObjectOptions{})
if err != nil {
log.Fatalln(err)
}
log.Println("Uploaded", "my-objectname", " of size: ", n, "Successfully.")
}

View File

@@ -0,0 +1,58 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"os"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
// my-objectname are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
object, err := os.Open("my-testfile")
if err != nil {
log.Fatalln(err)
}
defer object.Close()
objectStat, err := object.Stat()
if err != nil {
log.Fatalln(err)
}
n, err := s3Client.PutObject("my-bucketname", "my-objectname", object, objectStat.Size(), minio.PutObjectOptions{ContentType: "application/octet-stream"})
if err != nil {
log.Fatalln(err)
}
log.Println("Uploaded", "my-objectname", " of size: ", n, "Successfully.")
}

View File

@@ -0,0 +1,50 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// s3Client.TraceOn(os.Stderr)
err = s3Client.RemoveAllBucketNotification("my-bucketname")
if err != nil {
log.Fatalln(err)
}
log.Println("Bucket notification are successfully removed.")
}

View File

@@ -0,0 +1,49 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// This operation will only work if your bucket is empty.
err = s3Client.RemoveBucket("my-bucketname")
if err != nil {
log.Fatalln(err)
}
log.Println("Success")
}

View File

@@ -0,0 +1,47 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
err = s3Client.RemoveIncompleteUpload("my-bucketname", "my-objectname")
if err != nil {
log.Fatalln(err)
}
log.Println("Success")
}

View File

@@ -0,0 +1,46 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
err = s3Client.RemoveObject("my-bucketname", "my-objectname")
if err != nil {
log.Fatalln(err)
}
log.Println("Success")
}

View File

@@ -0,0 +1,65 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
objectsCh := make(chan string)
// doneCh controls the lifetime of the object listing below.
doneCh := make(chan struct{})
defer close(doneCh)
// Send object names that are needed to be removed to objectsCh
go func() {
defer close(objectsCh)
// List all objects from a bucket-name with a matching prefix.
for object := range s3Client.ListObjects("my-bucketname", "my-prefixname", true, doneCh) {
if object.Err != nil {
log.Fatalln(object.Err)
}
objectsCh <- object.Key
}
}()
// Call RemoveObjects API
errorCh := s3Client.RemoveObjects("my-bucketname", objectsCh)
// Print errors received from RemoveObjects API
for e := range errorCh {
log.Fatalln("Failed to remove " + e.ObjectName + ", error: " + e.Err.Error())
}
log.Println("Success")
}

View File

@@ -0,0 +1,86 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// s3Client.TraceOn(os.Stderr)
// ARN represents a notification channel that needs to be created in your S3 provider
// (e.g. http://docs.aws.amazon.com/sns/latest/dg/CreateTopic.html)
// An example of an ARN:
// arn:aws:sns:us-east-1:804064459714:UploadPhoto
// where "aws" is the provider, "sns" the service, "us-east-1" the
// region, "804064459714" the account ID and "UploadPhoto" the
// notification name.
//
// You should replace YOUR-PROVIDER, YOUR-SERVICE, YOUR-REGION, YOUR-ACCOUNT-ID and YOUR-RESOURCE
// with actual values that you receive from the S3 provider
// Here you create a new Topic notification
topicArn := minio.NewArn("YOUR-PROVIDER", "YOUR-SERVICE", "YOUR-REGION", "YOUR-ACCOUNT-ID", "YOUR-RESOURCE")
topicConfig := minio.NewNotificationConfig(topicArn)
topicConfig.AddEvents(minio.ObjectCreatedAll, minio.ObjectRemovedAll)
topicConfig.AddFilterPrefix("photos/")
topicConfig.AddFilterSuffix(".jpg")
// Create a new Queue notification
queueArn := minio.NewArn("YOUR-PROVIDER", "YOUR-SERVICE", "YOUR-REGION", "YOUR-ACCOUNT-ID", "YOUR-RESOURCE")
queueConfig := minio.NewNotificationConfig(queueArn)
queueConfig.AddEvents(minio.ObjectRemovedAll)
// Create a new Lambda (CloudFunction)
lambdaArn := minio.NewArn("YOUR-PROVIDER", "YOUR-SERVICE", "YOUR-REGION", "YOUR-ACCOUNT-ID", "YOUR-RESOURCE")
lambdaConfig := minio.NewNotificationConfig(lambdaArn)
lambdaConfig.AddEvents(minio.ObjectRemovedAll)
lambdaConfig.AddFilterSuffix(".swp")
// Now, set all previously created notification configs
bucketNotification := minio.BucketNotification{}
bucketNotification.AddTopic(topicConfig)
bucketNotification.AddQueue(queueConfig)
bucketNotification.AddLambda(lambdaConfig)
err = s3Client.SetBucketNotification("YOUR-BUCKET", bucketNotification)
if err != nil {
log.Fatalln("Error: " + err.Error())
}
log.Println("Success")
}

View File

@@ -0,0 +1,55 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
"github.com/minio/minio-go/pkg/policy"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY and my-bucketname are
// dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
// s3Client.TraceOn(os.Stderr)
// Description of policy input.
// policy.BucketPolicyNone - Remove any previously applied bucket policy at a prefix.
// policy.BucketPolicyReadOnly - Set read-only operations at a prefix.
// policy.BucketPolicyWriteOnly - Set write-only operations at a prefix.
// policy.BucketPolicyReadWrite - Set read-write operations at a prefix.
err = s3Client.SetBucketPolicy("my-bucketname", "my-objectprefix", policy.BucketPolicyReadWrite)
if err != nil {
log.Fatalln(err)
}
log.Println("Success")
}

View File

@@ -0,0 +1,46 @@
// +build ignore
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package main
import (
"log"
"github.com/minio/minio-go"
)
func main() {
// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-bucketname and my-objectname
// are dummy values, please replace them with original values.
// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
// This boolean value is the last argument for New().
// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
// determined based on the Endpoint value.
s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
if err != nil {
log.Fatalln(err)
}
stat, err := s3Client.StatObject("my-bucketname", "my-objectname", minio.StatObjectOptions{})
if err != nil {
log.Fatalln(err)
}
log.Println(stat)
}

6939
vendor/github.com/minio/minio-go/functional_tests.go generated vendored Normal file

File diff suppressed because it is too large

71
vendor/github.com/minio/minio-go/hook-reader.go generated vendored Normal file
View File

@@ -0,0 +1,71 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import "io"
// hookReader hooks an additional reader into the source stream. It is
// useful for making progress bars. The second reader is appropriately
// notified about the exact number of bytes read from the primary
// source on each Read operation.
type hookReader struct {
source io.Reader
hook io.Reader
}
// Seek implements io.Seeker. It seeks the source if the source
// implements io.Seeker, otherwise it tries the hook's Seek method.
func (hr *hookReader) Seek(offset int64, whence int) (n int64, err error) {
// If the source has an embedded Seeker, use it.
sourceSeeker, ok := hr.source.(io.Seeker)
if ok {
return sourceSeeker.Seek(offset, whence)
}
// If the hook has an embedded Seeker, use it.
hookSeeker, ok := hr.hook.(io.Seeker)
if ok {
return hookSeeker.Seek(offset, whence)
}
return n, nil
}
// Read implements io.Reader. Always reads from the source, the return
// value 'n' number of bytes are reported through the hook. Returns
// error for all non io.EOF conditions.
func (hr *hookReader) Read(b []byte) (n int, err error) {
n, err = hr.source.Read(b)
if err != nil && err != io.EOF {
return n, err
}
// Progress the hook with the total read bytes from the source.
if _, herr := hr.hook.Read(b[:n]); herr != nil {
if herr != io.EOF {
return n, herr
}
}
return n, err
}
// newHook returns an io.Reader wrapping the source in a hookReader that
// reports the data read from the source to the hook.
func newHook(source, hook io.Reader) io.Reader {
if hook == nil {
return source
}
return &hookReader{source, hook}
}
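
A minimal in-package sketch of how the hook side is typically used; byteCounter below is hypothetical (not part of this file) and merely stands in for a progress bar:

// byteCounter is a hypothetical hook that tallies bytes passing through.
type byteCounter struct{ n int64 }

// Read records the size of each chunk hookReader hands to it and never fails.
func (c *byteCounter) Read(b []byte) (int, error) {
	c.n += int64(len(b))
	return len(b), nil
}

// Usage sketch:
//   counter := &byteCounter{}
//   r := newHook(source, counter)
//   io.Copy(ioutil.Discard, r) // counter.n now equals the bytes read from source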

View File

@@ -0,0 +1,89 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
// A Chain will search for a provider which returns credentials
// and cache that provider until Retrieve is called again.
//
// The Chain provides a way of chaining multiple providers together
// which will pick the first available using priority order of the
// Providers in the list.
//
// If none of the Providers retrieve a valid credentials Value, Chain's
// Retrieve() will return the anonymous (no credentials) value.
//
// If a Provider is found which returns a valid credentials Value, Chain
// will cache that Provider for all calls to IsExpired(), until Retrieve is
// called again after IsExpired() is true.
//
// creds := credentials.NewChainCredentials(
// []credentials.Provider{
// &credentials.EnvAWSS3{},
// &credentials.EnvMinio{},
// })
//
// // Usage of ChainCredentials.
// mc, err := minio.NewWithCredentials(endpoint, creds, secure, "us-east-1")
// if err != nil {
// log.Fatalln(err)
// }
//
type Chain struct {
Providers []Provider
curr Provider
}
// NewChainCredentials returns a pointer to a new Credentials object
// wrapping a chain of providers.
func NewChainCredentials(providers []Provider) *Credentials {
return New(&Chain{
Providers: append([]Provider{}, providers...),
})
}
// Retrieve returns the first valid credentials value, or the anonymous
// (no credentials) value if no provider returned one.
//
// If a provider is found with credentials, it will be cached and any calls
// to IsExpired() will return the expired state of the cached provider.
func (c *Chain) Retrieve() (Value, error) {
for _, p := range c.Providers {
creds, _ := p.Retrieve()
// Always prioritize non-anonymous providers, if any.
if creds.AccessKeyID == "" && creds.SecretAccessKey == "" {
continue
}
c.curr = p
return creds, nil
}
// At this point we have exhausted all the providers and are left
// without any credentials; return the anonymous value.
return Value{
SignerType: SignatureAnonymous,
}, nil
}
// IsExpired returns the expired state of the currently cached provider
// if there is one. If there is no current provider, true will be returned.
func (c *Chain) IsExpired() bool {
if c.curr != nil {
return c.curr.IsExpired()
}
return true
}
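
To make the doc comment above concrete, a runnable sketch of chaining the env providers (endpoint and region are placeholders, as in the examples earlier in this change):

package main

import (
	"log"

	minio "github.com/minio/minio-go"
	"github.com/minio/minio-go/pkg/credentials"
)

func main() {
	// The first provider returning non-anonymous credentials wins;
	// AWS-style variables are consulted before Minio-style ones.
	creds := credentials.NewChainCredentials([]credentials.Provider{
		&credentials.EnvAWS{},
		&credentials.EnvMinio{},
	})
	s3Client, err := minio.NewWithCredentials("s3.amazonaws.com", creds, true, "us-east-1")
	if err != nil {
		log.Fatalln(err)
	}
	_ = s3Client // use the client as in the other examples
}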

View File

@@ -0,0 +1,175 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import (
"sync"
"time"
)
// A Value is the AWS credentials value for individual credential fields.
type Value struct {
// AWS Access key ID
AccessKeyID string
// AWS Secret Access Key
SecretAccessKey string
// AWS Session Token
SessionToken string
// Signature Type.
SignerType SignatureType
}
// A Provider is the interface for any component which will provide credentials
// Value. A provider is required to manage its own Expired state, and what to
// be expired means.
type Provider interface {
// Retrieve returns nil if it successfully retrieved the value.
// An error is returned if the value was not obtainable or is empty.
Retrieve() (Value, error)
// IsExpired returns if the credentials are no longer valid, and need
// to be retrieved.
IsExpired() bool
}
// An Expiry provides shared expiration logic to be used by credentials
// providers to implement expiry functionality.
//
// The best method to use this struct is as an anonymous field within the
// provider's struct.
//
// Example:
// type IAMCredentialProvider struct {
// Expiry
// ...
// }
type Expiry struct {
// The date/time when to expire on
expiration time.Time
// If set will be used by IsExpired to determine the current time.
// Defaults to time.Now if CurrentTime is not set.
CurrentTime func() time.Time
}
// SetExpiration sets the expiration IsExpired will check when called.
//
// If window is greater than 0 the expiration time will be reduced by the
// window value.
//
// Using a window is helpful to trigger credentials to expire sooner than
// the expiration time given to ensure no requests are made with expired
// tokens.
func (e *Expiry) SetExpiration(expiration time.Time, window time.Duration) {
e.expiration = expiration
if window > 0 {
e.expiration = e.expiration.Add(-window)
}
}
// IsExpired returns if the credentials are expired.
func (e *Expiry) IsExpired() bool {
if e.CurrentTime == nil {
e.CurrentTime = time.Now
}
return e.expiration.Before(e.CurrentTime())
}
// Credentials - A container for concurrency-safe retrieval of credentials Value.
// Credentials will cache the credentials value until they expire. Once the value
// expires the next Get will attempt to retrieve valid credentials.
//
// Credentials is safe to use across multiple goroutines and will manage the
// synchronous state so the Providers do not need to implement their own
// synchronization.
//
// The first Credentials.Get() will always call Provider.Retrieve() to get the
// first instance of the credentials Value. All calls to Get() after that
// will return the cached credentials Value until IsExpired() returns true.
type Credentials struct {
sync.Mutex
creds Value
forceRefresh bool
provider Provider
}
// New returns a pointer to a new Credentials with the provider set.
func New(provider Provider) *Credentials {
return &Credentials{
provider: provider,
forceRefresh: true,
}
}
// Get returns the credentials value, or error if the credentials Value failed
// to be retrieved.
//
// Will return the cached credentials Value if it has not expired. If the
// credentials Value has expired the Provider's Retrieve() will be called
// to refresh the credentials.
//
// If Credentials.Expire() was called the credentials Value will be force
// expired, and the next call to Get() will cause them to be refreshed.
func (c *Credentials) Get() (Value, error) {
c.Lock()
defer c.Unlock()
if c.isExpired() {
creds, err := c.provider.Retrieve()
if err != nil {
return Value{}, err
}
c.creds = creds
c.forceRefresh = false
}
return c.creds, nil
}
// Expire expires the credentials and forces them to be retrieved on the
// next call to Get().
//
// This will override the Provider's expired state, and force Credentials
// to call the Provider's Retrieve().
func (c *Credentials) Expire() {
c.Lock()
defer c.Unlock()
c.forceRefresh = true
}
// IsExpired returns if the credentials are no longer valid, and need
// to be refreshed.
//
// If the Credentials were forced to be expired with Expire() this will
// reflect that override.
func (c *Credentials) IsExpired() bool {
c.Lock()
defer c.Unlock()
return c.isExpired()
}
// isExpired helper method wrapping the definition of expired credentials.
func (c *Credentials) isExpired() bool {
return c.forceRefresh || c.provider.IsExpired()
}
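
To make the Expiry embedding described above concrete, a hedged in-package sketch; tokenProvider and its hard-coded values are hypothetical, not part of this package:

// tokenProvider is a hypothetical provider for short-lived credentials.
type tokenProvider struct {
	Expiry // provides SetExpiration and IsExpired
}

// Retrieve pretends to fetch a token valid for one hour, expiring it a
// window early so no request is sent with an already-expired token.
func (t *tokenProvider) Retrieve() (Value, error) {
	t.SetExpiration(time.Now().Add(1*time.Hour), 10*time.Second)
	return Value{
		AccessKeyID:     "TEMP-KEY",    // placeholder
		SecretAccessKey: "TEMP-SECRET", // placeholder
		SignerType:      SignatureV4,
	}, nil
}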

View File

@@ -0,0 +1,62 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// Package credentials provides credential retrieval and management
// for S3 compatible object storage.
//
// By default the Credentials.Get() will cache the successful result of a
// Provider's Retrieve() until Provider.IsExpired() returns true, at which
// point Credentials will call the Provider's Retrieve() to get a new credential Value.
//
// The Provider is responsible for determining when credentials have expired.
// It is also important to note that Credentials will always call Retrieve the
// first time Credentials.Get() is called.
//
// Example of using the environment variable credentials.
//
// creds := NewEnvAWS()
// // Retrieve the credentials value
// credValue, err := creds.Get()
// if err != nil {
// // handle error
// }
//
// Example of forcing credentials to expire and be refreshed on the next Get().
// This may be helpful to proactively expire credentials and refresh them sooner
// than they would naturally expire on their own.
//
// creds := NewIAM("")
// creds.Expire()
// credsValue, err := creds.Get()
// // New credentials will be retrieved instead of from cache.
//
//
// Custom Provider
//
// Each Provider built into this package also provides a helper method to generate
// a Credentials pointer setup with the provider. To use a custom Provider just
// create a type which satisfies the Provider interface and pass it to the
// New function.
//
// type MyProvider struct{}
// func (m *MyProvider) Retrieve() (Value, error) {...}
// func (m *MyProvider) IsExpired() bool {...}
//
// creds := New(&MyProvider{})
// credValue, err := creds.Get()
//
package credentials
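
A hedged, runnable sketch of the Get/Expire lifecycle described above, using the static provider from this package (key values are placeholders):

package main

import (
	"log"

	"github.com/minio/minio-go/pkg/credentials"
)

func main() {
	creds := credentials.NewStaticV4("YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", "")
	v, err := creds.Get() // the first Get always calls Retrieve
	if err != nil {
		log.Fatalln(err)
	}
	log.Println("signer:", v.SignerType)
	creds.Expire() // forces the next Get to call Retrieve again
}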

View File

@@ -0,0 +1,71 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import "os"
// An EnvAWS retrieves credentials from the environment variables of the
// running process. Environment credentials never expire.
//
// Environment variables used:
//
// * Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY.
// * Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY.
// * Secret Token: AWS_SESSION_TOKEN.
type EnvAWS struct {
retrieved bool
}
// NewEnvAWS returns a pointer to a new Credentials object
// wrapping the environment variable provider.
func NewEnvAWS() *Credentials {
return New(&EnvAWS{})
}
// Retrieve retrieves the keys from the environment.
func (e *EnvAWS) Retrieve() (Value, error) {
e.retrieved = false
id := os.Getenv("AWS_ACCESS_KEY_ID")
if id == "" {
id = os.Getenv("AWS_ACCESS_KEY")
}
secret := os.Getenv("AWS_SECRET_ACCESS_KEY")
if secret == "" {
secret = os.Getenv("AWS_SECRET_KEY")
}
signerType := SignatureV4
if id == "" || secret == "" {
signerType = SignatureAnonymous
}
e.retrieved = true
return Value{
AccessKeyID: id,
SecretAccessKey: secret,
SessionToken: os.Getenv("AWS_SESSION_TOKEN"),
SignerType: signerType,
}, nil
}
// IsExpired returns true if the credentials have not yet been retrieved.
func (e *EnvAWS) IsExpired() bool {
return !e.retrieved
}
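
A hedged sketch wiring this provider into a client (endpoint and region are placeholders; the Setenv calls only stand in for a real environment):

package main

import (
	"log"
	"os"

	minio "github.com/minio/minio-go"
	"github.com/minio/minio-go/pkg/credentials"
)

func main() {
	// Normally set outside the process; set here for illustration only.
	os.Setenv("AWS_ACCESS_KEY_ID", "YOUR-ACCESSKEYID")
	os.Setenv("AWS_SECRET_ACCESS_KEY", "YOUR-SECRETACCESSKEY")
	s3Client, err := minio.NewWithCredentials("s3.amazonaws.com",
		credentials.NewEnvAWS(), true, "us-east-1")
	if err != nil {
		log.Fatalln(err)
	}
	_ = s3Client
}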

View File

@@ -0,0 +1,62 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import "os"
// An EnvMinio retrieves credentials from the environment variables of the
// running process. Environment credentials never expire.
//
// Environment variables used:
//
// * Access Key ID: MINIO_ACCESS_KEY.
// * Secret Access Key: MINIO_SECRET_KEY.
type EnvMinio struct {
retrieved bool
}
// NewEnvMinio returns a pointer to a new Credentials object
// wrapping the environment variable provider.
func NewEnvMinio() *Credentials {
return New(&EnvMinio{})
}
// Retrieve retrieves the keys from the environment.
func (e *EnvMinio) Retrieve() (Value, error) {
e.retrieved = false
id := os.Getenv("MINIO_ACCESS_KEY")
secret := os.Getenv("MINIO_SECRET_KEY")
signerType := SignatureV4
if id == "" || secret == "" {
signerType = SignatureAnonymous
}
e.retrieved = true
return Value{
AccessKeyID: id,
SecretAccessKey: secret,
SignerType: signerType,
}, nil
}
// IsExpired returns true if the credentials have not yet been retrieved.
func (e *EnvMinio) IsExpired() bool {
return !e.retrieved
}

View File

@@ -0,0 +1,120 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import (
"os"
"path/filepath"
"github.com/go-ini/ini"
homedir "github.com/mitchellh/go-homedir"
)
// A FileAWSCredentials retrieves credentials from the current user's home
// directory, and keeps track if those credentials are expired.
//
// Profile ini file example: $HOME/.aws/credentials
type FileAWSCredentials struct {
// Path to the shared credentials file.
//
// If empty will look for "AWS_SHARED_CREDENTIALS_FILE" env variable. If the
// env value is empty will default to current user's home directory.
// Linux/OSX: "$HOME/.aws/credentials"
// Windows: "%USERPROFILE%\.aws\credentials"
filename string
// AWS Profile to extract credentials from the shared credentials file. If empty
// will default to environment variable "AWS_PROFILE" or "default" if
// environment variable is also not set.
profile string
// retrieved states if the credentials have been successfully retrieved.
retrieved bool
}
// NewFileAWSCredentials returns a pointer to a new Credentials object
// wrapping the Profile file provider.
func NewFileAWSCredentials(filename string, profile string) *Credentials {
return New(&FileAWSCredentials{
filename: filename,
profile: profile,
})
}
// Retrieve reads and extracts the shared credentials from the current
// users home directory.
func (p *FileAWSCredentials) Retrieve() (Value, error) {
if p.filename == "" {
p.filename = os.Getenv("AWS_SHARED_CREDENTIALS_FILE")
if p.filename == "" {
homeDir, err := homedir.Dir()
if err != nil {
return Value{}, err
}
p.filename = filepath.Join(homeDir, ".aws", "credentials")
}
}
if p.profile == "" {
p.profile = os.Getenv("AWS_PROFILE")
if p.profile == "" {
p.profile = "default"
}
}
p.retrieved = false
iniProfile, err := loadProfile(p.filename, p.profile)
if err != nil {
return Value{}, err
}
// Default to empty string if not found.
id := iniProfile.Key("aws_access_key_id")
// Default to empty string if not found.
secret := iniProfile.Key("aws_secret_access_key")
// Default to empty string if not found.
token := iniProfile.Key("aws_session_token")
p.retrieved = true
return Value{
AccessKeyID: id.String(),
SecretAccessKey: secret.String(),
SessionToken: token.String(),
SignerType: SignatureV4,
}, nil
}
// IsExpired returns if the shared credentials have expired.
func (p *FileAWSCredentials) IsExpired() bool {
return !p.retrieved
}
// loadProfile loads the named profile from the shared credentials file.
// The credentials retrieved from the profile will be returned, or an error
// will be returned if it fails to read from the file or the data is invalid.
func loadProfile(filename, profile string) (*ini.Section, error) {
config, err := ini.Load(filename)
if err != nil {
return nil, err
}
iniProfile, err := config.GetSection(profile)
if err != nil {
return nil, err
}
return iniProfile, nil
}
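
For reference, a minimal profile file of the shape loadProfile expects (all values are placeholders):

; $HOME/.aws/credentials
[default]
aws_access_key_id     = YOUR-ACCESSKEYID
aws_secret_access_key = YOUR-SECRETACCESSKEY
aws_session_token     = OPTIONAL-SESSION-TOKEN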

View File

@@ -0,0 +1,129 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import (
"encoding/json"
"io/ioutil"
"os"
"path/filepath"
"runtime"
homedir "github.com/mitchellh/go-homedir"
)
// A FileMinioClient retrieves credentials from the current user's home
// directory, and keeps track if those credentials are expired.
//
// Configuration file example: $HOME/.mc/config.json
type FileMinioClient struct {
// Path to the shared credentials file.
//
// If empty will look for "MINIO_SHARED_CREDENTIALS_FILE" env variable. If the
// env value is empty will default to current user's home directory.
// Linux/OSX: "$HOME/.mc/config.json"
// Windows: "%USERPROFILE%\mc\config.json"
filename string
// Minio Alias to extract credentials from the shared credentials file. If empty
// will default to environment variable "MINIO_ALIAS" or "s3" if the
// environment variable is also not set.
alias string
// retrieved states if the credentials have been successfully retrieved.
retrieved bool
}
// NewFileMinioClient returns a pointer to a new Credentials object
// wrapping the Alias file provider.
func NewFileMinioClient(filename string, alias string) *Credentials {
return New(&FileMinioClient{
filename: filename,
alias: alias,
})
}
// Retrieve reads and extracts the shared credentials from the current
// users home directory.
func (p *FileMinioClient) Retrieve() (Value, error) {
if p.filename == "" {
homeDir, err := homedir.Dir()
if err != nil {
return Value{}, err
}
p.filename = filepath.Join(homeDir, ".mc", "config.json")
if runtime.GOOS == "windows" {
p.filename = filepath.Join(homeDir, "mc", "config.json")
}
}
if p.alias == "" {
p.alias = os.Getenv("MINIO_ALIAS")
if p.alias == "" {
p.alias = "s3"
}
}
p.retrieved = false
hostCfg, err := loadAlias(p.filename, p.alias)
if err != nil {
return Value{}, err
}
p.retrieved = true
return Value{
AccessKeyID: hostCfg.AccessKey,
SecretAccessKey: hostCfg.SecretKey,
SignerType: parseSignatureType(hostCfg.API),
}, nil
}
// IsExpired returns if the shared credentials have expired.
func (p *FileMinioClient) IsExpired() bool {
return !p.retrieved
}
// hostConfig configuration of a host.
type hostConfig struct {
URL string `json:"url"`
AccessKey string `json:"accessKey"`
SecretKey string `json:"secretKey"`
API string `json:"api"`
}
// config config version.
type config struct {
Version string `json:"version"`
Hosts map[string]hostConfig `json:"hosts"`
}
// loadAlias loads the named alias from the shared credentials file.
// The credentials retrieved from the alias will be returned, or an error
// will be returned if it fails to read from the file.
func loadAlias(filename, alias string) (hostConfig, error) {
cfg := &config{}
configBytes, err := ioutil.ReadFile(filename)
if err != nil {
return hostConfig{}, err
}
if err = json.Unmarshal(configBytes, cfg); err != nil {
return hostConfig{}, err
}
return cfg.Hosts[alias], nil
}
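
For reference, the minimal config.json shape implied by the struct tags above (all values are placeholders):

{
  "version": "9",
  "hosts": {
    "s3": {
      "url": "https://s3.amazonaws.com",
      "accessKey": "YOUR-ACCESSKEYID",
      "secretKey": "YOUR-SECRETACCESSKEY",
      "api": "S3v4"
    }
  }
}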

View File

@@ -0,0 +1,214 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import (
"bufio"
"encoding/json"
"errors"
"net/http"
"net/url"
"path"
"time"
)
// DefaultExpiryWindow - Default expiry window.
// ExpiryWindow will allow the credentials to trigger refreshing
// prior to the credentials actually expiring. This is beneficial
// so race conditions with expiring credentials do not cause
// requests to fail unexpectedly due to ExpiredTokenException exceptions.
const DefaultExpiryWindow = time.Second * 10 // 10 secs
// An IAM retrieves credentials from the EC2 service, and keeps track if
// those credentials are expired.
type IAM struct {
Expiry
// Required http Client to use when connecting to IAM metadata service.
Client *http.Client
// Custom endpoint to fetch IAM role credentials.
endpoint string
}
// IAM Roles for Amazon EC2
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
const (
defaultIAMRoleEndpoint = "http://169.254.169.254"
defaultIAMSecurityCredsPath = "/latest/meta-data/iam/security-credentials"
)
// NewIAM returns a pointer to a new Credentials object wrapping
// the IAM provider. The endpoint argument allows overriding the default
// EC2 metadata endpoint; pass an empty string to use the default.
func NewIAM(endpoint string) *Credentials {
if endpoint == "" {
endpoint = defaultIAMRoleEndpoint
}
p := &IAM{
Client: &http.Client{
Transport: http.DefaultTransport,
},
endpoint: endpoint,
}
return New(p)
}
// Retrieve retrieves credentials from the EC2 service.
// An error will be returned if the request fails, or if the desired
// credentials cannot be extracted from the response.
func (m *IAM) Retrieve() (Value, error) {
roleCreds, err := getCredentials(m.Client, m.endpoint)
if err != nil {
return Value{}, err
}
// Expiry window is set to 10secs.
m.SetExpiration(roleCreds.Expiration, DefaultExpiryWindow)
return Value{
AccessKeyID: roleCreds.AccessKeyID,
SecretAccessKey: roleCreds.SecretAccessKey,
SessionToken: roleCreds.Token,
SignerType: SignatureV4,
}, nil
}
// An ec2RoleCredRespBody provides the shape for unmarshaling credential
// request responses.
type ec2RoleCredRespBody struct {
// Success State
Expiration time.Time
AccessKeyID string
SecretAccessKey string
Token string
// Error state
Code string
Message string
// Unused params.
LastUpdated time.Time
Type string
}
// Get the final IAM role URL where the request will
// be sent to fetch the rolling access credentials.
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
func getIAMRoleURL(endpoint string) (*url.URL, error) {
if endpoint == "" {
endpoint = defaultIAMRoleEndpoint
}
u, err := url.Parse(endpoint)
if err != nil {
return nil, err
}
u.Path = defaultIAMSecurityCredsPath
return u, nil
}
// listRoleNames lists the credential role names associated
// with the current EC2 service. An error is returned if there are
// no credentials, or if making or receiving the request fails.
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
func listRoleNames(client *http.Client, u *url.URL) ([]string, error) {
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}
resp, err := client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, errors.New(resp.Status)
}
credsList := []string{}
s := bufio.NewScanner(resp.Body)
for s.Scan() {
credsList = append(credsList, s.Text())
}
if err := s.Err(); err != nil {
return nil, err
}
return credsList, nil
}
// getCredentials - obtains the credentials from the IAM role name associated with
// the current EC2 service.
//
// If the credentials cannot be found, or there is an error
// reading the response, an error will be returned.
func getCredentials(client *http.Client, endpoint string) (ec2RoleCredRespBody, error) {
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
u, err := getIAMRoleURL(endpoint)
if err != nil {
return ec2RoleCredRespBody{}, err
}
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
roleNames, err := listRoleNames(client, u)
if err != nil {
return ec2RoleCredRespBody{}, err
}
if len(roleNames) == 0 {
return ec2RoleCredRespBody{}, errors.New("No IAM roles attached to this EC2 service")
}
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
// - An instance profile can contain only one IAM role. This limit cannot be increased.
roleName := roleNames[0]
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
// The following command retrieves the security credentials for an
// IAM role named `s3access`.
//
// $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
//
u.Path = path.Join(u.Path, roleName)
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return ec2RoleCredRespBody{}, err
}
resp, err := client.Do(req)
if err != nil {
return ec2RoleCredRespBody{}, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return ec2RoleCredRespBody{}, errors.New(resp.Status)
}
respCreds := ec2RoleCredRespBody{}
if err := json.NewDecoder(resp.Body).Decode(&respCreds); err != nil {
return ec2RoleCredRespBody{}, err
}
if respCreds.Code != "Success" {
// If an error code was returned something failed requesting the role.
return ec2RoleCredRespBody{}, errors.New(respCreds.Message)
}
return respCreds, nil
}
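
A hedged sketch of using this provider from an EC2 instance; no keys appear in code because they come from the instance's role (endpoint and region are placeholders):

package main

import (
	"log"

	minio "github.com/minio/minio-go"
	"github.com/minio/minio-go/pkg/credentials"
)

func main() {
	// An empty endpoint selects the default metadata service,
	// http://169.254.169.254, as defined above.
	s3Client, err := minio.NewWithCredentials("s3.amazonaws.com",
		credentials.NewIAM(""), true, "us-east-1")
	if err != nil {
		log.Fatalln(err)
	}
	_ = s3Client
}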

View File

@@ -0,0 +1,77 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
import "strings"
// SignatureType is type of Authorization requested for a given HTTP request.
type SignatureType int
// Different types of supported signatures - default is SignatureV4 or SignatureDefault.
const (
// SignatureDefault is always set to v4.
SignatureDefault SignatureType = iota
SignatureV4
SignatureV2
SignatureV4Streaming
SignatureAnonymous // Anonymous signature signifies, no signature.
)
// IsV2 - is signature SignatureV2?
func (s SignatureType) IsV2() bool {
return s == SignatureV2
}
// IsV4 - is signature SignatureV4?
func (s SignatureType) IsV4() bool {
return s == SignatureV4 || s == SignatureDefault
}
// IsStreamingV4 - is signature SignatureV4Streaming?
func (s SignatureType) IsStreamingV4() bool {
return s == SignatureV4Streaming
}
// IsAnonymous - is signature empty?
func (s SignatureType) IsAnonymous() bool {
return s == SignatureAnonymous
}
// String returns a humanized version of the signature type.
// The strings returned here match parseSignatureType case-insensitively.
func (s SignatureType) String() string {
if s.IsV2() {
return "S3v2"
} else if s.IsV4() {
return "S3v4"
} else if s.IsStreamingV4() {
return "S3v4Streaming"
}
return "Anonymous"
}
func parseSignatureType(str string) SignatureType {
if strings.EqualFold(str, "S3v4") {
return SignatureV4
} else if strings.EqualFold(str, "S3v2") {
return SignatureV2
} else if strings.EqualFold(str, "S3v4Streaming") {
return SignatureV4Streaming
}
return SignatureAnonymous
}

View File

@@ -0,0 +1,67 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package credentials
// A Static is a set of credentials which are set programmatically,
// and will never expire.
type Static struct {
Value
}
// NewStaticV2 returns a pointer to a new Credentials object
// wrapping a static credentials value provider, with the signature
// set to v2. If access and secret are not specified then, regardless
// of the signature type set, Retrieve will return the anonymous Value.
func NewStaticV2(id, secret, token string) *Credentials {
return NewStatic(id, secret, token, SignatureV2)
}
// NewStaticV4 is similar to NewStaticV2 with similar considerations.
func NewStaticV4(id, secret, token string) *Credentials {
return NewStatic(id, secret, token, SignatureV4)
}
// NewStatic returns a pointer to a new Credentials object
// wrapping a static credentials value provider.
func NewStatic(id, secret, token string, signerType SignatureType) *Credentials {
return New(&Static{
Value: Value{
AccessKeyID: id,
SecretAccessKey: secret,
SessionToken: token,
SignerType: signerType,
},
})
}
// Retrieve returns the static credentials.
func (s *Static) Retrieve() (Value, error) {
if s.AccessKeyID == "" || s.SecretAccessKey == "" {
// Anonymous is not an error
return Value{SignerType: SignatureAnonymous}, nil
}
return s.Value, nil
}
// IsExpired returns if the credentials are expired.
//
// For Static, the credentials never expire.
func (s *Static) IsExpired() bool {
return false
}
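
A minimal sketch of the anonymous fallback described above (in-package names; keys deliberately empty):

// creds := NewStaticV4("", "", "")   // no keys supplied
// v, _ := creds.Get()                // not an error
// v.SignerType == SignatureAnonymous // requests will be unsigned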

294
vendor/github.com/minio/minio-go/pkg/encrypt/cbc.go generated vendored Normal file
View File

@@ -0,0 +1,294 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package encrypt
import (
"bytes"
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"encoding/base64"
"errors"
"io"
)
// Crypt mode - encryption or decryption
type cryptMode int
const (
encryptMode cryptMode = iota
decryptMode
)
// CBCSecureMaterials encrypts/decrypts data using AES CBC algorithm
type CBCSecureMaterials struct {
// Data stream to encrypt/decrypt
stream io.Reader
// Last internal error
err error
// End of file reached
eof bool
// Holds initial data
srcBuf *bytes.Buffer
// Holds transformed data (encrypted or decrypted)
dstBuf *bytes.Buffer
// Key used to encrypt/decrypt the content key
encryptionKey Key
// Key that encrypts/decrypts the data
contentKey []byte
// Encrypted form of contentKey
cryptedKey []byte
// Initialization vector
iv []byte
// matDesc - currently unused
matDesc []byte
// Indicate if we are going to encrypt or decrypt
cryptMode cryptMode
// Helper that encrypts/decrypts data
blockMode cipher.BlockMode
}
// NewCBCSecureMaterials builds a new CBC crypter module with
// the specified encryption key (symmetric or asymmetric)
func NewCBCSecureMaterials(key Key) (*CBCSecureMaterials, error) {
if key == nil {
return nil, errors.New("Unable to recognize empty encryption properties")
}
return &CBCSecureMaterials{
srcBuf: bytes.NewBuffer([]byte{}),
dstBuf: bytes.NewBuffer([]byte{}),
encryptionKey: key,
matDesc: []byte("{}"),
}, nil
}
// Close closes the internal stream if it implements io.Closer.
func (s *CBCSecureMaterials) Close() error {
closer, ok := s.stream.(io.Closer)
if ok {
return closer.Close()
}
return nil
}
// SetupEncryptMode - tells CBC that we are going to encrypt data
func (s *CBCSecureMaterials) SetupEncryptMode(stream io.Reader) error {
// Set mode to encrypt
s.cryptMode = encryptMode
// Set underlying reader
s.stream = stream
s.eof = false
s.srcBuf.Reset()
s.dstBuf.Reset()
var err error
// Generate random content key
s.contentKey = make([]byte, aes.BlockSize*2)
if _, err := rand.Read(s.contentKey); err != nil {
return err
}
// Encrypt content key
s.cryptedKey, err = s.encryptionKey.Encrypt(s.contentKey)
if err != nil {
return err
}
// Generate random IV
s.iv = make([]byte, aes.BlockSize)
if _, err = rand.Read(s.iv); err != nil {
return err
}
// New cipher
encryptContentBlock, err := aes.NewCipher(s.contentKey)
if err != nil {
return err
}
s.blockMode = cipher.NewCBCEncrypter(encryptContentBlock, s.iv)
return nil
}
// SetupDecryptMode - tells CBC that we are going to decrypt data
func (s *CBCSecureMaterials) SetupDecryptMode(stream io.Reader, iv string, key string) error {
// Set mode to decrypt
s.cryptMode = decryptMode
// Set underlying reader
s.stream = stream
// Reset
s.eof = false
s.srcBuf.Reset()
s.dstBuf.Reset()
var err error
// Get IV
s.iv, err = base64.StdEncoding.DecodeString(iv)
if err != nil {
return err
}
// Get encrypted content key
s.cryptedKey, err = base64.StdEncoding.DecodeString(key)
if err != nil {
return err
}
// Decrypt content key
s.contentKey, err = s.encryptionKey.Decrypt(s.cryptedKey)
if err != nil {
return err
}
// New cipher
decryptContentBlock, err := aes.NewCipher(s.contentKey)
if err != nil {
return err
}
s.blockMode = cipher.NewCBCDecrypter(decryptContentBlock, s.iv)
return nil
}
// GetIV - return randomly generated IV (per S3 object), base64 encoded.
func (s *CBCSecureMaterials) GetIV() string {
return base64.StdEncoding.EncodeToString(s.iv)
}
// GetKey - return content encrypting key (cek) in encrypted form, base64 encoded.
func (s *CBCSecureMaterials) GetKey() string {
return base64.StdEncoding.EncodeToString(s.cryptedKey)
}
// GetDesc - user provided encryption material description in JSON (UTF8) format.
func (s *CBCSecureMaterials) GetDesc() string {
return string(s.matDesc)
}
// Fill buf with encrypted/decrypted data
func (s *CBCSecureMaterials) Read(buf []byte) (n int, err error) {
// Always fill buf from bufChunk at the end of this function
defer func() {
if s.err != nil {
n, err = 0, s.err
} else {
n, err = s.dstBuf.Read(buf)
}
}()
// Nothing more to transform once EOF has been reached.
if s.eof {
return
}
// Fill the destination buffer until it holds at least len(buf) bytes
for !s.eof && s.dstBuf.Len() < len(buf) {
srcPart := make([]byte, aes.BlockSize)
dstPart := make([]byte, aes.BlockSize)
// Fill src buffer
for s.srcBuf.Len() < aes.BlockSize*2 {
_, err = io.CopyN(s.srcBuf, s.stream, aes.BlockSize)
if err != nil {
break
}
}
// Quit immediately for errors other than io.EOF
if err != nil && err != io.EOF {
s.err = err
return
}
// Mark current encrypting/decrypting as finished
s.eof = (err == io.EOF)
if s.eof && s.cryptMode == encryptMode {
if srcPart, err = pkcs5Pad(s.srcBuf.Bytes(), aes.BlockSize); err != nil {
s.err = err
return
}
} else {
_, _ = s.srcBuf.Read(srcPart)
}
// Crypt srcPart content
for len(srcPart) > 0 {
// Crypt current part
s.blockMode.CryptBlocks(dstPart, srcPart[:aes.BlockSize])
// Unpad when this is the last part and we are decrypting
if s.eof && s.cryptMode == decryptMode {
dstPart, err = pkcs5Unpad(dstPart, aes.BlockSize)
if err != nil {
s.err = err
return
}
}
// Send crypted data to dstBuf
if _, wErr := s.dstBuf.Write(dstPart); wErr != nil {
s.err = wErr
return
}
// Move to the next part
srcPart = srcPart[aes.BlockSize:]
}
}
return
}
// pkcs5Unpad - unpads a buffer following the PKCS#5 algorithm.
func pkcs5Unpad(buf []byte, blockSize int) ([]byte, error) {
	bufLen := len(buf)
	if bufLen == 0 {
		return nil, errors.New("buffer is empty")
	}
	// A valid pad byte value is in [1, blockSize].
	pad := int(buf[bufLen-1])
	if pad == 0 || pad > bufLen || pad > blockSize {
		return nil, errors.New("invalid padding size")
	}
	return buf[:bufLen-pad], nil
}
// pkcs5Pad - pads a buffer following the PKCS#5 algorithm.
func pkcs5Pad(buf []byte, blockSize int) ([]byte, error) {
	bufLen := len(buf)
	pad := blockSize - (bufLen % blockSize)
	padText := bytes.Repeat([]byte{byte(pad)}, pad)
	return append(buf, padText...), nil
}
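// pkcs5RoundTripExample - editor's sketch, not part of the vendored file:
// demonstrates the pad/unpad pair above with the 16-byte AES block size.
func pkcs5RoundTripExample() ([]byte, error) {
	// "hello" is 5 bytes; padding appends eleven 0x0b bytes to reach 16.
	padded, err := pkcs5Pad([]byte("hello"), aes.BlockSize)
	if err != nil {
		return nil, err
	}
	// Unpadding strips the pad bytes again and restores []byte("hello").
	return pkcs5Unpad(padded, aes.BlockSize)
}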


@@ -0,0 +1,54 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// Package encrypt implements a generic interface to encrypt any stream of data.
// Currently this package implements two types of encryption:
// - Symmetric encryption using AES.
// - Asymmetric encryption using RSA.
package encrypt
import "io"
// Materials - provides generic interface to encrypt any stream of data.
type Materials interface {
// Closes the wrapped stream properly, initiated by the caller.
Close() error
// Returns encrypted/decrypted data, io.Reader compatible.
Read(b []byte) (int, error)
// Get randomly generated IV, base64 encoded.
GetIV() (iv string)
// Get content encrypting key (cek) in encrypted form, base64 encoded.
GetKey() (key string)
// Get user provided encryption material description in
// JSON (UTF8) format. This is not used, kept for future.
GetDesc() (desc string)
// Setup encrypt mode, further calls of Read() function
// will return the encrypted form of data streamed
// by the passed reader
SetupEncryptMode(stream io.Reader) error
	// Setup decrypt mode, further calls of Read() function
// will return the decrypted form of data streamed
// by the passed reader
SetupDecryptMode(stream io.Reader, iv string, key string) error
}
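// Editor's note - a hedged usage sketch (assumed, not part of the vendored
// file) of how a caller might drive Materials with the CBC implementation
// and symmetric key defined elsewhere in this package:
//
//	key := encrypt.NewSymmetricKey([]byte("0123456789abcdef0123456789abcdef")) // 32 bytes => AES-256
//	materials, err := encrypt.NewCBCSecureMaterials(key)
//	if err != nil {
//		// handle error
//	}
//	if err = materials.SetupEncryptMode(strings.NewReader("secret data")); err != nil {
//		// handle error
//	}
//	ciphertext, err := ioutil.ReadAll(materials) // reads the encrypted stream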

166
vendor/github.com/minio/minio-go/pkg/encrypt/keys.go generated vendored Normal file

@@ -0,0 +1,166 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package encrypt
import (
"crypto/aes"
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"errors"
)
// Key - generic interface to encrypt/decrypt a key.
// We use it to encrypt/decrypt the content key, which is the key
// that encrypts/decrypts the object data.
type Key interface {
	// Encrypt data using the configured encryption key
	Encrypt([]byte) ([]byte, error)
	// Decrypt data using the configured encryption key
	Decrypt([]byte) ([]byte, error)
}
// SymmetricKey - encrypts data with a symmetric master key
type SymmetricKey struct {
masterKey []byte
}
// Encrypt passed bytes
func (s *SymmetricKey) Encrypt(plain []byte) ([]byte, error) {
// Initialize an AES encryptor using a master key
keyBlock, err := aes.NewCipher(s.masterKey)
if err != nil {
return []byte{}, err
}
// Pad the key before encryption
plain, _ = pkcs5Pad(plain, aes.BlockSize)
encKey := []byte{}
encPart := make([]byte, aes.BlockSize)
// Encrypt the passed key by block
for {
if len(plain) < aes.BlockSize {
break
}
// Encrypt the passed key
keyBlock.Encrypt(encPart, plain[:aes.BlockSize])
// Add the encrypted block to the total encrypted key
encKey = append(encKey, encPart...)
// Pass to the next plain block
plain = plain[aes.BlockSize:]
}
return encKey, nil
}
// Decrypt passed bytes
func (s *SymmetricKey) Decrypt(cipher []byte) ([]byte, error) {
// Initialize AES decrypter
keyBlock, err := aes.NewCipher(s.masterKey)
if err != nil {
return nil, err
}
var plain []byte
plainPart := make([]byte, aes.BlockSize)
// Decrypt the encrypted data block by block
for {
if len(cipher) < aes.BlockSize {
break
}
keyBlock.Decrypt(plainPart, cipher[:aes.BlockSize])
// Add the decrypted block to the total result
plain = append(plain, plainPart...)
// Pass to the next cipher block
cipher = cipher[aes.BlockSize:]
}
// Unpad the resulted plain data
plain, err = pkcs5Unpad(plain, aes.BlockSize)
if err != nil {
return nil, err
}
return plain, nil
}
// NewSymmetricKey returns a new symmetric encrypter/decrypter backed by
// an AES master key.
func NewSymmetricKey(b []byte) *SymmetricKey {
return &SymmetricKey{masterKey: b}
}
// AsymmetricKey - struct which encrypts/decrypts data
// using RSA public/private certificates
type AsymmetricKey struct {
publicKey *rsa.PublicKey
privateKey *rsa.PrivateKey
}
// Encrypt data using public key
func (a *AsymmetricKey) Encrypt(plain []byte) ([]byte, error) {
cipher, err := rsa.EncryptPKCS1v15(rand.Reader, a.publicKey, plain)
if err != nil {
return nil, err
}
return cipher, nil
}
// Decrypt data using the private key
func (a *AsymmetricKey) Decrypt(cipher []byte) ([]byte, error) {
cipher, err := rsa.DecryptPKCS1v15(rand.Reader, a.privateKey, cipher)
if err != nil {
return nil, err
}
return cipher, nil
}
// NewAsymmetricKey - generates a crypto module able to encrypt/decrypt
// data using a pair of private and public keys
func NewAsymmetricKey(privData []byte, pubData []byte) (*AsymmetricKey, error) {
// Parse private key from passed data
priv, err := x509.ParsePKCS8PrivateKey(privData)
if err != nil {
return nil, err
}
privKey, ok := priv.(*rsa.PrivateKey)
if !ok {
return nil, errors.New("not a valid private key")
}
// Parse public key from passed data
pub, err := x509.ParsePKIXPublicKey(pubData)
if err != nil {
return nil, err
}
pubKey, ok := pub.(*rsa.PublicKey)
if !ok {
return nil, errors.New("not a valid public key")
}
// Associate the private key with the passed public key
privKey.PublicKey = *pubKey
return &AsymmetricKey{
publicKey: pubKey,
privateKey: privKey,
}, nil
}
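// symmetricRoundTripExample - editor's sketch, not part of the vendored
// file: encrypts and recovers a content key with a 32-byte (AES-256)
// master key; the key bytes are placeholders.
func symmetricRoundTripExample() ([]byte, error) {
	master := NewSymmetricKey([]byte("0123456789abcdef0123456789abcdef"))
	// Encrypt pads the plaintext to a block multiple, then encrypts it
	// block by block with the master key.
	encKey, err := master.Encrypt([]byte("content-key"))
	if err != nil {
		return nil, err
	}
	// Decrypt reverses the process and strips the PKCS#5 padding.
	return master.Decrypt(encKey)
}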


@@ -0,0 +1,116 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package policy
import "github.com/minio/minio-go/pkg/set"
// ConditionKeyMap - map of policy condition key and value.
type ConditionKeyMap map[string]set.StringSet
// Add - adds key and value. The value is merged into the existing set if the key already exists.
func (ckm ConditionKeyMap) Add(key string, value set.StringSet) {
if v, ok := ckm[key]; ok {
ckm[key] = v.Union(value)
} else {
ckm[key] = set.CopyStringSet(value)
}
}
// Remove - removes the value of the given key. If the key's value set becomes empty after removal, the key is also removed.
func (ckm ConditionKeyMap) Remove(key string, value set.StringSet) {
if v, ok := ckm[key]; ok {
if value != nil {
ckm[key] = v.Difference(value)
}
if ckm[key].IsEmpty() {
delete(ckm, key)
}
}
}
// RemoveKey - removes key and its value.
func (ckm ConditionKeyMap) RemoveKey(key string) {
if _, ok := ckm[key]; ok {
delete(ckm, key)
}
}
// CopyConditionKeyMap - returns new copy of given ConditionKeyMap.
func CopyConditionKeyMap(condKeyMap ConditionKeyMap) ConditionKeyMap {
out := make(ConditionKeyMap)
for k, v := range condKeyMap {
out[k] = set.CopyStringSet(v)
}
return out
}
// mergeConditionKeyMap - returns a new ConditionKeyMap containing the merged keys/values of the two given ConditionKeyMaps.
func mergeConditionKeyMap(condKeyMap1 ConditionKeyMap, condKeyMap2 ConditionKeyMap) ConditionKeyMap {
out := CopyConditionKeyMap(condKeyMap1)
for k, v := range condKeyMap2 {
if ev, ok := out[k]; ok {
out[k] = ev.Union(v)
} else {
out[k] = set.CopyStringSet(v)
}
}
return out
}
// ConditionMap - map of condition and conditional values.
type ConditionMap map[string]ConditionKeyMap
// Add - adds condition key and condition value. The value is appended if key already exists.
func (cond ConditionMap) Add(condKey string, condKeyMap ConditionKeyMap) {
if v, ok := cond[condKey]; ok {
cond[condKey] = mergeConditionKeyMap(v, condKeyMap)
} else {
cond[condKey] = CopyConditionKeyMap(condKeyMap)
}
}
// Remove - removes condition key and its value.
func (cond ConditionMap) Remove(condKey string) {
if _, ok := cond[condKey]; ok {
delete(cond, condKey)
}
}
// mergeConditionMap - returns a new ConditionMap containing the merged keys/values of the two given ConditionMaps.
func mergeConditionMap(condMap1 ConditionMap, condMap2 ConditionMap) ConditionMap {
out := make(ConditionMap)
for k, v := range condMap1 {
out[k] = CopyConditionKeyMap(v)
}
for k, v := range condMap2 {
if ev, ok := out[k]; ok {
out[k] = mergeConditionKeyMap(ev, v)
} else {
out[k] = CopyConditionKeyMap(v)
}
}
return out
}
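// prefixConditionExample - editor's sketch, not part of the vendored file:
// builds the condition used further below to restrict read-only listing to
// one prefix, i.e. {"StringEquals": {"s3:prefix": ["uploads/"]}}.
func prefixConditionExample() ConditionMap {
	ckm := make(ConditionKeyMap)
	ckm.Add("s3:prefix", set.CreateStringSet("uploads/"))
	cond := make(ConditionMap)
	cond.Add("StringEquals", ckm)
	return cond
}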


@@ -0,0 +1,635 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package policy
import (
"reflect"
"strings"
"github.com/minio/minio-go/pkg/set"
)
// BucketPolicy - Bucket level policy.
type BucketPolicy string
// Different types of Policies currently supported for buckets.
const (
BucketPolicyNone BucketPolicy = "none"
BucketPolicyReadOnly = "readonly"
BucketPolicyReadWrite = "readwrite"
BucketPolicyWriteOnly = "writeonly"
)
// IsValidBucketPolicy - returns true if policy is valid and supported, false otherwise.
func (p BucketPolicy) IsValidBucketPolicy() bool {
switch p {
case BucketPolicyNone, BucketPolicyReadOnly, BucketPolicyReadWrite, BucketPolicyWriteOnly:
return true
}
return false
}
// Resource prefix for all aws resources.
const awsResourcePrefix = "arn:aws:s3:::"
// Common bucket actions for both read and write policies.
var commonBucketActions = set.CreateStringSet("s3:GetBucketLocation")
// Read only bucket actions.
var readOnlyBucketActions = set.CreateStringSet("s3:ListBucket")
// Write only bucket actions.
var writeOnlyBucketActions = set.CreateStringSet("s3:ListBucketMultipartUploads")
// Read only object actions.
var readOnlyObjectActions = set.CreateStringSet("s3:GetObject")
// Write only object actions.
var writeOnlyObjectActions = set.CreateStringSet("s3:AbortMultipartUpload", "s3:DeleteObject", "s3:ListMultipartUploadParts", "s3:PutObject")
// Read and write object actions.
var readWriteObjectActions = readOnlyObjectActions.Union(writeOnlyObjectActions)
// All valid bucket and object actions.
var validActions = commonBucketActions.
Union(readOnlyBucketActions).
Union(writeOnlyBucketActions).
Union(readOnlyObjectActions).
Union(writeOnlyObjectActions)
var startsWithFunc = func(resource string, resourcePrefix string) bool {
return strings.HasPrefix(resource, resourcePrefix)
}
// User - canonical users list.
type User struct {
AWS set.StringSet `json:"AWS,omitempty"`
CanonicalUser set.StringSet `json:"CanonicalUser,omitempty"`
}
// Statement - minio policy statement
type Statement struct {
Actions set.StringSet `json:"Action"`
Conditions ConditionMap `json:"Condition,omitempty"`
Effect string
Principal User `json:"Principal"`
Resources set.StringSet `json:"Resource"`
Sid string
}
// BucketAccessPolicy - minio policy collection
type BucketAccessPolicy struct {
Version string // date in YYYY-MM-DD format
Statements []Statement `json:"Statement"`
}
// isValidStatement - returns whether given statement is valid to process for given bucket name.
func isValidStatement(statement Statement, bucketName string) bool {
if statement.Actions.Intersection(validActions).IsEmpty() {
return false
}
if statement.Effect != "Allow" {
return false
}
if statement.Principal.AWS == nil || !statement.Principal.AWS.Contains("*") {
return false
}
bucketResource := awsResourcePrefix + bucketName
if statement.Resources.Contains(bucketResource) {
return true
}
if statement.Resources.FuncMatch(startsWithFunc, bucketResource+"/").IsEmpty() {
return false
}
return true
}
// Returns new statements with bucket actions for given policy.
func newBucketStatement(policy BucketPolicy, bucketName string, prefix string) (statements []Statement) {
statements = []Statement{}
if policy == BucketPolicyNone || bucketName == "" {
return statements
}
bucketResource := set.CreateStringSet(awsResourcePrefix + bucketName)
statement := Statement{
Actions: commonBucketActions,
Effect: "Allow",
Principal: User{AWS: set.CreateStringSet("*")},
Resources: bucketResource,
Sid: "",
}
statements = append(statements, statement)
if policy == BucketPolicyReadOnly || policy == BucketPolicyReadWrite {
statement = Statement{
Actions: readOnlyBucketActions,
Effect: "Allow",
Principal: User{AWS: set.CreateStringSet("*")},
Resources: bucketResource,
Sid: "",
}
if prefix != "" {
condKeyMap := make(ConditionKeyMap)
condKeyMap.Add("s3:prefix", set.CreateStringSet(prefix))
condMap := make(ConditionMap)
condMap.Add("StringEquals", condKeyMap)
statement.Conditions = condMap
}
statements = append(statements, statement)
}
if policy == BucketPolicyWriteOnly || policy == BucketPolicyReadWrite {
statement = Statement{
Actions: writeOnlyBucketActions,
Effect: "Allow",
Principal: User{AWS: set.CreateStringSet("*")},
Resources: bucketResource,
Sid: "",
}
statements = append(statements, statement)
}
return statements
}
// Returns new statements containing object actions for given policy.
func newObjectStatement(policy BucketPolicy, bucketName string, prefix string) (statements []Statement) {
statements = []Statement{}
if policy == BucketPolicyNone || bucketName == "" {
return statements
}
statement := Statement{
Effect: "Allow",
Principal: User{AWS: set.CreateStringSet("*")},
Resources: set.CreateStringSet(awsResourcePrefix + bucketName + "/" + prefix + "*"),
Sid: "",
}
if policy == BucketPolicyReadOnly {
statement.Actions = readOnlyObjectActions
} else if policy == BucketPolicyWriteOnly {
statement.Actions = writeOnlyObjectActions
} else if policy == BucketPolicyReadWrite {
statement.Actions = readWriteObjectActions
}
statements = append(statements, statement)
return statements
}
// Returns new statements for given policy, bucket and prefix.
func newStatements(policy BucketPolicy, bucketName string, prefix string) (statements []Statement) {
statements = []Statement{}
ns := newBucketStatement(policy, bucketName, prefix)
statements = append(statements, ns...)
ns = newObjectStatement(policy, bucketName, prefix)
statements = append(statements, ns...)
return statements
}
// Returns whether the read-only and write-only object actions are in use by statements for object resources other than the given prefix.
func getInUsePolicy(statements []Statement, bucketName string, prefix string) (readOnlyInUse, writeOnlyInUse bool) {
resourcePrefix := awsResourcePrefix + bucketName + "/"
objectResource := awsResourcePrefix + bucketName + "/" + prefix + "*"
for _, s := range statements {
if !s.Resources.Contains(objectResource) && !s.Resources.FuncMatch(startsWithFunc, resourcePrefix).IsEmpty() {
if s.Actions.Intersection(readOnlyObjectActions).Equals(readOnlyObjectActions) {
readOnlyInUse = true
}
if s.Actions.Intersection(writeOnlyObjectActions).Equals(writeOnlyObjectActions) {
writeOnlyInUse = true
}
}
if readOnlyInUse && writeOnlyInUse {
break
}
}
return readOnlyInUse, writeOnlyInUse
}
// Removes object actions in given statement.
func removeObjectActions(statement Statement, objectResource string) Statement {
if statement.Conditions == nil {
if len(statement.Resources) > 1 {
statement.Resources.Remove(objectResource)
} else {
statement.Actions = statement.Actions.Difference(readOnlyObjectActions)
statement.Actions = statement.Actions.Difference(writeOnlyObjectActions)
}
}
return statement
}
// Removes bucket actions for given policy in given statement.
func removeBucketActions(statement Statement, prefix string, bucketResource string, readOnlyInUse, writeOnlyInUse bool) Statement {
removeReadOnly := func() {
if !statement.Actions.Intersection(readOnlyBucketActions).Equals(readOnlyBucketActions) {
return
}
if statement.Conditions == nil {
statement.Actions = statement.Actions.Difference(readOnlyBucketActions)
return
}
if prefix != "" {
stringEqualsValue := statement.Conditions["StringEquals"]
values := set.NewStringSet()
if stringEqualsValue != nil {
values = stringEqualsValue["s3:prefix"]
if values == nil {
values = set.NewStringSet()
}
}
values.Remove(prefix)
if stringEqualsValue != nil {
if values.IsEmpty() {
delete(stringEqualsValue, "s3:prefix")
}
if len(stringEqualsValue) == 0 {
delete(statement.Conditions, "StringEquals")
}
}
if len(statement.Conditions) == 0 {
statement.Conditions = nil
statement.Actions = statement.Actions.Difference(readOnlyBucketActions)
}
}
}
removeWriteOnly := func() {
if statement.Conditions == nil {
statement.Actions = statement.Actions.Difference(writeOnlyBucketActions)
}
}
if len(statement.Resources) > 1 {
statement.Resources.Remove(bucketResource)
} else {
if !readOnlyInUse {
removeReadOnly()
}
if !writeOnlyInUse {
removeWriteOnly()
}
}
return statement
}
// Returns statements containing removed actions/statements for given
// policy, bucket name and prefix.
func removeStatements(statements []Statement, bucketName string, prefix string) []Statement {
bucketResource := awsResourcePrefix + bucketName
objectResource := awsResourcePrefix + bucketName + "/" + prefix + "*"
readOnlyInUse, writeOnlyInUse := getInUsePolicy(statements, bucketName, prefix)
out := []Statement{}
readOnlyBucketStatements := []Statement{}
s3PrefixValues := set.NewStringSet()
for _, statement := range statements {
if !isValidStatement(statement, bucketName) {
out = append(out, statement)
continue
}
if statement.Resources.Contains(bucketResource) {
if statement.Conditions != nil {
statement = removeBucketActions(statement, prefix, bucketResource, false, false)
} else {
statement = removeBucketActions(statement, prefix, bucketResource, readOnlyInUse, writeOnlyInUse)
}
} else if statement.Resources.Contains(objectResource) {
statement = removeObjectActions(statement, objectResource)
}
if !statement.Actions.IsEmpty() {
if statement.Resources.Contains(bucketResource) &&
statement.Actions.Intersection(readOnlyBucketActions).Equals(readOnlyBucketActions) &&
statement.Effect == "Allow" &&
statement.Principal.AWS.Contains("*") {
if statement.Conditions != nil {
stringEqualsValue := statement.Conditions["StringEquals"]
values := set.NewStringSet()
if stringEqualsValue != nil {
values = stringEqualsValue["s3:prefix"]
if values == nil {
values = set.NewStringSet()
}
}
s3PrefixValues = s3PrefixValues.Union(values.ApplyFunc(func(v string) string {
return bucketResource + "/" + v + "*"
}))
} else if !s3PrefixValues.IsEmpty() {
readOnlyBucketStatements = append(readOnlyBucketStatements, statement)
continue
}
}
out = append(out, statement)
}
}
skipBucketStatement := true
resourcePrefix := awsResourcePrefix + bucketName + "/"
for _, statement := range out {
if !statement.Resources.FuncMatch(startsWithFunc, resourcePrefix).IsEmpty() &&
s3PrefixValues.Intersection(statement.Resources).IsEmpty() {
skipBucketStatement = false
break
}
}
for _, statement := range readOnlyBucketStatements {
if skipBucketStatement &&
statement.Resources.Contains(bucketResource) &&
statement.Effect == "Allow" &&
statement.Principal.AWS.Contains("*") &&
statement.Conditions == nil {
continue
}
out = append(out, statement)
}
if len(out) == 1 {
statement := out[0]
if statement.Resources.Contains(bucketResource) &&
statement.Actions.Intersection(commonBucketActions).Equals(commonBucketActions) &&
statement.Effect == "Allow" &&
statement.Principal.AWS.Contains("*") &&
statement.Conditions == nil {
out = []Statement{}
}
}
return out
}
// Appends the given statement into the statement list, keeping statements unique.
// - If the statement already exists in the statement list, it is ignored.
// - If the statement exists with different conditions, the conditions are merged.
// - Otherwise the statement is appended to the statement list.
func appendStatement(statements []Statement, statement Statement) []Statement {
for i, s := range statements {
if s.Actions.Equals(statement.Actions) &&
s.Effect == statement.Effect &&
s.Principal.AWS.Equals(statement.Principal.AWS) &&
reflect.DeepEqual(s.Conditions, statement.Conditions) {
statements[i].Resources = s.Resources.Union(statement.Resources)
return statements
} else if s.Resources.Equals(statement.Resources) &&
s.Effect == statement.Effect &&
s.Principal.AWS.Equals(statement.Principal.AWS) &&
reflect.DeepEqual(s.Conditions, statement.Conditions) {
statements[i].Actions = s.Actions.Union(statement.Actions)
return statements
}
if s.Resources.Intersection(statement.Resources).Equals(statement.Resources) &&
s.Actions.Intersection(statement.Actions).Equals(statement.Actions) &&
s.Effect == statement.Effect &&
s.Principal.AWS.Intersection(statement.Principal.AWS).Equals(statement.Principal.AWS) {
if reflect.DeepEqual(s.Conditions, statement.Conditions) {
return statements
}
if s.Conditions != nil && statement.Conditions != nil {
if s.Resources.Equals(statement.Resources) {
statements[i].Conditions = mergeConditionMap(s.Conditions, statement.Conditions)
return statements
}
}
}
}
if !(statement.Actions.IsEmpty() && statement.Resources.IsEmpty()) {
return append(statements, statement)
}
return statements
}
// Appends two statement lists.
func appendStatements(statements []Statement, appendStatements []Statement) []Statement {
for _, s := range appendStatements {
statements = appendStatement(statements, s)
}
return statements
}
// Returns policy of given bucket statement.
func getBucketPolicy(statement Statement, prefix string) (commonFound, readOnly, writeOnly bool) {
if !(statement.Effect == "Allow" && statement.Principal.AWS.Contains("*")) {
return commonFound, readOnly, writeOnly
}
if statement.Actions.Intersection(commonBucketActions).Equals(commonBucketActions) &&
statement.Conditions == nil {
commonFound = true
}
if statement.Actions.Intersection(writeOnlyBucketActions).Equals(writeOnlyBucketActions) &&
statement.Conditions == nil {
writeOnly = true
}
if statement.Actions.Intersection(readOnlyBucketActions).Equals(readOnlyBucketActions) {
if prefix != "" && statement.Conditions != nil {
if stringEqualsValue, ok := statement.Conditions["StringEquals"]; ok {
if s3PrefixValues, ok := stringEqualsValue["s3:prefix"]; ok {
if s3PrefixValues.Contains(prefix) {
readOnly = true
}
}
} else if stringNotEqualsValue, ok := statement.Conditions["StringNotEquals"]; ok {
if s3PrefixValues, ok := stringNotEqualsValue["s3:prefix"]; ok {
if !s3PrefixValues.Contains(prefix) {
readOnly = true
}
}
}
} else if prefix == "" && statement.Conditions == nil {
readOnly = true
} else if prefix != "" && statement.Conditions == nil {
readOnly = true
}
}
return commonFound, readOnly, writeOnly
}
// Returns policy of given object statement.
func getObjectPolicy(statement Statement) (readOnly bool, writeOnly bool) {
if statement.Effect == "Allow" &&
statement.Principal.AWS.Contains("*") &&
statement.Conditions == nil {
if statement.Actions.Intersection(readOnlyObjectActions).Equals(readOnlyObjectActions) {
readOnly = true
}
if statement.Actions.Intersection(writeOnlyObjectActions).Equals(writeOnlyObjectActions) {
writeOnly = true
}
}
return readOnly, writeOnly
}
// GetPolicy - Returns policy of given bucket name, prefix in given statements.
func GetPolicy(statements []Statement, bucketName string, prefix string) BucketPolicy {
bucketResource := awsResourcePrefix + bucketName
objectResource := awsResourcePrefix + bucketName + "/" + prefix + "*"
bucketCommonFound := false
bucketReadOnly := false
bucketWriteOnly := false
matchedResource := ""
objReadOnly := false
objWriteOnly := false
for _, s := range statements {
matchedObjResources := set.NewStringSet()
if s.Resources.Contains(objectResource) {
matchedObjResources.Add(objectResource)
} else {
matchedObjResources = s.Resources.FuncMatch(resourceMatch, objectResource)
}
if !matchedObjResources.IsEmpty() {
readOnly, writeOnly := getObjectPolicy(s)
for resource := range matchedObjResources {
if len(matchedResource) < len(resource) {
objReadOnly = readOnly
objWriteOnly = writeOnly
matchedResource = resource
} else if len(matchedResource) == len(resource) {
objReadOnly = objReadOnly || readOnly
objWriteOnly = objWriteOnly || writeOnly
matchedResource = resource
}
}
} else if s.Resources.Contains(bucketResource) {
commonFound, readOnly, writeOnly := getBucketPolicy(s, prefix)
bucketCommonFound = bucketCommonFound || commonFound
bucketReadOnly = bucketReadOnly || readOnly
bucketWriteOnly = bucketWriteOnly || writeOnly
}
}
policy := BucketPolicyNone
if bucketCommonFound {
if bucketReadOnly && bucketWriteOnly && objReadOnly && objWriteOnly {
policy = BucketPolicyReadWrite
} else if bucketReadOnly && objReadOnly {
policy = BucketPolicyReadOnly
} else if bucketWriteOnly && objWriteOnly {
policy = BucketPolicyWriteOnly
}
}
return policy
}
// GetPolicies - returns a map of policy rules for the objects of the given bucket name in the given statements.
func GetPolicies(statements []Statement, bucketName string) map[string]BucketPolicy {
policyRules := map[string]BucketPolicy{}
objResources := set.NewStringSet()
// Search all resources related to objects policy
for _, s := range statements {
for r := range s.Resources {
if strings.HasPrefix(r, awsResourcePrefix+bucketName+"/") {
objResources.Add(r)
}
}
}
	// Treat each policy resource as an actual object and fetch its policy
	for r := range objResources {
		// Strip any trailing '*', remembering it for the rule key
asterisk := ""
if strings.HasSuffix(r, "*") {
r = r[:len(r)-1]
asterisk = "*"
}
objectPath := r[len(awsResourcePrefix+bucketName)+1:]
p := GetPolicy(statements, bucketName, objectPath)
policyRules[bucketName+"/"+objectPath+asterisk] = p
}
return policyRules
}
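// Editor's note - an assumed example of the mapping GetPolicies produces:
// for statements that grant read-only access under the hypothetical
// "uploads/" prefix of bucket "mybucket", the result would resemble
//
//	map[string]BucketPolicy{"mybucket/uploads/*": BucketPolicyReadOnly}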
// SetPolicy - returns new statements with the policy for the given bucket name and prefix applied.
func SetPolicy(statements []Statement, policy BucketPolicy, bucketName string, prefix string) []Statement {
	out := removeStatements(statements, bucketName, prefix)
	ns := newStatements(policy, bucketName, prefix)
	return appendStatements(out, ns)
}
// resourceMatch matches wildcards in 'pattern' against 'resource'.
func resourceMatch(pattern, resource string) bool {
if pattern == "" {
return resource == pattern
}
if pattern == "*" {
return true
}
parts := strings.Split(pattern, "*")
if len(parts) == 1 {
return resource == pattern
}
tGlob := strings.HasSuffix(pattern, "*")
end := len(parts) - 1
if !strings.HasPrefix(resource, parts[0]) {
return false
}
for i := 1; i < end; i++ {
if !strings.Contains(resource, parts[i]) {
return false
}
idx := strings.Index(resource, parts[i]) + len(parts[i])
resource = resource[idx:]
}
return tGlob || strings.HasSuffix(resource, parts[end])
}
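// Editor's note - assumed illustrative cases for the matcher above:
//
//	resourceMatch("arn:aws:s3:::bucket/*", "arn:aws:s3:::bucket/a/b") // true: prefix match plus trailing glob
//	resourceMatch("arn:aws:s3:::bucket/a*", "arn:aws:s3:::bucket/b")  // false: prefix mismatch
//	resourceMatch("*", "anything")                                    // true: bare wildcard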


@@ -0,0 +1,306 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package s3signer
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"net/http"
"strconv"
"strings"
"time"
)
// Reference for constants used below -
// http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html#example-signature-calculations-streaming
const (
streamingSignAlgorithm = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
streamingPayloadHdr = "AWS4-HMAC-SHA256-PAYLOAD"
emptySHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
payloadChunkSize = 64 * 1024
chunkSigConstLen = 17 // ";chunk-signature="
signatureStrLen = 64 // e.g. "f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2"
crlfLen = 2 // CRLF
)
// Request headers to be ignored while calculating seed signature for
// a request.
var ignoredStreamingHeaders = map[string]bool{
"Authorization": true,
"User-Agent": true,
"Content-Type": true,
}
// getSignedChunkLength - calculates the length of chunk metadata
func getSignedChunkLength(chunkDataSize int64) int64 {
return int64(len(fmt.Sprintf("%x", chunkDataSize))) +
chunkSigConstLen +
signatureStrLen +
crlfLen +
chunkDataSize +
crlfLen
}
// getStreamLength - calculates the length of the overall stream (data + metadata)
func getStreamLength(dataLen, chunkSize int64) int64 {
if dataLen <= 0 {
return 0
}
chunksCount := int64(dataLen / chunkSize)
remainingBytes := int64(dataLen % chunkSize)
streamLen := int64(0)
streamLen += chunksCount * getSignedChunkLength(chunkSize)
if remainingBytes > 0 {
streamLen += getSignedChunkLength(remainingBytes)
}
streamLen += getSignedChunkLength(0)
return streamLen
}
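// Editor's note - a worked example of the two helpers above (editor's
// arithmetic, using the 64 KiB payloadChunkSize): for dataLen = 133120
// bytes (130 KiB) the stream holds two full 65536-byte chunks, one
// 2048-byte chunk, and the terminating zero-byte chunk. A 65536-byte chunk
// occupies 65626 bytes on the wire: 5 bytes of hex length ("10000"), 17
// bytes of ";chunk-signature=", a 64-byte signature, two CRLFs (4 bytes),
// and the 65536 data bytes. With the 2136-byte middle chunk and the
// 86-byte final chunk, getStreamLength returns 133474.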
// buildChunkStringToSign - returns the string to sign given chunk data
// and previous signature.
func buildChunkStringToSign(t time.Time, region, previousSig string, chunkData []byte) string {
stringToSignParts := []string{
streamingPayloadHdr,
t.Format(iso8601DateFormat),
getScope(region, t),
previousSig,
emptySHA256,
hex.EncodeToString(sum256(chunkData)),
}
return strings.Join(stringToSignParts, "\n")
}
// prepareStreamingRequest - prepares a request with appropriate
// headers before computing the seed signature.
func prepareStreamingRequest(req *http.Request, sessionToken string, dataLen int64, timestamp time.Time) {
// Set x-amz-content-sha256 header.
req.Header.Set("X-Amz-Content-Sha256", streamingSignAlgorithm)
if sessionToken != "" {
req.Header.Set("X-Amz-Security-Token", sessionToken)
}
req.Header.Set("X-Amz-Date", timestamp.Format(iso8601DateFormat))
// Set content length with streaming signature for each chunk included.
req.ContentLength = getStreamLength(dataLen, int64(payloadChunkSize))
req.Header.Set("x-amz-decoded-content-length", strconv.FormatInt(dataLen, 10))
}
// buildChunkHeader - returns the chunk header.
// e.g. string(IntHexBase(chunk-size)) + ";chunk-signature=" + signature + \r\n + chunk-data + \r\n
func buildChunkHeader(chunkLen int64, signature string) []byte {
return []byte(strconv.FormatInt(chunkLen, 16) + ";chunk-signature=" + signature + "\r\n")
}
// buildChunkSignature - returns chunk signature for a given chunk and previous signature.
func buildChunkSignature(chunkData []byte, reqTime time.Time, region,
previousSignature, secretAccessKey string) string {
chunkStringToSign := buildChunkStringToSign(reqTime, region,
previousSignature, chunkData)
signingKey := getSigningKey(secretAccessKey, region, reqTime)
return getSignature(signingKey, chunkStringToSign)
}
// setSeedSignature - computes and sets the seed signature for a given request.
func (s *StreamingReader) setSeedSignature(req *http.Request) {
// Get canonical request
canonicalRequest := getCanonicalRequest(*req, ignoredStreamingHeaders)
// Get string to sign from canonical request.
stringToSign := getStringToSignV4(s.reqTime, s.region, canonicalRequest)
signingKey := getSigningKey(s.secretAccessKey, s.region, s.reqTime)
// Calculate signature.
s.seedSignature = getSignature(signingKey, stringToSign)
}
// StreamingReader implements the chunked upload signature as a reader on
// top of req.Body's io.ReadCloser, emitting "chunk header;chunk data;..."
// repeatedly until the stream is exhausted.
type StreamingReader struct {
accessKeyID string
secretAccessKey string
sessionToken string
region string
prevSignature string
seedSignature string
contentLen int64 // Content-Length from req header
baseReadCloser io.ReadCloser // underlying io.Reader
bytesRead int64 // bytes read from underlying io.Reader
buf bytes.Buffer // holds signed chunk
chunkBuf []byte // holds raw data read from req Body
chunkBufLen int // no. of bytes read so far into chunkBuf
done bool // done reading the underlying reader to EOF
reqTime time.Time
chunkNum int
totalChunks int
lastChunkSize int
}
// signChunk - signs a chunk read from s.baseReader of chunkLen size.
func (s *StreamingReader) signChunk(chunkLen int) {
// Compute chunk signature for next header
signature := buildChunkSignature(s.chunkBuf[:chunkLen], s.reqTime,
s.region, s.prevSignature, s.secretAccessKey)
// For next chunk signature computation
s.prevSignature = signature
// Write chunk header into streaming buffer
chunkHdr := buildChunkHeader(int64(chunkLen), signature)
s.buf.Write(chunkHdr)
// Write chunk data into streaming buffer
s.buf.Write(s.chunkBuf[:chunkLen])
// Write the chunk trailer.
s.buf.Write([]byte("\r\n"))
// Reset chunkBufLen for next chunk read.
s.chunkBufLen = 0
s.chunkNum++
}
// setStreamingAuthHeader - builds and sets authorization header value
// for streaming signature.
func (s *StreamingReader) setStreamingAuthHeader(req *http.Request) {
credential := GetCredential(s.accessKeyID, s.region, s.reqTime)
authParts := []string{
signV4Algorithm + " Credential=" + credential,
"SignedHeaders=" + getSignedHeaders(*req, ignoredStreamingHeaders),
"Signature=" + s.seedSignature,
}
// Set authorization header.
auth := strings.Join(authParts, ",")
req.Header.Set("Authorization", auth)
}
// StreamingSignV4 - provides chunked upload signatureV4 support by
// implementing io.Reader.
func StreamingSignV4(req *http.Request, accessKeyID, secretAccessKey, sessionToken,
region string, dataLen int64, reqTime time.Time) *http.Request {
// Set headers needed for streaming signature.
prepareStreamingRequest(req, sessionToken, dataLen, reqTime)
if req.Body == nil {
req.Body = ioutil.NopCloser(bytes.NewReader([]byte("")))
}
stReader := &StreamingReader{
baseReadCloser: req.Body,
accessKeyID: accessKeyID,
secretAccessKey: secretAccessKey,
sessionToken: sessionToken,
region: region,
reqTime: reqTime,
chunkBuf: make([]byte, payloadChunkSize),
contentLen: dataLen,
chunkNum: 1,
totalChunks: int((dataLen+payloadChunkSize-1)/payloadChunkSize) + 1,
lastChunkSize: int(dataLen % payloadChunkSize),
}
// Add the request headers required for chunk upload signing.
// Compute the seed signature.
stReader.setSeedSignature(req)
// Set the authorization header with the seed signature.
stReader.setStreamingAuthHeader(req)
// Set seed signature as prevSignature for subsequent
// streaming signing process.
stReader.prevSignature = stReader.seedSignature
req.Body = stReader
return req
}
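// streamingSignExample - editor's sketch, not part of the vendored file:
// wraps a PUT request body in the streaming signer; the endpoint and
// credentials are placeholders.
func streamingSignExample(payload []byte) (*http.Request, error) {
	req, err := http.NewRequest("PUT", "https://s3.amazonaws.com/bucket/object", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	// After this call req.Body yields signed chunks and req.ContentLength
	// accounts for the per-chunk framing.
	req = StreamingSignV4(req, "ACCESS-KEY", "SECRET-KEY", "", "us-east-1",
		int64(len(payload)), time.Now().UTC())
	return req, nil
}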
// Read - this method performs the chunk upload signature, providing an
// io.Reader interface.
func (s *StreamingReader) Read(buf []byte) (int, error) {
switch {
// After the last chunk is read from underlying reader, we
// never re-fill s.buf.
case s.done:
	// s.buf will be (re-)filled with the next chunk when it holds
	// fewer bytes than asked for.
case s.buf.Len() < len(buf):
s.chunkBufLen = 0
for {
n1, err := s.baseReadCloser.Read(s.chunkBuf[s.chunkBufLen:])
// Usually we validate `err` first, but in this case
// we are validating n > 0 for the following reasons.
//
// 1. n > 0, err is one of io.EOF, nil (near end of stream)
// A Reader returning a non-zero number of bytes at the end
// of the input stream may return either err == EOF or err == nil
//
// 2. n == 0, err is io.EOF (actual end of stream)
//
// Callers should always process the n > 0 bytes returned
// before considering the error err.
if n1 > 0 {
s.chunkBufLen += n1
s.bytesRead += int64(n1)
if s.chunkBufLen == payloadChunkSize ||
(s.chunkNum == s.totalChunks-1 &&
s.chunkBufLen == s.lastChunkSize) {
// Sign the chunk and write it to s.buf.
s.signChunk(s.chunkBufLen)
break
}
}
if err != nil {
if err == io.EOF {
// No more data left in baseReader - last chunk.
// Done reading the last chunk from baseReader.
s.done = true
					// bytes read from baseReader differ from the
					// content length provided.
if s.bytesRead != s.contentLen {
return 0, io.ErrUnexpectedEOF
}
// Sign the chunk and write it to s.buf.
s.signChunk(0)
break
}
return 0, err
}
}
}
return s.buf.Read(buf)
}
// Close - this method makes underlying io.ReadCloser's Close method available.
func (s *StreamingReader) Close() error {
return s.baseReadCloser.Close()
}


@@ -0,0 +1,320 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package s3signer
import (
"bytes"
"crypto/hmac"
"crypto/sha1"
"encoding/base64"
"fmt"
"net/http"
"net/url"
"path/filepath"
"sort"
"strconv"
"strings"
"time"
"github.com/minio/minio-go/pkg/s3utils"
)
// Signature and API related constants.
const (
signV2Algorithm = "AWS"
)
// Encode input URL path to URL encoded path.
func encodeURL2Path(u *url.URL) (path string) {
// Encode URL path.
if isS3, _ := filepath.Match("*.s3*.amazonaws.com", u.Host); isS3 {
bucketName := u.Host[:strings.LastIndex(u.Host, ".s3")]
path = "/" + bucketName
path += u.Path
path = s3utils.EncodePath(path)
return
}
if strings.HasSuffix(u.Host, ".storage.googleapis.com") {
path = "/" + strings.TrimSuffix(u.Host, ".storage.googleapis.com")
path += u.Path
path = s3utils.EncodePath(path)
return
}
path = s3utils.EncodePath(u.Path)
return
}
// PreSignV2 - presigns the request in the following style.
// https://${S3_BUCKET}.s3.amazonaws.com/${S3_OBJECT}?AWSAccessKeyId=${S3_ACCESS_KEY}&Expires=${TIMESTAMP}&Signature=${SIGNATURE}.
func PreSignV2(req http.Request, accessKeyID, secretAccessKey string, expires int64) *http.Request {
// Presign is not needed for anonymous credentials.
if accessKeyID == "" || secretAccessKey == "" {
return &req
}
d := time.Now().UTC()
// Find epoch expires when the request will expire.
epochExpires := d.Unix() + expires
// Add expires header if not present.
if expiresStr := req.Header.Get("Expires"); expiresStr == "" {
req.Header.Set("Expires", strconv.FormatInt(epochExpires, 10))
}
// Get presigned string to sign.
stringToSign := preStringToSignV2(req)
hm := hmac.New(sha1.New, []byte(secretAccessKey))
hm.Write([]byte(stringToSign))
// Calculate signature.
signature := base64.StdEncoding.EncodeToString(hm.Sum(nil))
query := req.URL.Query()
// Handle specially for Google Cloud Storage.
if strings.Contains(req.URL.Host, ".storage.googleapis.com") {
query.Set("GoogleAccessId", accessKeyID)
} else {
query.Set("AWSAccessKeyId", accessKeyID)
}
// Fill in Expires for presigned query.
query.Set("Expires", strconv.FormatInt(epochExpires, 10))
// Encode query and save.
req.URL.RawQuery = s3utils.QueryEncode(query)
// Save signature finally.
req.URL.RawQuery += "&Signature=" + s3utils.EncodePath(signature)
// Return.
return &req
}
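// preSignV2Example - editor's sketch, not part of the vendored file:
// produces a URL carrying AWSAccessKeyId, Expires and Signature query
// parameters; the credentials and object path are placeholders.
func preSignV2Example() (string, error) {
	req, err := http.NewRequest("GET", "https://bucket.s3.amazonaws.com/object", nil)
	if err != nil {
		return "", err
	}
	// PreSignV2 takes the request by value and returns a signed copy.
	signed := PreSignV2(*req, "ACCESS-KEY", "SECRET-KEY", 3600) // valid for one hour
	return signed.URL.String(), nil
}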
// PostPresignSignatureV2 - presigned signature for PostPolicy
// request.
func PostPresignSignatureV2(policyBase64, secretAccessKey string) string {
hm := hmac.New(sha1.New, []byte(secretAccessKey))
hm.Write([]byte(policyBase64))
signature := base64.StdEncoding.EncodeToString(hm.Sum(nil))
return signature
}
// Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature;
// Signature = Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) );
//
// StringToSign = HTTP-Verb + "\n" +
// Content-Md5 + "\n" +
// Content-Type + "\n" +
// Date + "\n" +
// CanonicalizedProtocolHeaders +
// CanonicalizedResource;
//
// CanonicalizedResource = [ "/" + Bucket ] +
// <HTTP-Request-URI, from the protocol name up to the query string> +
// [ subresource, if present. For example "?acl", "?location", "?logging", or "?torrent"];
//
// CanonicalizedProtocolHeaders = <described below>
// SignV2 signs the request before Do() (AWS Signature Version 2).
func SignV2(req http.Request, accessKeyID, secretAccessKey string) *http.Request {
// Signature calculation is not needed for anonymous credentials.
if accessKeyID == "" || secretAccessKey == "" {
return &req
}
// Initial time.
d := time.Now().UTC()
// Add date if not present.
if date := req.Header.Get("Date"); date == "" {
req.Header.Set("Date", d.Format(http.TimeFormat))
}
// Calculate HMAC for secretAccessKey.
stringToSign := stringToSignV2(req)
hm := hmac.New(sha1.New, []byte(secretAccessKey))
hm.Write([]byte(stringToSign))
// Prepare auth header.
authHeader := new(bytes.Buffer)
authHeader.WriteString(fmt.Sprintf("%s %s:", signV2Algorithm, accessKeyID))
encoder := base64.NewEncoder(base64.StdEncoding, authHeader)
encoder.Write(hm.Sum(nil))
encoder.Close()
// Set Authorization header.
req.Header.Set("Authorization", authHeader.String())
return &req
}
// From the Amazon docs:
//
// StringToSign = HTTP-Verb + "\n" +
// Content-Md5 + "\n" +
// Content-Type + "\n" +
// Expires + "\n" +
// CanonicalizedProtocolHeaders +
// CanonicalizedResource;
func preStringToSignV2(req http.Request) string {
buf := new(bytes.Buffer)
// Write standard headers.
writePreSignV2Headers(buf, req)
// Write canonicalized protocol headers if any.
writeCanonicalizedHeaders(buf, req)
// Write canonicalized Query resources if any.
writeCanonicalizedResource(buf, req)
return buf.String()
}
// writePreSignV2Headers - write preSign v2 required headers.
func writePreSignV2Headers(buf *bytes.Buffer, req http.Request) {
buf.WriteString(req.Method + "\n")
buf.WriteString(req.Header.Get("Content-Md5") + "\n")
buf.WriteString(req.Header.Get("Content-Type") + "\n")
buf.WriteString(req.Header.Get("Expires") + "\n")
}
// From the Amazon docs:
//
// StringToSign = HTTP-Verb + "\n" +
// Content-Md5 + "\n" +
// Content-Type + "\n" +
// Date + "\n" +
// CanonicalizedProtocolHeaders +
// CanonicalizedResource;
func stringToSignV2(req http.Request) string {
buf := new(bytes.Buffer)
// Write standard headers.
writeSignV2Headers(buf, req)
// Write canonicalized protocol headers if any.
writeCanonicalizedHeaders(buf, req)
// Write canonicalized Query resources if any.
writeCanonicalizedResource(buf, req)
return buf.String()
}
// writeSignV2Headers - write signV2 required headers.
func writeSignV2Headers(buf *bytes.Buffer, req http.Request) {
buf.WriteString(req.Method + "\n")
buf.WriteString(req.Header.Get("Content-Md5") + "\n")
buf.WriteString(req.Header.Get("Content-Type") + "\n")
buf.WriteString(req.Header.Get("Date") + "\n")
}
// writeCanonicalizedHeaders - write canonicalized headers.
func writeCanonicalizedHeaders(buf *bytes.Buffer, req http.Request) {
var protoHeaders []string
vals := make(map[string][]string)
for k, vv := range req.Header {
// All the AMZ headers should be lowercase
lk := strings.ToLower(k)
if strings.HasPrefix(lk, "x-amz") {
protoHeaders = append(protoHeaders, lk)
vals[lk] = vv
}
}
sort.Strings(protoHeaders)
for _, k := range protoHeaders {
buf.WriteString(k)
buf.WriteByte(':')
for idx, v := range vals[k] {
if idx > 0 {
buf.WriteByte(',')
}
if strings.Contains(v, "\n") {
// TODO: "Unfold" long headers that
// span multiple lines (as allowed by
// RFC 2616, section 4.2) by replacing
// the folding white-space (including
// new-line) by a single space.
buf.WriteString(v)
} else {
buf.WriteString(v)
}
}
buf.WriteByte('\n')
}
}
// AWS S3 Signature V2 calculation rule is given here:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationStringToSign
// Whitelist resource list that will be used in query string for signature-V2 calculation.
// The list should be alphabetically sorted
var resourceList = []string{
"acl",
"delete",
"lifecycle",
"location",
"logging",
"notification",
"partNumber",
"policy",
"requestPayment",
"response-cache-control",
"response-content-disposition",
"response-content-encoding",
"response-content-language",
"response-content-type",
"response-expires",
"torrent",
"uploadId",
"uploads",
"versionId",
"versioning",
"versions",
"website",
}
// From the Amazon docs:
//
// CanonicalizedResource = [ "/" + Bucket ] +
// <HTTP-Request-URI, from the protocol name up to the query string> +
// [ sub-resource, if present. For example "?acl", "?location", "?logging", or "?torrent"];
func writeCanonicalizedResource(buf *bytes.Buffer, req http.Request) {
// Save request URL.
requestURL := req.URL
// Get encoded URL path.
buf.WriteString(encodeURL2Path(requestURL))
if requestURL.RawQuery != "" {
var n int
vals, _ := url.ParseQuery(requestURL.RawQuery)
		// Verify if any sub-resource queries are present; if yes,
		// canonicalize them.
for _, resource := range resourceList {
if vv, ok := vals[resource]; ok && len(vv) > 0 {
n++
// First element
switch n {
case 1:
buf.WriteByte('?')
// The rest
default:
buf.WriteByte('&')
}
buf.WriteString(resource)
// Request parameters
if len(vv[0]) > 0 {
buf.WriteByte('=')
buf.WriteString(vv[0])
}
}
}
}
}


@@ -0,0 +1,315 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package s3signer
import (
"bytes"
"encoding/hex"
"net/http"
"sort"
"strconv"
"strings"
"time"
"github.com/minio/minio-go/pkg/s3utils"
)
// Signature and API related constants.
const (
signV4Algorithm = "AWS4-HMAC-SHA256"
iso8601DateFormat = "20060102T150405Z"
yyyymmdd = "20060102"
)
///
/// Excerpts from @lsegal -
/// https://github.com/aws/aws-sdk-js/issues/659#issuecomment-120477258.
///
/// User-Agent:
///
/// This is ignored from signing because signing this causes
/// problems with generating pre-signed URLs (that are executed
/// by other agents) or when customers pass requests through
/// proxies, which may modify the user-agent.
///
/// Content-Length:
///
/// This is ignored from signing because generating a pre-signed
/// URL should not provide a content-length constraint,
/// specifically when vending a S3 pre-signed PUT URL. The
/// corollary to this is that when sending regular requests
/// (non-pre-signed), the signature contains a checksum of the
/// body, which implicitly validates the payload length (since
/// changing the number of bytes would change the checksum)
/// and therefore this header is not valuable in the signature.
///
/// Content-Type:
///
/// Signing this header causes quite a number of problems in
/// browser environments, where browsers like to modify and
/// normalize the content-type header in different ways. There is
/// more information on this in https://goo.gl/2E9gyy. Avoiding
/// this field simplifies logic and reduces the possibility of
/// future bugs.
///
/// Authorization:
///
/// Is skipped for obvious reasons
///
var v4IgnoredHeaders = map[string]bool{
"Authorization": true,
"Content-Type": true,
"Content-Length": true,
"User-Agent": true,
}
// getSigningKey returns the HMAC-derived signing key used to calculate the final signature.
func getSigningKey(secret, loc string, t time.Time) []byte {
date := sumHMAC([]byte("AWS4"+secret), []byte(t.Format(yyyymmdd)))
location := sumHMAC(date, []byte(loc))
service := sumHMAC(location, []byte("s3"))
signingKey := sumHMAC(service, []byte("aws4_request"))
return signingKey
}
// getSignature returns the final signature in hexadecimal form.
func getSignature(signingKey []byte, stringToSign string) string {
return hex.EncodeToString(sumHMAC(signingKey, []byte(stringToSign)))
}
// getScope generates a scope string from a specific date, an AWS region,
// and a service.
func getScope(location string, t time.Time) string {
scope := strings.Join([]string{
t.Format(yyyymmdd),
location,
"s3",
"aws4_request",
}, "/")
return scope
}
// GetCredential generates a credential string.
func GetCredential(accessKeyID, location string, t time.Time) string {
scope := getScope(location, t)
return accessKeyID + "/" + scope
}
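// Editor's note - the derivation implemented above, unrolled (the date and
// region values are illustrative):
//
//	dateKey    = HMAC-SHA256("AWS4" + secret, "20170101")
//	regionKey  = HMAC-SHA256(dateKey, "us-east-1")
//	serviceKey = HMAC-SHA256(regionKey, "s3")
//	signingKey = HMAC-SHA256(serviceKey, "aws4_request")
//	scope      = "20170101/us-east-1/s3/aws4_request"
//	credential = accessKeyID + "/" + scope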
// getHashedPayload gets the hexadecimal value of the SHA256 hash of
// the request payload.
func getHashedPayload(req http.Request) string {
hashedPayload := req.Header.Get("X-Amz-Content-Sha256")
if hashedPayload == "" {
// Presign does not have a payload, use S3 recommended value.
hashedPayload = unsignedPayload
}
return hashedPayload
}
// getCanonicalHeaders generates a list of request headers for
// signature.
func getCanonicalHeaders(req http.Request, ignoredHeaders map[string]bool) string {
var headers []string
vals := make(map[string][]string)
for k, vv := range req.Header {
if _, ok := ignoredHeaders[http.CanonicalHeaderKey(k)]; ok {
continue // ignored header
}
headers = append(headers, strings.ToLower(k))
vals[strings.ToLower(k)] = vv
}
headers = append(headers, "host")
sort.Strings(headers)
var buf bytes.Buffer
// Save all the headers in canonical form <header>:<value> newline
// separated for each header.
for _, k := range headers {
buf.WriteString(k)
buf.WriteByte(':')
switch {
case k == "host":
buf.WriteString(req.URL.Host)
fallthrough
default:
for idx, v := range vals[k] {
if idx > 0 {
buf.WriteByte(',')
}
buf.WriteString(v)
}
buf.WriteByte('\n')
}
}
return buf.String()
}
// getSignedHeaders generates all signed request headers,
// i.e. a lexically sorted, semicolon-separated list of lowercase
// request header names.
func getSignedHeaders(req http.Request, ignoredHeaders map[string]bool) string {
var headers []string
for k := range req.Header {
if _, ok := ignoredHeaders[http.CanonicalHeaderKey(k)]; ok {
continue // Ignored header found continue.
}
headers = append(headers, strings.ToLower(k))
}
headers = append(headers, "host")
sort.Strings(headers)
return strings.Join(headers, ";")
}
// getCanonicalRequest generate a canonical request of style.
//
// canonicalRequest =
// <HTTPMethod>\n
// <CanonicalURI>\n
// <CanonicalQueryString>\n
// <CanonicalHeaders>\n
// <SignedHeaders>\n
// <HashedPayload>
func getCanonicalRequest(req http.Request, ignoredHeaders map[string]bool) string {
req.URL.RawQuery = strings.Replace(req.URL.Query().Encode(), "+", "%20", -1)
canonicalRequest := strings.Join([]string{
req.Method,
s3utils.EncodePath(req.URL.Path),
req.URL.RawQuery,
getCanonicalHeaders(req, ignoredHeaders),
getSignedHeaders(req, ignoredHeaders),
getHashedPayload(req),
}, "\n")
return canonicalRequest
}
// getStringToSignV4 generates the string to sign from the canonical request.
func getStringToSignV4(t time.Time, location, canonicalRequest string) string {
stringToSign := signV4Algorithm + "\n" + t.Format(iso8601DateFormat) + "\n"
stringToSign = stringToSign + getScope(location, t) + "\n"
stringToSign = stringToSign + hex.EncodeToString(sum256([]byte(canonicalRequest)))
return stringToSign
}
// PreSignV4 presigns the request, in accordance with
// http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html.
func PreSignV4(req http.Request, accessKeyID, secretAccessKey, sessionToken, location string, expires int64) *http.Request {
// Presign is not needed for anonymous credentials.
if accessKeyID == "" || secretAccessKey == "" {
return &req
}
// Initial time.
t := time.Now().UTC()
// Get credential string.
credential := GetCredential(accessKeyID, location, t)
// Get all signed headers.
signedHeaders := getSignedHeaders(req, v4IgnoredHeaders)
// Set URL query.
query := req.URL.Query()
query.Set("X-Amz-Algorithm", signV4Algorithm)
query.Set("X-Amz-Date", t.Format(iso8601DateFormat))
query.Set("X-Amz-Expires", strconv.FormatInt(expires, 10))
query.Set("X-Amz-SignedHeaders", signedHeaders)
query.Set("X-Amz-Credential", credential)
// Set session token if available.
if sessionToken != "" {
query.Set("X-Amz-Security-Token", sessionToken)
}
req.URL.RawQuery = query.Encode()
// Get canonical request.
canonicalRequest := getCanonicalRequest(req, v4IgnoredHeaders)
// Get string to sign from canonical request.
stringToSign := getStringToSignV4(t, location, canonicalRequest)
	// Get hmac signing key.
signingKey := getSigningKey(secretAccessKey, location, t)
// Calculate signature.
signature := getSignature(signingKey, stringToSign)
// Add signature header to RawQuery.
req.URL.RawQuery += "&X-Amz-Signature=" + signature
return &req
}
// PostPresignSignatureV4 - presigned signature for PostPolicy
// requests.
func PostPresignSignatureV4(policyBase64 string, t time.Time, secretAccessKey, location string) string {
	// Get signing key.
signingkey := getSigningKey(secretAccessKey, location, t)
// Calculate signature.
signature := getSignature(signingkey, policyBase64)
return signature
}
// SignV4 signs the request before Do(), in accordance with
// http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html.
func SignV4(req http.Request, accessKeyID, secretAccessKey, sessionToken, location string) *http.Request {
// Signature calculation is not needed for anonymous credentials.
if accessKeyID == "" || secretAccessKey == "" {
return &req
}
// Initial time.
t := time.Now().UTC()
// Set x-amz-date.
req.Header.Set("X-Amz-Date", t.Format(iso8601DateFormat))
// Set session token if available.
if sessionToken != "" {
req.Header.Set("X-Amz-Security-Token", sessionToken)
}
// Get canonical request.
canonicalRequest := getCanonicalRequest(req, v4IgnoredHeaders)
// Get string to sign from canonical request.
stringToSign := getStringToSignV4(t, location, canonicalRequest)
// Get hmac signing key.
signingKey := getSigningKey(secretAccessKey, location, t)
// Get credential string.
credential := GetCredential(accessKeyID, location, t)
// Get all signed headers.
signedHeaders := getSignedHeaders(req, v4IgnoredHeaders)
// Calculate signature.
signature := getSignature(signingKey, stringToSign)
// If regular request, construct the final authorization header.
parts := []string{
signV4Algorithm + " Credential=" + credential,
"SignedHeaders=" + signedHeaders,
"Signature=" + signature,
}
// Set authorization header.
auth := strings.Join(parts, ", ")
req.Header.Set("Authorization", auth)
return &req
}
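// signV4Example - editor's sketch, not part of the vendored file: signs a
// GET request with header-based AWS Signature Version 4; the credentials
// and endpoint are placeholders.
func signV4Example() (*http.Request, error) {
	req, err := http.NewRequest("GET", "https://s3.amazonaws.com/bucket/object", nil)
	if err != nil {
		return nil, err
	}
	// An empty body hashes to the well-known empty-payload SHA256.
	req.Header.Set("X-Amz-Content-Sha256", emptySHA256)
	// The returned copy carries the AWS4-HMAC-SHA256 Authorization header.
	return SignV4(*req, "ACCESS-KEY", "SECRET-KEY", "", "us-east-1"), nil
}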

40
vendor/github.com/minio/minio-go/pkg/s3signer/utils.go generated vendored Normal file

@@ -0,0 +1,40 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package s3signer
import (
"crypto/hmac"
"crypto/sha256"
)
// unsignedPayload - value to be set in the X-Amz-Content-Sha256 header when the request payload is not signed.
const unsignedPayload = "UNSIGNED-PAYLOAD"
// sum256 calculates the SHA256 sum of an input byte array.
func sum256(data []byte) []byte {
hash := sha256.New()
hash.Write(data)
return hash.Sum(nil)
}
// sumHMAC calculates the HMAC-SHA256 of data, keyed by key.
func sumHMAC(key []byte, data []byte) []byte {
hash := hmac.New(sha256.New, key)
hash.Write(data)
return hash.Sum(nil)
}
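These two helpers are the building blocks of the AWS4-HMAC-SHA256 key derivation. As an illustration, here is a sketch of what a getSigningKey-style helper does with sumHMAC (an in-package sketch, not the vendored implementation; it would also need "time" in this file's imports):

func exampleSigningKey(secretAccessKey, location string, t time.Time) []byte {
    // Chain: secret -> date key -> region key -> service key -> signing key.
    date := sumHMAC([]byte("AWS4"+secretAccessKey), []byte(t.Format("20060102")))
    region := sumHMAC(date, []byte(location))
    service := sumHMAC(region, []byte("s3"))
    return sumHMAC(service, []byte("aws4_request"))
}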

vendor/github.com/minio/minio-go/pkg/s3utils/utils.go generated vendored Normal file

@@ -0,0 +1,277 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package s3utils
import (
"bytes"
"encoding/hex"
"errors"
"net"
"net/url"
"regexp"
"sort"
"strings"
"unicode/utf8"
)
// sentinelURL is the default (zero) url.URL value, which is treated as invalid.
var sentinelURL = url.URL{}
// IsValidDomain validates if input string is a valid domain name.
func IsValidDomain(host string) bool {
// See RFC 1035, RFC 3696.
host = strings.TrimSpace(host)
if len(host) == 0 || len(host) > 255 {
return false
}
// host cannot start or end with "-"
if host[len(host)-1:] == "-" || host[:1] == "-" {
return false
}
// host cannot start or end with "_"
if host[len(host)-1:] == "_" || host[:1] == "_" {
return false
}
// host cannot start or end with a "."
if host[len(host)-1:] == "." || host[:1] == "." {
return false
}
// All non alphanumeric characters are invalid.
if strings.ContainsAny(host, "`~!@#$%^&*()+={}[]|\\\"';:><?/") {
return false
}
// No need for a full regexp match, since the character list above is non-exhaustive.
// We treat the name as valid here and let it fail later.
return true
}
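// For example: "example.com" passes these checks, while "-example.com"
// and "example.com-" (leading/trailing '-') and "ex!ample.com" (disallowed
// character) are rejected.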
// IsValidIP parses input string for ip address validity.
func IsValidIP(ip string) bool {
return net.ParseIP(ip) != nil
}
// IsVirtualHostSupported - verifies if bucketName can be part of
// virtual host. Currently only Amazon S3 and Google Cloud Storage
// would support this.
func IsVirtualHostSupported(endpointURL url.URL, bucketName string) bool {
if endpointURL == sentinelURL {
return false
}
// bucketName can be valid but '.' in the hostname will fail SSL
// certificate validation. So do not use host-style for such buckets.
if endpointURL.Scheme == "https" && strings.Contains(bucketName, ".") {
return false
}
// Return true for all other cases
return IsAmazonEndpoint(endpointURL) || IsGoogleEndpoint(endpointURL)
}
// AmazonS3Host - regular expression used to determine whether an argument is an Amazon S3 host.
var AmazonS3Host = regexp.MustCompile("^s3[.-]?(.*?)\\.amazonaws\\.com$")
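// This matches the bare endpoint "s3.amazonaws.com" as well as dashed and
// dotted regional forms such as "s3-eu-west-1.amazonaws.com" and
// "s3.us-east-2.amazonaws.com".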
// IsAmazonEndpoint - Match if it is exactly Amazon S3 endpoint.
func IsAmazonEndpoint(endpointURL url.URL) bool {
if IsAmazonChinaEndpoint(endpointURL) {
return true
}
if IsAmazonGovCloudEndpoint(endpointURL) {
return true
}
return AmazonS3Host.MatchString(endpointURL.Host)
}
// IsAmazonGovCloudEndpoint - Match if it is exactly Amazon S3 GovCloud endpoint.
func IsAmazonGovCloudEndpoint(endpointURL url.URL) bool {
if endpointURL == sentinelURL {
return false
}
return (endpointURL.Host == "s3-us-gov-west-1.amazonaws.com" ||
IsAmazonFIPSGovCloudEndpoint(endpointURL))
}
// IsAmazonFIPSGovCloudEndpoint - Match if it is exactly Amazon S3 FIPS GovCloud endpoint.
func IsAmazonFIPSGovCloudEndpoint(endpointURL url.URL) bool {
if endpointURL == sentinelURL {
return false
}
return endpointURL.Host == "s3-fips-us-gov-west-1.amazonaws.com"
}
// IsAmazonChinaEndpoint - Match if it is exactly Amazon S3 China endpoint.
// Customers who wish to use the new Beijing Region are required
// to sign up for a separate set of account credentials unique to
// the China (Beijing) Region. Customers with existing AWS credentials
// will not be able to access resources in the new Region, and vice versa.
// For more info https://aws.amazon.com/about-aws/whats-new/2013/12/18/announcing-the-aws-china-beijing-region/
func IsAmazonChinaEndpoint(endpointURL url.URL) bool {
if endpointURL == sentinelURL {
return false
}
return endpointURL.Host == "s3.cn-north-1.amazonaws.com.cn"
}
// IsGoogleEndpoint - Match if it is exactly Google cloud storage endpoint.
func IsGoogleEndpoint(endpointURL url.URL) bool {
if endpointURL == sentinelURL {
return false
}
return endpointURL.Host == "storage.googleapis.com"
}
// Expects ASCII-encoded strings, i.e. the output of EncodePath below.
func percentEncodeSlash(s string) string {
return strings.Replace(s, "/", "%2F", -1)
}
// QueryEncode - encodes query values in their URL-encoded form. In
// addition to the percent encoding performed by EncodePath() used
// here, it also percent-encodes '/' (forward slash).
func QueryEncode(v url.Values) string {
if v == nil {
return ""
}
var buf bytes.Buffer
keys := make([]string, 0, len(v))
for k := range v {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
vs := v[k]
prefix := percentEncodeSlash(EncodePath(k)) + "="
for _, v := range vs {
if buf.Len() > 0 {
buf.WriteByte('&')
}
buf.WriteString(prefix)
buf.WriteString(percentEncodeSlash(EncodePath(v)))
}
}
return buf.String()
}
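// For example (hypothetical values):
//   QueryEncode(url.Values{"prefix": {"a/b"}, "max-keys": {"50"}})
// returns "max-keys=50&prefix=a%2Fb" - keys sorted, '/' percent-encoded.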
// If the object name matches this pattern it contains only unreserved characters and needs no encoding.
var reservedObjectNames = regexp.MustCompile("^[a-zA-Z0-9-_.~/]+$")
// EncodePath encodes a string from its UTF-8 byte representation into percent-encoded (%XX) escape sequences.
//
// This is necessary since the regular url.Parse() and url.Encode() functions do not support UTF-8 -
// non-English characters cannot be parsed reliably due to the way url.Encode() is written.
//
// This function is a direct replacement for the url.Encode() technique and supports
// pretty much every UTF-8 character.
func EncodePath(pathName string) string {
if reservedObjectNames.MatchString(pathName) {
return pathName
}
var encodedPathname string
for _, s := range pathName {
if 'A' <= s && s <= 'Z' || 'a' <= s && s <= 'z' || '0' <= s && s <= '9' { // §2.3 Unreserved characters (mark)
encodedPathname = encodedPathname + string(s)
continue
}
switch s {
case '-', '_', '.', '~', '/': // §2.3 Unreserved characters (mark)
encodedPathname = encodedPathname + string(s)
continue
default:
length := utf8.RuneLen(s)
if length < 0 {
// if utf8 cannot encode the rune, return the string as-is
return pathName
}
u := make([]byte, length)
utf8.EncodeRune(u, s)
for _, r := range u {
hex := hex.EncodeToString([]byte{r})
encodedPathname = encodedPathname + "%" + strings.ToUpper(hex)
}
}
}
return encodedPathname
}
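// For example, EncodePath("docs/résumé.txt") keeps unreserved characters
// and '/' as-is and encodes each UTF-8 byte of 'é' separately, returning
// "docs/r%C3%A9sum%C3%A9.txt".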
// We support '.' in bucket names, but we fall back to path-style
// requests for such buckets.
var (
validBucketName = regexp.MustCompile(`^[A-Za-z0-9][A-Za-z0-9\.\-\_\:]{1,61}[A-Za-z0-9]$`)
validBucketNameStrict = regexp.MustCompile(`^[a-z0-9][a-z0-9\.\-]{1,61}[a-z0-9]$`)
ipAddress = regexp.MustCompile(`^(\d+\.){3}\d+$`)
)
// Common checker for both stricter and basic validation.
func checkBucketNameCommon(bucketName string, strict bool) (err error) {
if strings.TrimSpace(bucketName) == "" {
return errors.New("Bucket name cannot be empty")
}
if len(bucketName) < 3 {
return errors.New("Bucket name cannot be smaller than 3 characters")
}
if len(bucketName) > 63 {
return errors.New("Bucket name cannot be greater than 63 characters")
}
if ipAddress.MatchString(bucketName) {
return errors.New("Bucket name cannot be an ip address")
}
if strings.Contains(bucketName, "..") {
return errors.New("Bucket name contains invalid characters")
}
if strict {
if !validBucketNameStrict.MatchString(bucketName) {
err = errors.New("Bucket name contains invalid characters")
}
return err
}
if !validBucketName.MatchString(bucketName) {
err = errors.New("Bucket name contains invalid characters")
}
return err
}
// CheckValidBucketName - checks if we have a valid input bucket name.
func CheckValidBucketName(bucketName string) (err error) {
return checkBucketNameCommon(bucketName, false)
}
// CheckValidBucketNameStrict - checks if we have a valid input bucket name.
// This is a stricter version.
// - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
func CheckValidBucketNameStrict(bucketName string) (err error) {
return checkBucketNameCommon(bucketName, true)
}
// CheckValidObjectNamePrefix - checks if we have a valid input object name prefix.
// - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
func CheckValidObjectNamePrefix(objectName string) error {
if len(objectName) > 1024 {
return errors.New("Object name cannot be greater than 1024 characters")
}
if !utf8.ValidString(objectName) {
return errors.New("Object name with non UTF-8 strings are not supported")
}
return nil
}
// CheckValidObjectName - checks if we have a valid input object name.
// - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
func CheckValidObjectName(objectName string) error {
if strings.TrimSpace(objectName) == "" {
return errors.New("Object name cannot be empty")
}
return CheckValidObjectNamePrefix(objectName)
}
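A short usage sketch of the validators above (hypothetical values): non-strict validation tolerates uppercase and '_' for legacy bucket names, while the strict variant does not.

package main

import (
    "fmt"

    "github.com/minio/minio-go/pkg/s3utils"
)

func main() {
    fmt.Println(s3utils.CheckValidBucketName("My_Bucket"))       // <nil>: legacy name accepted
    fmt.Println(s3utils.CheckValidBucketNameStrict("My_Bucket")) // error: invalid characters
    fmt.Println(s3utils.CheckValidBucketName("192.168.1.1"))     // error: cannot be an ip address
    fmt.Println(s3utils.CheckValidObjectName(" "))               // error: cannot be empty
}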

vendor/github.com/minio/minio-go/pkg/set/stringset.go generated vendored Normal file

@@ -0,0 +1,197 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package set
import (
"encoding/json"
"fmt"
"sort"
)
// StringSet - uses map as set of strings.
type StringSet map[string]struct{}
// ToSlice - returns StringSet as string slice.
func (set StringSet) ToSlice() []string {
keys := make([]string, 0, len(set))
for k := range set {
keys = append(keys, k)
}
sort.Strings(keys)
return keys
}
// IsEmpty - returns whether the set is empty or not.
func (set StringSet) IsEmpty() bool {
return len(set) == 0
}
// Add - adds string to the set.
func (set StringSet) Add(s string) {
set[s] = struct{}{}
}
// Remove - removes a string from the set. It does nothing if the string does not exist in the set.
func (set StringSet) Remove(s string) {
delete(set, s)
}
// Contains - checks if string is in the set.
func (set StringSet) Contains(s string) bool {
_, ok := set[s]
return ok
}
// FuncMatch - returns a new set containing each value that passes the match function.
// A 'matchFn' should accept an element of the set as its first argument and
// 'matchString' as its second argument. The function may apply any logic to
// compare the two arguments, and should return true to include the element
// in the output set; otherwise the element is ignored.
func (set StringSet) FuncMatch(matchFn func(string, string) bool, matchString string) StringSet {
nset := NewStringSet()
for k := range set {
if matchFn(k, matchString) {
nset.Add(k)
}
}
return nset
}
// ApplyFunc - returns a new set containing each value processed by 'applyFn'.
// An 'applyFn' should accept an element of the set as its argument and return
// the processed string. The function may apply any logic to derive the
// processed string.
func (set StringSet) ApplyFunc(applyFn func(string) string) StringSet {
nset := NewStringSet()
for k := range set {
nset.Add(applyFn(k))
}
return nset
}
// Equals - checks whether given set is equal to current set or not.
func (set StringSet) Equals(sset StringSet) bool {
// If length of set is not equal to length of given set, the
// set is not equal to given set.
if len(set) != len(sset) {
return false
}
// As both sets are equal in length, check that every element is present in the given set.
for k := range set {
if _, ok := sset[k]; !ok {
return false
}
}
return true
}
// Intersection - returns the intersection with given set as new set.
func (set StringSet) Intersection(sset StringSet) StringSet {
nset := NewStringSet()
for k := range set {
if _, ok := sset[k]; ok {
nset.Add(k)
}
}
return nset
}
// Difference - returns the difference with given set as new set.
func (set StringSet) Difference(sset StringSet) StringSet {
nset := NewStringSet()
for k := range set {
if _, ok := sset[k]; !ok {
nset.Add(k)
}
}
return nset
}
// Union - returns the union with given set as new set.
func (set StringSet) Union(sset StringSet) StringSet {
nset := NewStringSet()
for k := range set {
nset.Add(k)
}
for k := range sset {
nset.Add(k)
}
return nset
}
// MarshalJSON - converts to JSON data.
func (set StringSet) MarshalJSON() ([]byte, error) {
return json.Marshal(set.ToSlice())
}
// UnmarshalJSON - parses JSON data and creates new set with it.
// If 'data' contains JSON string array, the set contains each string.
// If 'data' contains JSON string, the set contains the string as one element.
// If 'data' contains any other JSON type, a JSON parse error is returned.
func (set *StringSet) UnmarshalJSON(data []byte) error {
sl := []string{}
var err error
if err = json.Unmarshal(data, &sl); err == nil {
*set = make(StringSet)
for _, s := range sl {
set.Add(s)
}
} else {
var s string
if err = json.Unmarshal(data, &s); err == nil {
*set = make(StringSet)
set.Add(s)
}
}
return err
}
// String - returns printable string of the set.
func (set StringSet) String() string {
return fmt.Sprintf("%s", set.ToSlice())
}
// NewStringSet - creates new string set.
func NewStringSet() StringSet {
return make(StringSet)
}
// CreateStringSet - creates new string set with given string values.
func CreateStringSet(sl ...string) StringSet {
set := make(StringSet)
for _, k := range sl {
set.Add(k)
}
return set
}
// CopyStringSet - returns copy of given set.
func CopyStringSet(set StringSet) StringSet {
nset := NewStringSet()
for k, v := range set {
nset[k] = v
}
return nset
}
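A quick usage sketch of the set operations above (hypothetical values):

package main

import (
    "fmt"

    "github.com/minio/minio-go/pkg/set"
)

func main() {
    a := set.CreateStringSet("a", "b")
    b := set.CreateStringSet("b", "c")
    fmt.Println(a.Union(b))        // [a b c]
    fmt.Println(a.Intersection(b)) // [b]
    fmt.Println(a.Difference(b))   // [a]
    fmt.Println(a.Contains("a"))   // true
}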

vendor/github.com/minio/minio-go/post-policy.go generated vendored Normal file

@@ -0,0 +1,248 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"encoding/base64"
"fmt"
"strings"
"time"
)
// expirationDateFormat date format for expiration key in json policy.
const expirationDateFormat = "2006-01-02T15:04:05.999Z"
// policyCondition explanation:
// http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
//
// Example:
//
// policyCondition {
// matchType: "$eq",
// condition: "$Content-Type",
// value: "image/png",
// }
//
type policyCondition struct {
matchType string
condition string
value string
}
// PostPolicy - Provides strict static type conversion and validation
// for Amazon S3's POST policy JSON string.
type PostPolicy struct {
// Expiration date and time of the POST policy.
expiration time.Time
// Collection of different policy conditions.
conditions []policyCondition
// ContentLengthRange minimum and maximum allowable size for the
// uploaded content.
contentLengthRange struct {
min int64
max int64
}
// Post form data.
formData map[string]string
}
// NewPostPolicy - Instantiate new post policy.
func NewPostPolicy() *PostPolicy {
p := &PostPolicy{}
p.conditions = make([]policyCondition, 0)
p.formData = make(map[string]string)
return p
}
// SetExpires - Sets expiration time for the new policy.
func (p *PostPolicy) SetExpires(t time.Time) error {
if t.IsZero() {
return ErrInvalidArgument("No expiry time set.")
}
p.expiration = t
return nil
}
// SetKey - Sets an object name for the policy based upload.
func (p *PostPolicy) SetKey(key string) error {
if strings.TrimSpace(key) == "" || key == "" {
return ErrInvalidArgument("Object name is empty.")
}
policyCond := policyCondition{
matchType: "eq",
condition: "$key",
value: key,
}
if err := p.addNewPolicy(policyCond); err != nil {
return err
}
p.formData["key"] = key
return nil
}
// SetKeyStartsWith - Sets an object name prefix that a policy-based
// upload can start with.
func (p *PostPolicy) SetKeyStartsWith(keyStartsWith string) error {
if strings.TrimSpace(keyStartsWith) == "" || keyStartsWith == "" {
return ErrInvalidArgument("Object prefix is empty.")
}
policyCond := policyCondition{
matchType: "starts-with",
condition: "$key",
value: keyStartsWith,
}
if err := p.addNewPolicy(policyCond); err != nil {
return err
}
p.formData["key"] = keyStartsWith
return nil
}
// SetBucket - Sets bucket at which objects will be uploaded to.
func (p *PostPolicy) SetBucket(bucketName string) error {
if strings.TrimSpace(bucketName) == "" || bucketName == "" {
return ErrInvalidArgument("Bucket name is empty.")
}
policyCond := policyCondition{
matchType: "eq",
condition: "$bucket",
value: bucketName,
}
if err := p.addNewPolicy(policyCond); err != nil {
return err
}
p.formData["bucket"] = bucketName
return nil
}
// SetContentType - Sets content-type of the object for this policy
// based upload.
func (p *PostPolicy) SetContentType(contentType string) error {
if strings.TrimSpace(contentType) == "" || contentType == "" {
return ErrInvalidArgument("No content type specified.")
}
policyCond := policyCondition{
matchType: "eq",
condition: "$Content-Type",
value: contentType,
}
if err := p.addNewPolicy(policyCond); err != nil {
return err
}
p.formData["Content-Type"] = contentType
return nil
}
// SetContentLengthRange - Set new min and max content length
// condition for all incoming uploads.
func (p *PostPolicy) SetContentLengthRange(min, max int64) error {
if min > max {
return ErrInvalidArgument("Minimum limit is larger than maximum limit.")
}
if min < 0 {
return ErrInvalidArgument("Minimum limit cannot be negative.")
}
if max < 0 {
return ErrInvalidArgument("Maximum limit cannot be negative.")
}
p.contentLengthRange.min = min
p.contentLengthRange.max = max
return nil
}
// SetSuccessStatusAction - Sets the status success code of the object for this policy
// based upload.
func (p *PostPolicy) SetSuccessStatusAction(status string) error {
if strings.TrimSpace(status) == "" || status == "" {
return ErrInvalidArgument("Status is empty")
}
policyCond := policyCondition{
matchType: "eq",
condition: "$success_action_status",
value: status,
}
if err := p.addNewPolicy(policyCond); err != nil {
return err
}
p.formData["success_action_status"] = status
return nil
}
// SetUserMetadata - Set user metadata as a key/value pair.
// Can be retrieved through a HEAD request or an event.
func (p *PostPolicy) SetUserMetadata(key string, value string) error {
if strings.TrimSpace(key) == "" || key == "" {
return ErrInvalidArgument("Key is empty")
}
if strings.TrimSpace(value) == "" || value == "" {
return ErrInvalidArgument("Value is empty")
}
headerName := fmt.Sprintf("x-amz-meta-%s", key)
policyCond := policyCondition{
matchType: "eq",
condition: fmt.Sprintf("$%s", headerName),
value: value,
}
if err := p.addNewPolicy(policyCond); err != nil {
return err
}
p.formData[headerName] = value
return nil
}
// addNewPolicy - internal helper to validate adding new policies.
func (p *PostPolicy) addNewPolicy(policyCond policyCondition) error {
if policyCond.matchType == "" || policyCond.condition == "" || policyCond.value == "" {
return ErrInvalidArgument("Policy fields are empty.")
}
p.conditions = append(p.conditions, policyCond)
return nil
}
// String - implements the Stringer interface, printing the policy as a JSON-formatted string.
func (p PostPolicy) String() string {
return string(p.marshalJSON())
}
// marshalJSON - Provides Marshalled JSON in bytes.
func (p PostPolicy) marshalJSON() []byte {
expirationStr := `"expiration":"` + p.expiration.Format(expirationDateFormat) + `"`
var conditionsStr string
conditions := []string{}
for _, po := range p.conditions {
conditions = append(conditions, fmt.Sprintf("[\"%s\",\"%s\",\"%s\"]", po.matchType, po.condition, po.value))
}
if p.contentLengthRange.min != 0 || p.contentLengthRange.max != 0 {
conditions = append(conditions, fmt.Sprintf("[\"content-length-range\", %d, %d]",
p.contentLengthRange.min, p.contentLengthRange.max))
}
if len(conditions) > 0 {
conditionsStr = `"conditions":[` + strings.Join(conditions, ",") + "]"
}
retStr := "{"
retStr = retStr + expirationStr + ","
retStr = retStr + conditionsStr
retStr = retStr + "}"
return []byte(retStr)
}
// base64 - Produces the base64 encoding of the PostPolicy's marshalled JSON.
func (p PostPolicy) base64() string {
return base64.StdEncoding.EncodeToString(p.marshalJSON())
}
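A usage sketch tying the setters above together (hypothetical values; posting the signed form is outside this file). Each setter records a policy condition and fills the matching form field, and String() yields the JSON document that gets base64-encoded and signed.

package main

import (
    "fmt"
    "time"

    minio "github.com/minio/minio-go"
)

func main() {
    p := minio.NewPostPolicy()
    _ = p.SetBucket("my-bucket")
    _ = p.SetKey("uploads/photo.png")
    _ = p.SetContentType("image/png")
    _ = p.SetContentLengthRange(1, 10*1024*1024)
    _ = p.SetExpires(time.Now().UTC().Add(15 * time.Minute))
    fmt.Println(p) // the JSON policy document to be base64-encoded and signed
}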

vendor/github.com/minio/minio-go/retry-continous.go generated vendored Normal file

@@ -0,0 +1,69 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import "time"
// newRetryTimerContinous creates a timer with exponentially increasing delays forever.
func (c Client) newRetryTimerContinous(unit time.Duration, cap time.Duration, jitter float64, doneCh chan struct{}) <-chan int {
attemptCh := make(chan int)
// normalize jitter to the range [0, 1.0]
if jitter < NoJitter {
jitter = NoJitter
}
if jitter > MaxJitter {
jitter = MaxJitter
}
// computes the exponential backoff duration according to
// https://www.awsarchitectureblog.com/2015/03/backoff.html
exponentialBackoffWait := func(attempt int) time.Duration {
// 1<<uint(attempt) below could overflow, so limit the value of attempt
maxAttempt := 30
if attempt > maxAttempt {
attempt = maxAttempt
}
//sleep = random_between(0, min(cap, base * 2 ** attempt))
sleep := unit * time.Duration(1<<uint(attempt))
if sleep > cap {
sleep = cap
}
if jitter != NoJitter {
sleep -= time.Duration(c.random.Float64() * float64(sleep) * jitter)
}
return sleep
}
go func() {
defer close(attemptCh)
var nextBackoff int
for {
select {
// Attempts start from 0.
case attemptCh <- nextBackoff:
nextBackoff++
case <-doneCh:
// Stop the routine.
return
}
time.Sleep(exponentialBackoffWait(nextBackoff))
}
}()
return attemptCh
}
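With unit = 1s and cap = 30s, the pre-jitter wait grows as 1s, 2s, 4s, 8s, 16s and is then pinned at 30s; the attempt value is clamped at 30 so the shift never overflows. A hypothetical in-package consumer might look like this (try is a placeholder, not part of the library):

func (c Client) exampleRetryForever(try func(attempt int) bool) {
    doneCh := make(chan struct{})
    defer close(doneCh)
    for attempt := range c.newRetryTimerContinous(time.Second, 30*time.Second, MaxJitter, doneCh) {
        if try(attempt) {
            return // success; closing doneCh stops the timer goroutine
        }
    }
}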

vendor/github.com/minio/minio-go/retry.go generated vendored Normal file

@@ -0,0 +1,153 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"net"
"net/http"
"net/url"
"strings"
"time"
)
// MaxRetry is the maximum number of retries before stopping.
var MaxRetry = 5
// MaxJitter will randomize over the full exponential backoff time
const MaxJitter = 1.0
// NoJitter disables the use of jitter for randomizing the exponential backoff time
const NoJitter = 0.0
// DefaultRetryUnit - default unit multiplicative per retry.
// defaults to 1 second.
const DefaultRetryUnit = time.Second
// DefaultRetryCap - Each retry attempt never waits longer than
// this maximum time duration.
const DefaultRetryCap = time.Second * 30
// newRetryTimer creates a timer with exponentially increasing
// delays until the maximum retry attempts are reached.
func (c Client) newRetryTimer(maxRetry int, unit time.Duration, cap time.Duration, jitter float64, doneCh chan struct{}) <-chan int {
attemptCh := make(chan int)
// computes the exponential backoff duration according to
// https://www.awsarchitectureblog.com/2015/03/backoff.html
exponentialBackoffWait := func(attempt int) time.Duration {
// normalize jitter to the range [0, 1.0]
if jitter < NoJitter {
jitter = NoJitter
}
if jitter > MaxJitter {
jitter = MaxJitter
}
//sleep = random_between(0, min(cap, base * 2 ** attempt))
sleep := unit * time.Duration(1<<uint(attempt))
if sleep > cap {
sleep = cap
}
if jitter != NoJitter {
sleep -= time.Duration(c.random.Float64() * float64(sleep) * jitter)
}
return sleep
}
go func() {
defer close(attemptCh)
for i := 0; i < maxRetry; i++ {
select {
// Attempts start from 1.
case attemptCh <- i + 1:
case <-doneCh:
// Stop the routine.
return
}
time.Sleep(exponentialBackoffWait(i))
}
}()
return attemptCh
}
// isNetErrorRetryable - is network error retryable.
func isNetErrorRetryable(err error) bool {
if err == nil {
return false
}
switch err.(type) {
case net.Error:
switch err.(type) {
case *net.DNSError, *net.OpError, net.UnknownNetworkError:
return true
case *url.Error:
// For a URL error where the server replied "connection closed",
// retry again.
if strings.Contains(err.Error(), "Connection closed by foreign host") {
return true
}
default:
if strings.Contains(err.Error(), "net/http: TLS handshake timeout") {
// If error is - tlsHandshakeTimeoutError, retry.
return true
} else if strings.Contains(err.Error(), "i/o timeout") {
// If error is - tcp timeoutError, retry.
return true
} else if strings.Contains(err.Error(), "connection timed out") {
// If err is a net.Dial timeout, retry.
return true
}
}
}
return false
}
// List of AWS S3 error codes which are retryable.
var retryableS3Codes = map[string]struct{}{
"RequestError": {},
"RequestTimeout": {},
"Throttling": {},
"ThrottlingException": {},
"RequestLimitExceeded": {},
"RequestThrottled": {},
"InternalError": {},
"ExpiredToken": {},
"ExpiredTokenException": {},
// Add more AWS S3 codes here.
}
// isS3CodeRetryable - is s3 error code retryable.
func isS3CodeRetryable(s3Code string) (ok bool) {
_, ok = retryableS3Codes[s3Code]
return ok
}
// List of HTTP status codes which are retryable.
var retryableHTTPStatusCodes = map[int]struct{}{
429: {}, // http.StatusTooManyRequests is not part of the Go 1.5 library, yet
http.StatusInternalServerError: {},
http.StatusBadGateway: {},
http.StatusServiceUnavailable: {},
// Add more HTTP status codes here.
}
// isHTTPStatusRetryable - is HTTP error code retryable.
func isHTTPStatusRetryable(httpStatusCode int) (ok bool) {
_, ok = retryableHTTPStatusCodes[httpStatusCode]
return ok
}
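A sketch of how the timer and the retryability predicates combine into a request loop (this mirrors the pattern the client uses internally; doRequest is a placeholder, not part of the library):

func (c Client) exampleDoWithRetry(doRequest func() (*http.Response, error)) (*http.Response, error) {
    doneCh := make(chan struct{})
    defer close(doneCh)
    var lastErr error
    for range c.newRetryTimer(MaxRetry, DefaultRetryUnit, DefaultRetryCap, MaxJitter, doneCh) {
        resp, err := doRequest()
        if err != nil {
            if isNetErrorRetryable(err) {
                lastErr = err
                continue // transient network failure, back off and retry
            }
            return nil, err
        }
        if isHTTPStatusRetryable(resp.StatusCode) {
            continue // 429/5xx, back off and retry
        }
        return resp, nil
    }
    return nil, lastErr
}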

vendor/github.com/minio/minio-go/s3-endpoints.go generated vendored Normal file

@@ -0,0 +1,49 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
// awsS3EndpointMap Amazon S3 endpoint map.
// "cn-north-1" adds support for AWS China.
var awsS3EndpointMap = map[string]string{
"us-east-1": "s3.amazonaws.com",
"us-east-2": "s3-us-east-2.amazonaws.com",
"us-west-2": "s3-us-west-2.amazonaws.com",
"us-west-1": "s3-us-west-1.amazonaws.com",
"ca-central-1": "s3.ca-central-1.amazonaws.com",
"eu-west-1": "s3-eu-west-1.amazonaws.com",
"eu-west-2": "s3-eu-west-2.amazonaws.com",
"eu-central-1": "s3-eu-central-1.amazonaws.com",
"ap-south-1": "s3-ap-south-1.amazonaws.com",
"ap-southeast-1": "s3-ap-southeast-1.amazonaws.com",
"ap-southeast-2": "s3-ap-southeast-2.amazonaws.com",
"ap-northeast-1": "s3-ap-northeast-1.amazonaws.com",
"ap-northeast-2": "s3-ap-northeast-2.amazonaws.com",
"sa-east-1": "s3-sa-east-1.amazonaws.com",
"us-gov-west-1": "s3-us-gov-west-1.amazonaws.com",
"cn-north-1": "s3.cn-north-1.amazonaws.com.cn",
}
// getS3Endpoint get Amazon S3 endpoint based on the bucket location.
func getS3Endpoint(bucketLocation string) (s3Endpoint string) {
s3Endpoint, ok := awsS3EndpointMap[bucketLocation]
if !ok {
// Default to 's3.amazonaws.com' endpoint.
s3Endpoint = "s3.amazonaws.com"
}
return s3Endpoint
}
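// For example, getS3Endpoint("eu-central-1") returns
// "s3-eu-central-1.amazonaws.com", while an unknown location such as
// "xx-unknown-1" falls back to "s3.amazonaws.com".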

vendor/github.com/minio/minio-go/s3-error.go generated vendored Normal file

@@ -0,0 +1,61 @@
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
// Non-exhaustive list of AWS S3 standard error responses -
// http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
var s3ErrorResponseMap = map[string]string{
"AccessDenied": "Access Denied.",
"BadDigest": "The Content-Md5 you specified did not match what we received.",
"EntityTooSmall": "Your proposed upload is smaller than the minimum allowed object size.",
"EntityTooLarge": "Your proposed upload exceeds the maximum allowed object size.",
"IncompleteBody": "You did not provide the number of bytes specified by the Content-Length HTTP header.",
"InternalError": "We encountered an internal error, please try again.",
"InvalidAccessKeyId": "The access key ID you provided does not exist in our records.",
"InvalidBucketName": "The specified bucket is not valid.",
"InvalidDigest": "The Content-Md5 you specified is not valid.",
"InvalidRange": "The requested range is not satisfiable",
"MalformedXML": "The XML you provided was not well-formed or did not validate against our published schema.",
"MissingContentLength": "You must provide the Content-Length HTTP header.",
"MissingContentMD5": "Missing required header for this request: Content-Md5.",
"MissingRequestBodyError": "Request body is empty.",
"NoSuchBucket": "The specified bucket does not exist",
"NoSuchBucketPolicy": "The bucket policy does not exist",
"NoSuchKey": "The specified key does not exist.",
"NoSuchUpload": "The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.",
"NotImplemented": "A header you provided implies functionality that is not implemented",
"PreconditionFailed": "At least one of the pre-conditions you specified did not hold",
"RequestTimeTooSkewed": "The difference between the request time and the server's time is too large.",
"SignatureDoesNotMatch": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
"MethodNotAllowed": "The specified method is not allowed against this resource.",
"InvalidPart": "One or more of the specified parts could not be found.",
"InvalidPartOrder": "The list of parts was not in ascending order. The parts list must be specified in order by part number.",
"InvalidObjectState": "The operation is not valid for the current state of the object.",
"AuthorizationHeaderMalformed": "The authorization header is malformed; the region is wrong.",
"MalformedPOSTRequest": "The body of your POST request is not well-formed multipart/form-data.",
"BucketNotEmpty": "The bucket you tried to delete is not empty",
"AllAccessDisabled": "All access to this bucket has been disabled.",
"MalformedPolicy": "Policy has invalid resource.",
"MissingFields": "Missing fields in request.",
"AuthorizationQueryParametersError": "Error parsing the X-Amz-Credential parameter; the Credential is mal-formed; expecting \"<YOUR-AKID>/YYYYMMDD/REGION/SERVICE/aws4_request\".",
"MalformedDate": "Invalid date format header, expected to be in ISO8601, RFC1123 or RFC1123Z time format.",
"BucketAlreadyOwnedByYou": "Your previous request to create the named bucket succeeded and you already own it.",
"InvalidDuration": "Duration provided in the request is invalid.",
"XAmzContentSHA256Mismatch": "The provided 'x-amz-content-sha256' header does not match what was computed.",
// Add new API errors here.
}

vendor/github.com/minio/minio-go/transport.go generated vendored Normal file

@@ -0,0 +1,48 @@
// +build go1.7 go1.8
/*
* Minio Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 Minio, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"net"
"net/http"
"time"
)
// This default transport is similar to http.DefaultTransport
// but with additional DisableCompression:
var defaultMinioTransport http.RoundTripper = &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).DialContext,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
// Set this value so that the underlying transport round-tripper
// doesn't try to auto decode the body of objects with
// content-encoding set to `gzip`.
//
// Refer:
// https://golang.org/src/net/http/transport.go?h=roundTrip#L1843
DisableCompression: true,
}
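A sketch of how a transport like this might be plugged into a plain HTTP client (hypothetical usage; the minio client wires its transport in internally):

package main

import (
    "net/http"
    "time"
)

// newHTTPClient wraps any RoundTripper - e.g. a transport like the one
// above - in an http.Client with an overall request timeout.
func newHTTPClient(rt http.RoundTripper) *http.Client {
    return &http.Client{
        Transport: rt,
        Timeout:   2 * time.Minute,
    }
}

func main() {
    client := newHTTPClient(http.DefaultTransport)
    resp, err := client.Get("https://example.com")
    if err == nil {
        resp.Body.Close()
    }
}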

Some files were not shown because too many files have changed in this diff.