Amazon S3

Transfer files to your S3 account and browse the S3 buckets and files in a hierarchical way. For a short overview of Amazon S3, refer to the Wikipedia article.

Connecting to Amazon S3

You must obtain the login credentials (Access Key ID and Secret Access Key) of your Amazon Web Services Account from the AWS Access Identifiers page. Enter the Access Key ID and Secret Access Key in the login prompt.

IAM User

You can also connect using credentials of an IAM user that has the Amazon S3 Full Access managed policy attached and, optionally, the CloudFront Full Access policy.
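
If you manage the IAM user with the AWS CLI, attaching these managed policies might look like the following sketch (the user name cyberduck is a hypothetical example; both policy ARNs are the AWS managed Full Access policies):

   # Attach the AWS managed S3 full access policy to an example IAM user named "cyberduck"
   aws iam attach-user-policy --user-name cyberduck \
       --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
   # Optionally also grant CloudFront access
   aws iam attach-user-policy --user-name cyberduck \
       --policy-arn arn:aws:iam::aws:policy/CloudFrontFullAccess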

Generic S3 profiles

For use with third party S3 installations.

Authentication with signature version AWS4-HMAC-SHA256

HTTP

Connecting to Amazon S3 over plain HTTP is discouraged because traffic is transmitted without transport layer security.

If you have an S3 installation without SSL configured, use the optional connection profile below to connect using HTTP only. The additional option S3 (HTTP) will then appear in the protocol dropdown of the Connection and Bookmark panels.

  • Download the S3 (HTTP) profile for preconfigured settings.
HTTPS
  • Download the S3 (HTTPS) profile for preconfigured settings.

Authentication with signature version AWS2

Some known providers require the use of the AWS2 signature version.

HTTP
  • Download the S3 AWS2 Signature Version (HTTP) profile for preconfigured settings.
HTTPS
  • Download the S3 AWS2 Signature Version (HTTPS) profile for preconfigured settings.

AWS Gov Cloud

Use the endpoint s3-us-gov-west-1.amazonaws.com or install the connection profile:

  • Download the S3 Gov Cloud profile for preconfigured settings.

AWS China (Beijing)

Connect to the region AWS China (Beijing):

  • Download the S3 China (Beijing) profile for preconfigured settings.

Access third party buckets

You can connect to a bucket you do not own (and which is therefore not listed when logging in as described above), provided its ACL grants you access. Use one of the following options:

  • Specify the bucket you want to access in the hostname, such as <bucketname>.s3.amazonaws.com. Only the third party bucket will be displayed, not your own buckets.
  • Set the Default Path in the bookmark to the bucket name.
  • Choose Go → Go to Folder… when already connected.

Connecting with temporary access credentials (Token) from EC2

If you are running Cyberduck for Windows or Cyberduck CLI on EC2 and have set up IAM Roles for Amazon EC2 to provide access to S3 from the EC2 instance, you can use a connection profile that fetches temporary credentials from the EC2 instance metadata service at http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access to authenticate. Edit the profile to change the role name s3access to match your IAM configuration.

  • Download the S3 (Temporary Credentials) profile for preconfigured settings.
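
To verify on the EC2 instance that role credentials are available, you can query the same metadata URL yourself (s3access is the example role name from above):

   # Should return a JSON document with AccessKeyId, SecretAccessKey and Token
   curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access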

Connecting using AssumeRole from AWS Security Token Service (STS)

Version 6.7.0 or later required. Instead of providing an Access Key ID and Secret Access Key, authenticate using temporary credentials from AWS Security Token Service (STS), optionally with Multi-Factor Authentication (MFA). Refer to Using IAM Roles.

  • Download the S3 (Credentials from AWS Security Token Service) profile for preconfigured settings. You must provide the configuration in the standard credentials file ~/.aws/credentials used by the AWS Command Line Interface. Configure a bookmark with the Username matching the profile name from ~/.aws/credentials.

Example configuration

Refer to Assuming a Role.

   [testuser]
   aws_access_key_id=<access key for testuser>
   aws_secret_access_key=<secret key for testuser>
   [testrole]
   role_arn=arn:aws:iam::123456789012:role/testrole
   source_profile=testuser
   mfa_serial=arn:aws:iam::123456789012:mfa/testuser
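
To check this configuration outside of Cyberduck, you can perform the equivalent AssumeRole call with the AWS CLI; the ARNs below are the example values from the configuration above, while the session name and token code are placeholders:

   aws sts assume-role \
       --profile testuser \
       --role-arn arn:aws:iam::123456789012:role/testrole \
       --role-session-name cyberduck-test \
       --serial-number arn:aws:iam::123456789012:mfa/testuser \
       --token-code 123456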

Read credentials from ~/.aws/credentials

When editing a bookmark, the Access Key ID is set from the default profile in the credentials file located at ~/.aws/credentials.
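
A minimal credentials file containing only the default profile might look like this (the key values are placeholders):

   [default]
   aws_access_key_id=<access key>
   aws_secret_access_key=<secret key>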

Third-Party S3 providers

A growing number of third parties besides Amazon offer S3-compatible cloud storage software and solutions.

Storage Class

You have the option to store files using Reduced Redundancy Storage (RRS) to reduce costs by storing non-critical, reproducible data at lower levels of redundancy. Set the default storage class in Preferences (⌘-,) → S3 and edit the storage class for already uploaded files using File → Info (⌘-I) → S3.

Distribution (CloudFront CDN)

Amazon CloudFront delivers your static and streaming content using a global network of edge locations. Requests for your objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Refer to Amazon CloudFront distribution for help about setting up distributions.

Buckets

Creating a bucket

To create a new bucket for your account, browse to the root and choose File → New Folder... (⌘-N). You can choose the bucket location in Preferences (⌘-,) → S3. Note that Amazon has a different pricing scheme for different regions.
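
For comparison, a bucket in a specific region can be created with the AWS CLI like this (a sketch; the bucket name is a placeholder and must be globally unique, and eu-central-1 is an example region):

   aws s3api create-bucket --bucket <bucketname> --region eu-central-1 \
       --create-bucket-configuration LocationConstraint=eu-central-1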

Supported Regions

  • EU (Ireland)
  • EU (Frankfurt)
  • US East (Northern Virginia)
  • US East (Ohio)
  • US West (Northern California)
  • US West (Oregon)
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)
  • Asia Pacific (Mumbai)
  • South America (Sao Paulo)

Important: Because the bucket name must be globally unique, the operation might fail if the name is already taken by someone else (e.g. don't assume any common name like media or images will be available).

Important: You cannot change the location of an existing bucket.

Important: Make sure to deselect everything in the browser, or you will create a folder inside an existing bucket.

Bucket Access Logging

When this option is enabled in the S3 panel of the Info (File → Info (⌘-I)) window for a bucket or any file within, available log records for this bucket are periodically aggregated into log files and delivered to /logs in the target logging bucket specified. It is considered best practice to choose a logging target that is different from the origin bucket.
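
You can verify the resulting logging configuration of a bucket with the AWS CLI (a sketch; <bucketname> is a placeholder):

   # Returns the target bucket and prefix if access logging is enabled, otherwise an empty document
   aws s3api get-bucket-logging --bucket <bucketname>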

To toggle CloudFront access logging, select the Distribution panel in the File → Info (⌘-I) window.

Qloudstat Analytics

The Read Access for Qloudstat checkbox in the Info panel creates a dedicated IAM user with a read-only IAM policy allowing Qloudstat to fetch log files. Unchecking the Read Access for Qloudstat checkbox removes the IAM user again, revoking all access for Qloudstat. When enabled, a clickable link is displayed that redirects to Qloudstat to confirm the new setup.

Folders

Creating a folder inside a bucket creates a placeholder object that is named after the directory, has no data content, and has the mimetype application/x-directory.
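
An equivalent placeholder object can be created with the AWS CLI like this (a sketch; myfolder is a hypothetical directory name and <bucketname> a placeholder):

   # Creates a zero-byte object with a trailing slash and the directory mimetype
   aws s3api put-object --bucket <bucketname> --key myfolder/ \
       --content-type application/x-directory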

Supported third-party folder placeholder formats

Access Control (ACL)

Amazon S3 uses Access Control List (ACL) settings to control who may access or modify items stored in S3. You can edit ACLs in File → Info (⌘-I) → Permissions.

Canonical User ID Grantee

If you enter a user ID unknown to AWS, the error message S3 Error Message. Bad Request. Invalid id. will be displayed.

Email Address Grantee

If you enter an email address unknown to AWS, the error message S3 Error Message. Bad Request. Invalid id. will be displayed. If multiple accounts are registered with AWS for the given email address, the error message Bad Request. The e-mail address you provided is associated with more than one account. Please retry your request using a different identification method or after resolving the ambiguity. is returned.

All Users Group Grantee

You must give the group grantee http://acs.amazonaws.com/groups/global/AllUsers read permissions for your objects to make them accessible to everyone using a regular web browser.
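
The same grant can be set with the AWS CLI, for example (a sketch; <bucketname> and <key> are placeholders):

   aws s3api put-object-acl --bucket <bucketname> --key <key> \
       --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers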

If bucket logging is enabled, the bucket ACL will have READ_ACP and WRITE permissions for the group grantee http://acs.amazonaws.com/groups/s3/LogDelivery.

Default ACLs

  • Buckets. New buckets are created with the default pre-defined canned ACL private. You can change the default bucket ACL to public-read with a hidden option:
defaults write ch.sudo.cyberduck s3.bucket.acl.default public-read
  • Files. For newly uploaded files, the ACL applied depends on the setting in Preferences → Transfers (⌘-T) → Permissions → Uploads. If you want uploaded files to be accessible to anyone, make sure to set the following:
    • If you have selected to apply the permissions of the local file or folder for uploads, check the access permissions of the file in Finder.app. If everyone is allowed read access in the Sharing & Permissions section of the Finder.app Info window, the file should have a READ ACL in S3 for http://acs.amazonaws.com/groups/global/AllUsers.
    • If you have chosen to set default permissions for uploads, make sure Read access for Others is selected in the Upload Permissions Transfer Preferences.

Permissions

The following permissions can be given to grantees:

  • READ. On a bucket: allows the grantee to list the files in the bucket. On a file: allows the grantee to download the file and its metadata.
  • WRITE. On a bucket: allows the grantee to create, overwrite, and delete any file in the bucket. On a file: not applicable.
  • FULL_CONTROL. On a bucket: allows the grantee all permissions on the bucket. On a file: allows the grantee all permissions on the file.
  • READ_ACP. On a bucket: allows the grantee to read the bucket ACL. On a file: allows the grantee to read the file ACL.
  • WRITE_ACP. On a bucket: allows the grantee to write the ACL of the bucket. On a file: allows the grantee to write the ACL of the file.

Lifecycle Configuration

Specify after how many days a file in a bucket should be moved to Amazon Glacier or deleted.
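
Such a rule corresponds to an S3 lifecycle configuration like the following AWS CLI sketch (the day counts and rule name are example values; <bucketname> is a placeholder):

   aws s3api put-bucket-lifecycle-configuration --bucket <bucketname> \
       --lifecycle-configuration '{"Rules": [{"ID": "archive-then-delete",
         "Status": "Enabled", "Filter": {"Prefix": ""},
         "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
         "Expiration": {"Days": 365}}]}'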

Versions

Versioning can be enabled per bucket in File → Info (⌘-I)→ S3. You can view all revisions of a file in the browser by choosing View → Show Hidden Files.
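
With the AWS CLI, the same versioning state can be enabled and the stored revisions of an object listed as follows (a sketch; <bucketname> and <key> are placeholders):

   aws s3api put-bucket-versioning --bucket <bucketname> \
       --versioning-configuration Status=Enabled
   # List all versions of a given object
   aws s3api list-object-versions --bucket <bucketname> --prefix <key>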

Revert

To revert to a previous version and make it the current, choose File → Revert.

Multi-Factor Authentication (MFA) Delete

To enable Multi-Factor Authentication (MFA) Delete, you need to purchase a compatible authentication device. Toggle MFA in File → Info (⌘-I) → S3. When enabled, you are prompted for the device number and a one-time token in a login prompt. Never reenter a previously used token; a token is only valid for a single request. Wait for the previous token to disappear from the device screen and request a new token from the device.
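
For reference, enabling MFA Delete on a bucket with the AWS CLI requires the device serial and a current token in the same way (a sketch; the bucket, MFA serial, and token code are placeholders):

   aws s3api put-bucket-versioning --bucket <bucketname> \
       --versioning-configuration Status=Enabled,MFADelete=Enabled \
       --mfa "arn:aws:iam::123456789012:mfa/testuser 123456"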

Public URLs

You can access all URLs (including from CDN configurations) from the menu Edit → Copy URL and File → Open URL.

Pre-signed temporary URLs

A private object stored in S3 can be made publicly available for a limited time using a pre-signed URL. The pre-signed URL can be used by anyone to download the object, yet it includes a date and time after which the URL will no longer work. Copy the pre-signed URL from Edit → Copy URL → Signed URL or File → Info (⌘-I) → S3.

Pre-signed URLs are available that expire in one hour, 24 hours, a week and a month. The 24 hour variant uses the hidden preference s3.url.expire.seconds, which you can change from its default of 86400 seconds (24 hours).
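
For example, to shorten the 24 hour variant to one hour, change the hidden preference accordingly (3600 seconds is an example value):

defaults write ch.sudo.cyberduck s3.url.expire.seconds 3600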

Force use of AWS2 signature

With the AWS4 signature version used in version 5.0 and later, pre-signed URLs cannot have an expiry date more than a week in the future. You can revert to the AWS2 signature version by using the S3 AWS2 Signature Version (HTTP) connection profile.

Note: This deprecated signature version is not compatible with new regions such as eu-central-1.

BitTorrent URLs

Use File → Info (⌘-I) → S3 to copy the BitTorrent URL of a selected file. The ACL of the object must allow anonymous read. Note that the .torrent file describing an Amazon S3 object is generated on demand, the first time the Torrent URL is requested. Generating the .torrent for an object takes time proportional to the size of that object, and for large objects this time can be significant. Therefore, before publishing a ?torrent link, we suggest making the first request for it yourself. Amazon S3 might take several minutes to respond to this first request as it generates the .torrent file. Unless you update the object in question, subsequent requests for the .torrent will be fast.

Metadata

You can edit standard HTTP headers and add custom HTTP headers to files to store metadata. Choose File → Info (⌘-I) → Metadata to edit headers.

Refer to the Info panel wiki page.

Default metadata

Default headers to be added for uploads can currently only be defined using a hidden configuration option. Multiple headers must be separated using a whitespace delimiter. The key and value of a header are separated with =. For example, to add an HTTP header for Cache-Control and one named Creator, you would set

defaults write ch.sudo.cyberduck s3.metadata.default "Cache-Control=public,max-age=86400 Creator=Cyberduck"

Cache Control Setting

This option lets you control how long a client accessing objects from your S3 bucket will cache the content, thus lowering the number of accesses to your S3 storage. In conjunction with Amazon CloudFront, it controls the time an object stays in an edge location until it expires. After the object expires, CloudFront must go back to the origin server the next time that edge location needs to serve that object. By default, all objects automatically expire after 24 hours when no custom Cache-Control header is set.

The default setting when choosing to add a custom Cache-Control header in the Info panel is Cache-Control: public,max-age=2052000, which translates to a cache expiration of just under one month (2052000 seconds, about 24 days).

defaults write ch.sudo.cyberduck s3.cache.seconds 2052000

Tip: Use curl -I http://<bucketname>.s3.amazonaws.com/<key> to debug HTTP headers.

Server Side Encryption (SSE)

Server side encryption for stored files is supported and can be enabled by default for all uploads in the S3 preferences or for individual files in the File → Info (⌘-I) → S3. AWS handles key management and key protection for you.

Defaults

Choose Preferences → S3 → Server Side Encryption to change the default.

  • None will not encrypt files (Default).
  • SSE-S3 will encrypt files using AES-256 with a default key provided by S3.
  • SSE-KMS will encrypt files with the default key stored in AWS Key Management Service (KMS).

You can override these default settings in the File → Info (⌘-I) → S3 panel per bucket.
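
For comparison, the corresponding encryption settings can be applied to an upload with the AWS CLI (a sketch; the file, bucket, key, and KMS key ID are placeholders):

   # SSE-S3 (AES-256 with a key managed by S3)
   aws s3 cp <file> s3://<bucketname>/<key> --sse AES256
   # SSE-KMS with a specific key from AWS Key Management Service
   aws s3 cp <file> s3://<bucketname>/<key> --sse aws:kms --sse-kms-key-id <kms key id>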

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

When changing the setting for a folder or bucket you are prompted to confirm the recursive operation on all files contained in the selected bucket or folder.

Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Version 5.0 or later required.

In addition to the default SSE-S3 (AES-256), the server side encryption (SSE) dropdown list allows you to choose from all keys managed in AWS Key Management Service (KMS).

Permissions

This requires the kms:ListKeys and kms:ListAliases permissions for the AWS credentials used to connect to S3.
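
You can verify that the credentials carry these permissions by running the corresponding AWS CLI calls, which should succeed without an access denied error:

   aws kms list-keys
   aws kms list-aliases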

When changing the setting for a folder or bucket you are prompted to confirm the recursive operation on all files contained in the selected bucket or folder.

Prevent uploads of unencrypted files

Refer to the AWS Security Blog.
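
The approach described there is a bucket policy that denies uploads lacking a server side encryption header. A minimal sketch of such a policy applied with the AWS CLI follows (this example only permits SSE-S3/AES256; <bucketname> is a placeholder):

   aws s3api put-bucket-policy --bucket <bucketname> --policy '{
     "Version": "2012-10-17",
     "Statement": [{
       "Sid": "DenyUnencryptedUploads",
       "Effect": "Deny",
       "Principal": "*",
       "Action": "s3:PutObject",
       "Resource": "arn:aws:s3:::<bucketname>/*",
       "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}}
     }]
   }'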

Website Configuration

To host a static website on S3, you can define an Amazon S3 bucket as a Website Endpoint. The configuration in File → Info (⌘-I) → Distribution allows you to enable the website configuration. Choose Website Configuration (HTTP) from Delivery Method and define an index document name that is searched for and returned when requests are made to the root or a subfolder of your website.
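
The equivalent website configuration can be applied with the AWS CLI (a sketch; index.html and error.html are example document names and <bucketname> is a placeholder):

   aws s3 website s3://<bucketname>/ --index-document index.html --error-document error.html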

To access this website functionality, Amazon S3 exposes a new website endpoint for each region (US Standard, US West, EU, or Asia Pacific). For example, s3-website-ap-southeast-1.amazonaws.com is the endpoint for the Asia Pacific Region. The location is displayed in the Where field following the Origin.

To configure Amazon CloudFront for your website endpoints, refer to Website Configuration Endpoint Distributions with CloudFront CDN.

File Transfers

Transfer Acceleration

When enabled for the bucket, downloads and uploads use the S3 Transfer Acceleration endpoints to transfer data through CloudFront’s globally distributed edge locations. The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods ("."). You do not need to enter transfer accelerated endpoints manually; connections are made to s3-accelerate.dualstack.amazonaws.com, and additional data transfer charges may apply when using Transfer Acceleration.
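
Transfer Acceleration is a per-bucket setting; with the AWS CLI it can be enabled and checked like this (a sketch; <bucketname> is a placeholder):

   aws s3api put-bucket-accelerate-configuration --bucket <bucketname> \
       --accelerate-configuration Status=Enabled
   aws s3api get-bucket-accelerate-configuration --bucket <bucketname>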

Permissions

Make sure the user has the s3:GetAccelerateConfiguration permission, which permits reading the Transfer Acceleration state of a bucket.

Checksums

Files are verified twice: AWS compares the received data with the SHA256 checksum sent along with the request, and the checksum returned by AWS for the uploaded file is additionally compared with the checksum computed locally.

Multipart Uploads

Files larger than 100MB are uploaded in 10MB parts with up to 10 parallel connections. Given these sizes, the file size limit is 100GB with a maximum of 10'000 parts allowed by S3. The number of connections used can be limited using the toggle in the lower right of the transfer window.

Multipart uploads can be resumed later when interrupted. Make sure the user has the permission s3:ListBucketMultipartUploads.

Unfinished multipart uploads

You can view unfinished multipart uploads in the browser by choosing View → Show Hidden Files.
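
You can also list unfinished multipart uploads for a bucket with the AWS CLI, which requires the same permission (a sketch; <bucketname> is a placeholder):

   aws s3api list-multipart-uploads --bucket <bucketname>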

Problems

SSL certificate trust verification

When listing a bucket that has a . in its name, connecting gives the trust verification failure The certificate is not valid (host name mismatch) for the wildcard certificate *.s3.amazonaws.com. Because the wildcard only applies to one level in the domain name, you must manually trust this certificate. (Fixed in version 4.8 and later.)
