
Version 194 (modified by dkocher, on Jun 3, 2016 at 10:52:36 AM)


Cyberduck Help / Howto / Amazon S3

Transfer files to your S3 account and browse the S3 buckets and files in a hierarchical way. For a short overview of Amazon S3, refer to the Wikipedia article.

Connecting to Amazon S3

You must obtain the login credentials (Access Key ID and Secret Access Key) of your Amazon Web Services Account from the AWS Access Identifiers page. Enter the Access Key ID and Secret Access Key in the login prompt.

IAM User

You can also connect using credentials of an IAM user that has the Amazon S3 Full Access template policy attached and, optionally, the CloudFront Full Access policy.

Generic S3 profiles

For use with third-party S3 installations.


Enabling this option to connect to Amazon S3 in plaintext is discouraged.

If you have an S3 installation without SSL configured, you can use an optional connection profile to connect over HTTP only, without transport layer security. You will then have the added option S3 (HTTP) in the protocol dropdown selection in the Connection and Bookmark panels.

  • Download the S3 (HTTP) profile for preconfigured settings.


  • Download the S3 (HTTPS) profile for preconfigured settings.

Access third party buckets

It is possible to connect to a bucket you do not own (which is therefore not listed when logging in as above with all your owned buckets). You can access a bucket owned by someone else if its ACL allows you access, by one of the following:

  • Specify the bucket you want to access in the hostname to connect to, e.g. <bucketname>.s3.amazonaws.com. Your own buckets will not be displayed; only the third-party bucket.
  • Set the Default Path in the bookmark to the bucket name.
  • Choose Go → Go to Folder… when already connected.
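As a sketch, the hostname for the first option follows the virtual-hosted style <bucketname>.s3.amazonaws.com pattern (the bucket name below is hypothetical):

```shell
# Hypothetical third-party bucket you were granted access to.
BUCKET="example-bucket"
# Virtual-hosted style hostname to enter in the Server field of the bookmark:
HOSTNAME="${BUCKET}.s3.amazonaws.com"
echo "$HOSTNAME"
```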

Connecting with temporary access credentials from EC2

If you are running Cyberduck for Windows or Cyberduck CLI on EC2 and have set up IAM Roles for Amazon EC2 to provide access to S3 from the EC2 instance, you can use the

  • Download the S3 (Temporary Credentials) profile for preconfigured settings

that fetches temporary credentials from the EC2 instance metadata service to authenticate. Edit the profile to change the role name s3access to match your IAM configuration.
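For reference, temporary role credentials on EC2 are served by the instance metadata service at the well-known address 169.254.169.254. As a sketch (the role name s3access is an example; match it to your IAM configuration):

```shell
# Example IAM role name attached to the EC2 instance.
ROLE="s3access"
# Well-known instance metadata URL serving the temporary credentials JSON:
URL="http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}"
echo "$URL"
# On an actual EC2 instance you could fetch the credentials with:
#   curl -s "$URL"
```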

Storage Class

You have the option to store files using Reduced Redundancy Storage (RRS) to reduce costs by storing non-critical, reproducible data at lower levels of redundancy. Set the default storage class in Preferences (⌘-,) → S3 and edit the storage class of already uploaded files using File → Info (⌘-I) → S3.

Third-Party S3 providers

There is a growing number of third parties beside Amazon offering S3-compatible cloud storage software or solutions. Here is a non-exhaustive list, in alphabetical order:

Distribution (CloudFront CDN)

Amazon CloudFront delivers your static and streaming content using a global network of edge locations. Requests for your objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Refer to Amazon CloudFront distribution for help about setting up distributions.


Creating a bucket

To create a new bucket for your account, browse to the root and choose File → New Folder... (⌘-N). You can choose the bucket location in Preferences (⌘-,) → S3. Note that Amazon has a different pricing scheme for different locations. Supported locations are:

  • EU (Ireland)
  • US Standard
  • US-West (Northern California)
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)

Important:

  • Because the bucket name must be globally unique, the operation might fail if the name is already taken by someone else (e.g. don't assume any common name like media or images will be available).
  • You cannot change the location of an existing bucket.
  • Make sure to uncheck any selection in the browser, or you will create a folder inside an existing bucket.

You can change the default bucket ACL public-read with a hidden option to private.

defaults write ch.sudo.cyberduck s3.bucket.acl.default private

Bucket Access Logging

When this option is enabled in the S3 panel of the Info (File → Info (⌘-I)) window for a bucket or any file within, available log records for this bucket are periodically aggregated into log files and delivered to /logs in the target logging bucket specified. It is considered best practice to choose a logging target that is different from the origin bucket.

To toggle CloudFront access logging, select the Distribution panel in the File → Info (⌘-I) window.

Qloudstat Analytics

The Read Access for Qloudstat checkbox in the Info panel tab creates a dedicated IAM user with a read-only IAM policy for Qloudstat to fetch log files. Unchecking the Read Access for Qloudstat checkbox will remove the IAM user again, revoking all access for Qloudstat. When enabled, a clickable link is displayed that redirects to Qloudstat to confirm the new setup.


Creating a folder inside a bucket will create a placeholder object named after the directory, with no data content and the MIME type application/x-directory.

Supported third-party folder placeholder formats

Access Control (ACL)

Amazon S3 uses Access Control List (ACL) settings to control who may access or modify items stored in S3. You can edit ACLs in File → Info (⌘-I) → Permissions.

Canonical User ID Grantee

If you enter a user ID unknown to AWS, the error message S3 Error Message. Bad Request. Invalid id. will be displayed.

Email Address Grantee

If you enter an email address unknown to AWS, the error message S3 Error Message. Bad Request. Invalid id. will be displayed. If multiple accounts are registered with AWS for the given email address, the error message Bad Request. The e-mail address you provided is associated with more than one account. Please retry your request using a different identification method or after resolving the ambiguity. is returned.

All Users Group Grantee

You must give the group grantee read permissions for your objects to make them accessible using a regular web browser for everyone.

If bucket logging is enabled, the bucket ACL will have READ_ACP and WRITE permissions for the group grantee.

Default ACLs

  • Buckets. New buckets created have a default pre-defined canned ACL set to public-read. You get FULL_CONTROL. All other users have READ access.
  • Files. For new files uploaded, the ACL applied depends on the setting in Preferences → Transfers (⌘-T) → Permissions → Uploads. If you want uploaded files to be accessible to anyone, make sure to set the following:
    • If you have selected to apply the permissions of the local file or folder for uploads, check the access permissions of the file in the Sharing & Permissions section of the Finder Info window. If everyone is allowed read access there, the file will be given a READ ACL for everyone in S3.
    • If you have chosen to set default permissions for uploads, make sure Read access for Others is selected in the Upload Permissions Transfer Preferences.


The following permissions can be given to grantees:

  • READ. On a bucket, allows grantee to list the files in the bucket. On a file, allows grantee to download the file and its metadata.
  • WRITE. On a bucket, allows grantee to create, overwrite, and delete any file in the bucket. Not applicable to files.
  • FULL_CONTROL. Allows grantee all permissions on the bucket or file.
  • READ_ACP. Allows grantee to read the ACL of the bucket or file.
  • WRITE_ACP. Allows grantee to write the ACL of the applicable bucket or file.

Lifecycle Configuration

Specify after how many days a file in a bucket should be moved to Amazon Glacier or deleted.


Versioning

Versioning can be enabled per bucket in File → Info (⌘-I) → S3. You can view all revisions of a file in the browser by choosing View → Show Hidden Files.

To revert to a previous version and make it the current one, choose File → Revert.

Multi-Factor Authentication (MFA) Delete

To enable Multi-Factor Authentication (MFA) Delete, you need to purchase a compatible authentication device. Toggle MFA in File → Info (⌘-I) → S3. When enabled, you are prompted for the device number and one-time token in a login prompt. Never reenter a token in the prompt already used before. A token is only valid for a single request. Wait for the previous token to disapear from the device screen and request a new token from the device.

Public URLs

You can copy all URLs (including those from CDN configurations) from the menu Edit → Copy URL and open them using File → Open URL.

Signed temporary URLs

A private object stored in S3 can be made publicly available for a limited time using a pre-signed URL. The pre-signed URL can be used by anyone to download the object, yet it includes a date and time after which the URL will no longer work. Copy the pre-signed URL from Edit → Copy URL→ Signed URL or File → Info (⌘-I) → S3.

Pre-signed URLs that expire in one hour, 24 hours, a week or a month are available. The 24-hour expiry uses the hidden preference s3.url.expire.seconds, which you can change from the default of 86400 seconds (24 hours).
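As an illustrative sketch of the legacy AWS2 query-string signing scheme (not Cyberduck's own implementation; all credentials and object names below are hypothetical), a pre-signed URL can be constructed with openssl:

```shell
# Hypothetical credentials and object used purely for illustration.
ACCESS_KEY="AKIAEXAMPLE"
SECRET_KEY="examplesecret"
BUCKET="mybucket"
KEY="file.txt"
EXPIRES=$(( $(date +%s) + 86400 ))   # valid for 24 hours, matching s3.url.expire.seconds

# String to sign: HTTP verb, blank Content-MD5 and Content-Type, expiry, resource path.
STRING_TO_SIGN=$(printf 'GET\n\n\n%s\n/%s/%s' "$EXPIRES" "$BUCKET" "$KEY")
# HMAC-SHA1 with the secret key, base64 encoded (URL-encode +, / and = before use).
SIGNATURE=$(printf '%s' "$STRING_TO_SIGN" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)

echo "https://${BUCKET}.s3.amazonaws.com/${KEY}?AWSAccessKeyId=${ACCESS_KEY}&Expires=${EXPIRES}&Signature=${SIGNATURE}"
```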

Force use of AWS2 signature

With the AWS4 signature version, used by default in version 5.0 and later, pre-signed URLs cannot have an expiry date later than one week. You can revert to the AWS2 signature version by setting a hidden configuration option:

defaults write ch.sudo.cyberduck s3.signature.version AWS2

Note: This deprecated signature version is not compatible with new regions such as eu-central-1.

BitTorrent URLs

Use File → Info (⌘-I) → S3 to copy the BitTorrent URL of a selected file. The ACL of the object must allow anonymous read. One important thing to note is that the .torrent file describing an Amazon S3 object is generated on demand, the first time the Torrent URL is requested. Generating the .torrent for an object takes time proportional to the size of that object; for large objects, this time can be significant. Therefore, before publishing a ?torrent link, we suggest making the first request for it yourself. Amazon S3 might take several minutes to respond to this first request, as it generates the .torrent file. Unless you update the object in question, subsequent requests for the .torrent will be fast.
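As a sketch with a hypothetical bucket and object, the Torrent URL is the plain object URL with ?torrent appended:

```shell
# Hypothetical public object; its ACL must allow anonymous read.
BUCKET="mybucket"
KEY="video.mp4"
TORRENT_URL="http://${BUCKET}.s3.amazonaws.com/${KEY}?torrent"
echo "$TORRENT_URL"
# Pre-warm the .torrent generation yourself before publishing the link:
#   curl -s -o "${KEY}.torrent" "$TORRENT_URL"
```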


Metadata

You can edit standard HTTP headers and add custom HTTP headers to files to store metadata. Choose File → Info (⌘-I) → Metadata to edit headers.

Refer to the Info panel wiki page.

Default metadata

You can define default headers to be added for uploads; currently this is only possible using a hidden configuration option. Multiple headers must be separated by a whitespace delimiter. Key and value of a header are separated with =. For example, to add an HTTP header for Cache-Control and another named Creator, you would set

defaults write ch.sudo.cyberduck s3.metadata.default "Cache-Control=public,max-age=86400 Creator=Cyberduck"

Cache Control Setting

This option lets you control how long a client accessing objects in your S3 bucket will cache the content, thus lowering the number of accesses to your S3 storage. In conjunction with Amazon CloudFront, it controls the time an object stays in an edge location until it expires. After the object expires, CloudFront must go back to the origin server the next time that edge location needs to serve that object. By default, all objects automatically expire after 24 hours when no custom Cache-Control header is set.

The default setting when choosing to add a custom Cache-Control header in the Info panel is Cache-Control: public,max-age=2052000, which translates to a cache expiration of roughly one month. You can change this default with:

defaults write ch.sudo.cyberduck s3.cache.seconds 2052000

Tip: Use curl -I http://<bucketname>.s3.amazonaws.com/<key> to debug HTTP headers.

Server Side Encryption (SSE)

Server-side encryption for stored files is supported and can be enabled by default for all uploads in the S3 preferences, or for individual files in File → Info (⌘-I) → S3. AWS handles key management and key protection for you.


Choose Preferences → S3 → Server Side Encryption to change the default.

  • None will not encrypt files (Default).
  • SSE-S3 will encrypt files using AES-256 with a default key provided by S3.
  • SSE-KMS will encrypt files with the default key stored in AWS Key Management Service (KMS).

You can override these default settings in the File → Info (⌘-I) → S3 panel per bucket.

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

When changing the setting for a folder or bucket you are prompted to confirm the recursive operation on all files contained in the selected bucket or folder.

Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

Version 5.0 or later required.

In addition to the default SSE-S3 (AES-256), the server-side encryption (SSE) dropdown list allows you to choose from the keys managed in AWS Key Management Service (KMS). This requires the kms:ListKeys permission for the AWS credentials used to connect to S3.

When changing the setting for a folder or bucket you are prompted to confirm the recursive operation on all files contained in the selected bucket or folder.

Website Configuration

To host a static website on S3, you can define an Amazon S3 bucket as a Website Endpoint. The configuration in File → Info (⌘-I) → Distribution allows you to enable the website configuration. Choose Website Configuration (HTTP) from Delivery Method and define an index document name that is searched for and returned when requests are made to the root or a subfolder of your website.

To access this website functionality, Amazon S3 exposes a website endpoint for each region (US Standard, US West, EU, or Asia Pacific). The endpoint location is displayed in the Where field following the Origin.

To configure Amazon CloudFront for your website endpoints, refer to Website Configuration Endpoint Distributions with CloudFront CDN.


Multipart Uploads

Files larger than 100MB are uploaded in 10MB parts with up to 10 parallel connections. Given these sizes, the file size limit is 100GB with the maximum of 10,000 parts allowed by S3. The number of connections used can be limited using the toggle in the lower right of the transfer window.
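The part arithmetic above can be sketched in the shell (using decimal units, which match the quoted limits):

```shell
PART_SIZE=$((10 * 1000 * 1000))            # 10 MB per part
MAX_PARTS=10000                            # parts allowed by S3 per upload
FILE_SIZE=$((100 * 1000 * 1000 * 1000))    # a 100 GB file, the resulting limit

# Number of parts, rounding up for a final partial part.
PARTS=$(( (FILE_SIZE + PART_SIZE - 1) / PART_SIZE ))
echo "$PARTS"                       # exactly at the S3 limit
echo $(( MAX_PARTS * PART_SIZE ))   # maximum upload size in bytes
```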

Multipart uploads can be resumed later when interrupted.


SSL certificate trust verification

When listing a bucket that has a . in its name, connecting will give the trust verification failure The certificate is not valid (host name mismatch) for the wildcard certificate *.s3.amazonaws.com. Because the wildcard only applies to one level in the domain name, you must manually trust this certificate. (Fixed in version 4.8 and later.)