
Version 54 (modified by dkocher, on Jun 19, 2010 at 9:13:14 AM)

Add thirdparty S3 providers.

Cyberduck Help / Howto / Amazon S3 Support

Transfer files to your S3 account and browse S3 buckets and files hierarchically, just as you are used to with the other remote file systems Cyberduck supports. For a short overview of Amazon S3, refer to the Wikipedia article.

Connecting to Amazon S3

You must obtain the login credentials (Access Key ID and Secret Access Key) of your Amazon Web Services account from the AWS Access Identifiers page. When connecting to S3, enter the Access Key ID as the username and the Secret Access Key as the password in Cyberduck's login prompt.
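Under the hood, these credentials are used to sign every REST request to S3; Cyberduck does this for you automatically. A minimal sketch, assuming the HMAC-SHA1 signature scheme S3 used at the time (AWS signature version 2) and purely illustrative example keys:

```python
import base64
import hashlib
import hmac

def s3_authorization_header(access_key_id: str, secret_access_key: str,
                            string_to_sign: str) -> str:
    """Build an Authorization header for an S3 REST request (signature v2 style).

    string_to_sign is the canonicalized request description, e.g.
    "GET\n\n\nThu, 17 Jun 2010 12:00:00 GMT\n/mybucket/myfile.txt".
    """
    digest = hmac.new(secret_access_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return f"AWS {access_key_id}:{signature}"

# Illustrative example credentials only; never hard-code real keys.
header = s3_authorization_header(
    "AKIAIOSFODNN7EXAMPLE",
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "GET\n\n\nThu, 17 Jun 2010 12:00:00 GMT\n/mybucket/myfile.txt")
print(header)
```

Anyone who obtains the Secret Access Key can sign requests as you, which is why it is entered as the password.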

Thirdparty S3 providers

Several third parties besides Amazon offer S3-compatible cloud storage software or solutions.

Use S3 without SSL

Enabling this option to connect to Amazon S3 in plaintext is discouraged.

If you have an S3 implementation in your local network and can't connect using SSL, you can enable a hidden configuration option to connect using HTTP only, without transport layer security.

defaults write ch.sudo.cyberduck protocol.s3.enable true

You will then have the additional option S3/HTTP (Amazon Simple Storage Service) in the protocol dropdown of the Connection and Bookmark panels.

Storage Class

You have the option to store files using Reduced Redundancy Storage (RRS) to reduce costs by storing non-critical, reproducible data at lower levels of redundancy. Set the default storage class in Preferences → S3 and edit the storage class of already uploaded files using File → Info → S3.
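At the REST level, the storage class is just a request header sent with the upload. A sketch, assuming the documented x-amz-storage-class header and the two classes available at the time; the helper function is illustrative, not Cyberduck code:

```python
# Storage classes available at the time of writing (illustrative assumption).
VALID_STORAGE_CLASSES = {"STANDARD", "REDUCED_REDUNDANCY"}

def upload_headers(storage_class: str = "STANDARD") -> dict:
    """Headers for a PUT Object request with the chosen storage class."""
    if storage_class not in VALID_STORAGE_CLASSES:
        raise ValueError(f"unknown storage class: {storage_class}")
    # "x-amz-storage-class" is Amazon's documented request header.
    return {"x-amz-storage-class": storage_class}

print(upload_headers("REDUCED_REDUNDANCY"))
```

Omitting the header is equivalent to STANDARD, which is why only explicitly RRS-flagged objects get the reduced durability pricing.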


Creating a Bucket

To create a new bucket for your account, browse to the root and choose File → New Folder....

You can choose the bucket location in Preferences → S3. Note that Amazon has a different pricing scheme for different locations. Supported locations are:

  • EU (Ireland)
  • US Standard
  • US-West (Northern California)
  • Asia Pacific (Singapore)

Important: Because bucket names must be globally unique, the operation might fail if the name is already taken by someone else (e.g. don't assume any common name like media or images will be available).
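Besides uniqueness, bucket names must follow Amazon's DNS-compliant naming rules. A small validator sketch, assuming the commonly documented rules (3 to 63 characters; lowercase letters, digits, dots and hyphens; starting and ending with a letter or digit; not shaped like an IP address):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against S3's DNS-compliant naming rules."""
    if not 3 <= len(name) <= 63:
        return False  # must be 3-63 characters long
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False  # lowercase alphanumerics, dots, hyphens only
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False  # must not look like an IP address
    return True

print(is_valid_bucket_name("my-media-2010"))
```

Note that even a syntactically valid name may already be taken; uniqueness is only discovered when the create request succeeds or fails.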

Bucket Access Logging

When this option is enabled in the File → Info panel of a bucket or any file within, available log records for this bucket are periodically aggregated into log files and delivered to <bucketname>/logs.

Citing the Amazon S3 documentation: "An Amazon S3 bucket can be configured to create access log records for the requests made against it. An access log record contains details about the request such as the request type, the resource with which the request worked, and the time and date that the request was processed. Server access logs are useful for many applications, because they give bucket owners insight into the nature of requests made by clients not under their control. There is no extra charge for enabling the server access logging feature on an Amazon S3 bucket, however any log files the system delivers to you will accrue the usual charges for storage (you can delete the log files at any time). No data transfer charges will be assessed for log file delivery, but access to the delivered log files is charged for data transfer in the usual way."

To toggle CloudFront access logging, select the Distribution panel in the File → Info window.


Creating a Folder

Creating a folder inside a bucket creates a placeholder object named after the directory, with no data content and the MIME type application/x-directory.

Access Control

Amazon S3 uses Access Control List (ACL) settings to control who may access or modify items stored in S3. By default, all buckets and objects created in S3 are accessible only to the account owner.

You must give Other read permissions for your objects in File → Info → Permissions to make them accessible to everyone using a regular web browser.

Distribution (CDN)

Refer to Amazon CloudFront distribution.


Signed URLs

Use File → Info to copy the signed public URL from the S3 section; it is valid for 24 hours.

  • Choose the lifetime of the publicly available, auto-expiring signed URL using the hidden option s3.url.expire.seconds.
defaults write ch.sudo.cyberduck s3.url.expire.seconds 86400
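Such auto-expiring links work by query-string authentication. A sketch of how one is constructed, assuming the HMAC-SHA1 query-string scheme S3 used at the time and purely illustrative example keys; the 86400-second default mirrors the 24-hour lifetime above:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def signed_url(access_key_id: str, secret_access_key: str,
               bucket: str, key: str, expire_seconds: int = 86400) -> str:
    """Build a query-string-authenticated S3 URL (signature v2 style).

    The URL grants read access to anyone until the Unix timestamp
    carried in the Expires parameter.
    """
    expires = int(time.time()) + expire_seconds
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_access_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode("ascii"), safe="")
    return (f"http://{bucket}.s3.amazonaws.com/{quote(key)}"
            f"?AWSAccessKeyId={access_key_id}"
            f"&Expires={expires}&Signature={signature}")

# Illustrative example credentials only.
url = signed_url("AKIAIOSFODNN7EXAMPLE",
                 "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                 "mybucket", "myfile.txt")
print(url)
```

Because the expiry timestamp is part of the signed string, tampering with the Expires parameter invalidates the signature.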

BitTorrent URLs

Use File → Info → S3 to copy the BitTorrent URL of a selected file.


Metadata

You can edit standard HTTP headers and add custom HTTP headers to files to store metadata. Choose File → Info → S3 to edit headers.

Cache Control Setting

This option lets you control how long a client accessing objects in your S3 bucket caches the content, thus lowering the number of accesses to your S3 storage. In conjunction with Amazon CloudFront, it controls the time an object stays in an edge location until it expires. After the object expires, CloudFront must go back to the origin server the next time that edge location needs to serve the object. By default, all objects automatically expire after 24 hours when no custom Cache-Control header is set.

The default setting to choose from in the File → Info panel in Cyberduck is Cache-Control: public,max-age=2052000, which corresponds to a cache lifetime of roughly one month (a 30-day month is 60*60*24*30 = 2,592,000 seconds).

  • Use the hidden configuration option s3.cache.seconds to set a custom default value
defaults write ch.sudo.cyberduck s3.cache.seconds 2052000
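As a quick sanity check on the arithmetic behind these max-age values:

```python
# Cache-Control lifetimes in seconds (values from the settings above).
one_month = 60 * 60 * 24 * 30        # a 30-day month
cyberduck_default = 2052000          # the max-age shipped as the panel default

print(one_month)                     # 2592000
print(cyberduck_default / 86400)     # 23.75 (days)
```

The shipped default is therefore a little under a full 30-day month, but of the same order.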

Tip: Use curl -I http://<bucketname>.s3.amazonaws.com/<key> to debug HTTP headers.