
Version 40 (modified by dkocher, on Jan 16, 2010 at 11:26:55 AM)


Amazon S3 Support

Transfer files to your S3 account and browse your S3 buckets and files hierarchically, just as with the other remote file systems Cyberduck supports. For a short overview of Amazon S3, refer to the Wikipedia article.

Connecting to Amazon S3

You must obtain the login credentials (Access Key ID and Secret Access Key) of your Amazon Web Services account from the AWS Access Identifiers page. In Cyberduck's login prompt when connecting to S3, enter the Access Key ID as the username and the Secret Access Key as the password.


To create a new bucket for your account, browse to the root and choose File → New Folder....

You can choose the bucket location in Preferences → S3. Note that Amazon has a different pricing scheme for different locations. Supported locations are:

  • EU (Ireland)
  • US Standard
  • US-West (Northern California)

Important: Because bucket names must be globally unique, the operation may fail if the name is already taken by someone else (e.g. don't assume a common name like media or images will be available).
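Beyond being globally unique, bucket names must also satisfy S3's naming constraints (for DNS-compatible names: 3 to 63 characters, lowercase letters, digits, hyphens and dots, starting and ending with a letter or digit). A rough local check can be sketched as follows; the helper name is hypothetical and not part of Cyberduck:

```python
import re

def looks_like_valid_bucket_name(name):
    """Rough check against the DNS-compatible S3 bucket naming rules:
    3-63 characters, lowercase letters, digits, hyphens and dots,
    starting and ending with a letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return re.match(r"^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$", name) is not None

# Note: even a syntactically valid name like "media" is probably
# already taken by another account, since names are globally unique.
print(looks_like_valid_bucket_name("my-company-media-2010"))  # True
print(looks_like_valid_bucket_name("Media"))                  # False (uppercase)
```

Passing this check only means the name is well-formed; the creation request can still fail if another account owns the name.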


Creating a folder inside a bucket creates a placeholder object named after the directory; it has no data content and the MIME type application/x-directory.
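In terms of the S3 REST API, such a placeholder is simply a zero-byte PUT with that content type. A minimal sketch of the request a client might issue (bucket and folder names are illustrative; the exact key layout is an assumption):

```python
def directory_placeholder_request(bucket, folder_name):
    """Sketch of the PUT request for a zero-byte directory placeholder:
    an object named after the directory, with no data content and the
    MIME type application/x-directory."""
    return {
        "method": "PUT",
        "url": "http://%s.s3.amazonaws.com/%s" % (bucket, folder_name),
        "headers": {
            "Content-Type": "application/x-directory",  # marks it as a folder
            "Content-Length": "0",                      # no data content
        },
        "body": b"",
    }

req = directory_placeholder_request("mybucket", "photos")
print(req["headers"]["Content-Type"])  # application/x-directory
```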

Access Control

Amazon S3 uses Access Control List (ACL) settings to control who may access or modify items stored in S3. By default, all buckets and objects created in S3 are accessible only to the account owner.

To make objects accessible to everyone with a regular web browser, you must grant Other read permission in File → Info → Permissions.
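At the REST API level, granting everyone read access corresponds to applying the canned ACL public-read via the x-amz-acl request header. A small sketch (the helper function is hypothetical):

```python
def put_acl_headers(make_public):
    """Headers for a PUT ?acl request using a canned ACL.
    'public-read' grants everyone read access, matching the
    'Other: read' setting in the Info panel; 'private' restores
    the default owner-only access."""
    return {"x-amz-acl": "public-read" if make_public else "private"}

print(put_acl_headers(True))   # {'x-amz-acl': 'public-read'}
print(put_acl_headers(False))  # {'x-amz-acl': 'private'}
```

Once an object is public-read, it is reachable by anyone at its plain bucket URL without a signature.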

Distribution (CDN)

Refer to Amazon CloudFront distribution.

Signed URLs

Use File → Info to copy the signed public URL from the S3 section; it is valid for 24 hours.

  • Choose the lifetime of the publicly available, auto-expiring signed URL with the hidden option s3.url.expire.seconds:
defaults write ch.sudo.cyberduck s3.url.expire.seconds 86400
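Such URLs use S3's query-string authentication: the expiry time is signed with your Secret Access Key using HMAC-SHA1, so the URL works for anyone until it expires, without revealing the key. The scheme can be sketched as follows (credentials and names are made up; this illustrates the signature format, not Cyberduck's internal code):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def sign_url(access_key, secret_key, bucket, key, expire_seconds=86400):
    """Sketch of S3 query-string authentication: HMAC-SHA1 over
    "GET\\n\\n\\n<expires>\\n/<bucket>/<key>", base64-encoded and
    appended as the Signature query parameter."""
    expires = int(time.time()) + expire_seconds
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return ("http://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key, expires, signature))

url = sign_url("AKEXAMPLE", "notreallyasecret", "mybucket", "photo.jpg")
print(url)
```

The 86400-second default in the function mirrors the 24-hour lifetime set by s3.url.expire.seconds above.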

Cache Control Setting

This option lets you control how long a client accessing objects in your S3 bucket caches the content, thus reducing the number of requests to your S3 storage. In conjunction with Amazon CloudFront, it controls how long an object stays in an edge location before it expires. After the object expires, CloudFront must go back to the origin server the next time that edge location needs to serve that object. By default, all objects automatically expire after 24 hours when no custom Cache-Control header is set.

The default setting to choose from in the File → Info panel in Cyberduck is Cache-Control: public,max-age=2052000, which corresponds to a cache lifetime of roughly one month (one month is about 60*60*24*30 = 2592000 seconds).

  • Use the hidden configuration option s3.cache.seconds to set a custom default value:
defaults write ch.sudo.cyberduck s3.cache.seconds 2052000
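The header is a plain string, so the relationship between the preference value and the cache lifetime is simple arithmetic; a small sketch (the helper is illustrative, not Cyberduck code):

```python
def cache_control_header(max_age_seconds):
    """Build the Cache-Control header value set on an object:
    public (cacheable by shared caches) with the given max-age."""
    return "Cache-Control: public,max-age=%d" % max_age_seconds

one_month = 60 * 60 * 24 * 30             # 2592000 seconds
print(cache_control_header(2052000))      # the panel's default value
print(2052000 / float(60 * 60 * 24))      # 23.75 days, i.e. roughly a month
```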

Bucket Access Logging

When this option is enabled in the File → Info panel of a bucket or any file within, available log records for this bucket are periodically aggregated into log files and delivered to <bucketname>/logs.

Citing the Amazon S3 documentation: An Amazon S3 bucket can be configured to create access log records for the requests made against it. An access log record contains details about the request such as the request type, the resource with which the request worked, and the time and date that the request was processed. Server access logs are useful for many applications, because they give bucket owners insight into the nature of requests made by clients not under their control. There is no extra charge for enabling the server access logging feature on an Amazon S3 bucket, however any log files the system delivers to you will accrue the usual charges for storage (you can delete the log files at any time). No data transfer charges will be assessed for log file delivery, but access to the delivered log files is charged for data transfer in the usual way.
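The delivered log files are plain text, one space-delimited record per request. A rough sketch of pulling a few useful fields out of such a record (the sample line and field layout are simplified from the documented format; real records contain more fields):

```python
import re

# A simplified server access log record with made-up values.
sample = ('OWNERID mybucket [06/Feb/2010:00:00:38 +0000] 192.0.2.3 '
          'OWNERID 3E57427F REST.GET.OBJECT photo.jpg '
          '"GET /mybucket/photo.jpg HTTP/1.1" 200 - 1024 1024 70 47 "-" "-"')

def parse_log_line(line):
    """Extract a few fields from an S3 access log record (rough sketch;
    the full format has additional trailing fields not captured here)."""
    m = re.match(r'(\S+) (\S+) \[([^\]]+)\] (\S+) (\S+) (\S+) (\S+) (\S+) '
                 r'"([^"]*)" (\S+)', line)
    if not m:
        return None
    return {
        "bucket": m.group(2),
        "time": m.group(3),
        "remote_ip": m.group(4),
        "operation": m.group(7),
        "key": m.group(8),
        "status": m.group(10),
    }

rec = parse_log_line(sample)
print(rec["operation"], rec["key"], rec["status"])  # REST.GET.OBJECT photo.jpg 200
```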

BitTorrent Distribution

Use File → Info to copy the BitTorrent URL for your content from the S3 section.