
#7967 closed enhancement (fixed)

Preference to use path-style request URIs

Reported by: scytacki
Owned by: dkocher
Priority: low
Milestone: 4.5.2
Component: s3
Version: 4.4.4
Severity: normal
Keywords:
Cc:
Architecture:
Platform:

Description

I've run into the same S3 SSL certificate issue that has been reported before and is documented in the wiki: https://trac.cyberduck.io/wiki/help/en/howto/s3#SSLcertificatetrustverification

It is common for users to put dots in bucket names to take advantage of S3's CNAME support, where the bucket name must match the hostname (e.g. a bucket named files.example.com).

A solution to this problem is for Cyberduck to use path-style requests instead of virtual hosted-style requests. This page shows the difference: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html

That page also says the virtual-hosted style is recommended, which is worrying.

However, this page goes into much more detail, and there is no indication that path-style is legacy: http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html

After reading these pages it seems very reasonable to use path-style requests and thereby avoid the SSL issue when there are dots in the bucket name. I thought I could make that happen by creating a connection with "s3.amazonaws.com" as the hostname and the bucket name as the path. This does make a connection, but it still fails with the SSL error, so it looks like the request is internally converted to the virtual hosted-style "<bucket-name>.s3.amazonaws.com".
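
For illustration (a sketch added in editing, not from the ticket): the two addressing styles for the same object, with a hypothetical bucket named my.bucket. Virtual hosted-style puts the bucket name into the hostname, which is what breaks wildcard certificate verification, since *.s3.amazonaws.com covers only a single label.

import java.net.URI;

// Illustration only; bucket name "my.bucket" and object key are hypothetical.
public class RequestStyles {
    public static void main(String[] args) {
        // Virtual hosted-style: the bucket name becomes part of the hostname.
        // "my.bucket.s3.amazonaws.com" puts two labels in front of
        // s3.amazonaws.com, but the wildcard certificate *.s3.amazonaws.com
        // matches only one label (RFC 2818), hence the SSL error.
        URI virtualHosted = URI.create("https://my.bucket.s3.amazonaws.com/key.txt");

        // Path-style: the bucket name moves into the path; the hostname stays
        // "s3.amazonaws.com" and matches the certificate.
        URI pathStyle = URI.create("https://s3.amazonaws.com/my.bucket/key.txt");

        System.out.println(virtualHosted + " -> host " + virtualHosted.getHost());
        System.out.println(pathStyle + " -> host " + pathStyle.getHost());
    }
}

Path-style keeps the certificate's hostname intact, at the cost of the region-redirect problem discussed in the comments below.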

Change History (17)

comment:1 Changed on May 21, 2014 at 7:57:11 PM by dkocher

We do not support this, because using path-style URIs leads to redirects to the region-specific endpoint, such as s3-eu-west-1.amazonaws.com.

comment:2 Changed on May 21, 2014 at 7:58:47 PM by dkocher

  • Priority changed from normal to low
  • Summary changed from use s3.amazonaws.com instead <bucketname>.s3.amazonaws.com to Preference to use path-style request URIs
  • Type changed from defect to enhancement

comment:3 Changed on Aug 21, 2014 at 8:21:33 AM by dkocher

  • Resolution set to fixed
  • Status changed from new to closed

In r15042. Set the hidden setting s3.bucket.virtualhost.disable to true.

defaults write ch.sudo.cyberduck s3.bucket.virtualhost.disable true
Last edited on Aug 21, 2014 at 8:35:10 AM by dkocher

comment:4 Changed on Aug 21, 2014 at 8:26:03 AM by dkocher

This is error-prone because S3 sometimes sends permanent redirects without a Location header for buckets outside the default us-east-1 region.

  • Example of an HTTP/1.1 301 Moved Permanently response with no Location header:
GET /test.cyberduck.ch/?delimiter=%2F&max-keys=1000&prefix HTTP/1.1
Date: Thu, 21 Aug 2014 08:20:20 GMT
Host: s3.amazonaws.com:443
Connection: Keep-Alive
User-Agent: Cyberduck/0 (Mac OS X/10.9.4) (x86_64)

HTTP/1.1 301 Moved Permanently
x-amz-request-id: 36155ABAF41284F6
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Thu, 21 Aug 2014 08:20:19 GMT
Server: AmazonS3
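
For illustration, a hedged sketch of how a client could sidestep the missing Location header: resolve the bucket's region first via GetBucketLocation, then address the regional endpoint directly with path-style requests. This uses the AWS SDK for Java rather than Cyberduck's own code; the class name and the reuse of the test.cyberduck.ch bucket are for illustration only.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Hypothetical workaround sketch: discover the bucket's region up front so a
// path-style request never hits a cross-region 301 without a Location header.
public class RegionLookup {
    public static void main(String[] args) {
        // GetBucketLocation can be sent to the classic endpoint and returns
        // the bucket's LocationConstraint ("US" for the us-east-1 region).
        AmazonS3 classic = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .build();
        String location = classic.getBucketLocation("test.cyberduck.ch");
        String region = "US".equals(location) ? "us-east-1" : location;

        // Now talk to the regional endpoint with path-style addressing, so
        // the hostname stays the regional endpoint regardless of dots in the
        // bucket name.
        AmazonS3 regional = AmazonS3ClientBuilder.standard()
                .withRegion(region)
                .withPathStyleAccessEnabled(true)
                .build();
        regional.listObjects("test.cyberduck.ch");
    }
}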

comment:5 Changed on Aug 21, 2014 at 8:36:32 AM by dkocher

Changing the setting is discouraged and will break access to buckets outside the us-east-1 region. You will receive the error Received redirect response HTTP/1.1 301 Moved Permanently but no location header.

Last edited on Aug 21, 2014 at 8:37:17 AM by dkocher

comment:6 follow-ups: Changed on Nov 11, 2014 at 6:20:20 PM by eric herot

I would like to suggest that this be re-opened. The 301 redirect issue mentioned above is strange but valid behavior, considering that S3 regions are essentially disparate services: a bucket in us-east-1 has no relation to (or knowledge of) a bucket in us-west-1.

In other words, the above 301 should probably just throw an "object not found" error, and the "path-style" API request method should be the default, with the understanding that the user must specify the correct AWS region in the connection profile.

The current behavior of expecting S3 users with dots in their bucket names (a totally normal and supported way to use S3) to simply put up with certificate errors all the time is, frankly, irresponsible programming. It contributes to desensitization towards certificate errors, which is already a pernicious problem in our industry.

If you really don't want to make this behavior the default, could we at least get a "path style" checkbox on the connection profile screen?

comment:7 Changed on Nov 11, 2014 at 6:46:03 PM by dkocher

  • Milestone set to 4.5.2

comment:8 in reply to: ↑ 6 Changed on Nov 11, 2014 at 7:43:37 PM by dkocher

Replying to eric herot:

The current behavior of expecting S3 users with dots in their bucket names (a totally normal and supported way to use S3) to simply put up with certificate errors all the time is, frankly, irresponsible programming. It contributes to desensitization towards certificate errors, which is already a pernicious problem in our industry.

I agree this is not optimal. We should possibly implement a lax hostname verifier that allows more than one label under a wildcard certificate. However, please note that you will have the same issue when accessing objects in a bucket with dots in its name from a web browser.
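
For illustration, a minimal sketch of such a lax verifier (an assumption based on this comment, not Cyberduck's actual implementation, which followed later in r18562): fall back to strict RFC 2818 verification, then retry with leading labels stripped, so a certificate for *.s3.amazonaws.com also matches bucket names containing dots. Built on Apache HttpClient's DefaultHostnameVerifier; note that relaxing hostname verification weakens the protection the strict check provides.

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;
import org.apache.http.conn.ssl.DefaultHostnameVerifier;

// Sketch only: accepts my.bucket.s3.amazonaws.com against a certificate for
// *.s3.amazonaws.com by stripping leading labels until the strict verifier
// is satisfied.
public class LaxHostnameVerifier implements HostnameVerifier {
    private final DefaultHostnameVerifier strict = new DefaultHostnameVerifier();

    @Override
    public boolean verify(String hostname, SSLSession session) {
        String host = hostname;
        while (true) {
            if (strict.verify(host, session)) {
                return true;
            }
            int dot = host.indexOf('.');
            // Stop once fewer than three labels remain; a wildcard must not
            // be allowed to match a bare registrable domain such as
            // amazonaws.com.
            if (dot < 0 || host.indexOf('.', dot + 1) < 0) {
                return false;
            }
            host = host.substring(dot + 1);
        }
    }
}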

comment:9 in reply to: ↑ 6 Changed on Nov 11, 2014 at 7:45:05 PM by dkocher

Replying to eric herot:

In other words, the above 301 should probably just throw an "object not found" error, and the "path-style" API request method should be the default, with the understanding that the user must specify the correct AWS region in the connection profile.

We don't want this, as it would make the connection attempt cumbersome and error-prone for the average user. You would need a different connection profile for every bucket you want to access.

comment:10 follow-ups: Changed on Nov 11, 2014 at 7:51:52 PM by eric herot

My point is that buckets in different regions *should* require different connection profiles to access them. There are not very many S3 regions in the world, so I doubt this will introduce much hardship for most users.

This is especially important because it is totally legal to have two buckets with the same name in different regions.

comment:11 in reply to: ↑ 10 Changed on Nov 11, 2014 at 8:00:53 PM by dkocher

Replying to eric herot:

This is especially important because it is totally legal to have two buckets with the same name in different regions.

I can't find a reference, but to my knowledge bucket names are unique across all regions. When you attempt to create a bucket with the same name in another region, you will get a 409 Conflict response with Your previous request to create the named bucket succeeded and you already own it.

Last edited on Nov 11, 2014 at 8:01:57 PM by dkocher

comment:12 Changed on Nov 11, 2014 at 8:07:05 PM by dkocher

For further reference: we do support region-specific connection profiles for Swift, as shown in the CloudFiles profiles with a location constraint.

comment:13 in reply to: ↑ 10 Changed on Nov 11, 2014 at 8:08:40 PM by dkocher

Replying to eric herot:

This is especially important because it is totally legal to have two buckets with the same name in different regions.

Please also note that having files or containers with the same name is supported elsewhere, as shown with versioning or with OpenStack Swift regions.

comment:14 in reply to: ↑ 10 Changed on Nov 11, 2014 at 8:13:10 PM by dkocher

Replying to eric herot:

My point is that buckets in different regions *should* require different connection profiles to access them.

Can you please open a new ticket for this enhancement request to support a Region qualifier in S3 profiles?

Last edited on Nov 11, 2014 at 8:14:01 PM by dkocher

comment:15 Changed on Nov 11, 2014 at 8:33:47 PM by dkocher

Added tests showing the behaviour of path-style requests in r15744.

comment:16 Changed on Nov 11, 2014 at 8:58:50 PM by eric herot

Filed #8322.

comment:17 Changed on Dec 9, 2015 at 11:04:13 AM by dkocher

See #3813. In r18562. Default to lax hostname verification.
