
Preference to use path-style request URIs #7967

Closed
cyberduck opened this issue May 21, 2014 · 15 comments
Labels: enhancement, fixed, low priority, s3 (AWS S3 Protocol Implementation)

e5f53bf created the issue

I've run into the same S3 SSL certificate issue that has been reported before and is documented in the wiki:
https://trac.cyberduck.io/wiki/help/en/howto/s3#SSLcertificatetrustverification

It is common for users to put dots in bucket names in order to use S3's CNAME support.

A solution to this problem is for Cyberduck to use path-style requests instead of virtual hosted-style requests. This page shows the difference: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html

That page also says the virtual-hosted style is recommended, which is worrying.

However this page goes into much more detail and there is no indication that the path-style is legacy:
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html

After reading these pages it seems very reasonable to use the path-style, and thereby avoid the SSL issue when there are dots in the bucket name. I thought perhaps I could make that happen by creating a connection with "s3.amazonaws.com" as the hostname and the bucket name as the path. This does make a connection, but it still fails with the SSL error, so it looks like the request is internally converted to the virtual hosted-style of "<bucket>.s3.amazonaws.com".
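The difference between the two addressing styles discussed above can be sketched as follows (a hypothetical helper for illustration only, not Cyberduck code; the bucket and key names are made up):

```python
def s3_request_urls(bucket, key, endpoint="s3.amazonaws.com"):
    """Build both S3 request-URI styles for the same object.

    Hypothetical illustration; 'endpoint' defaults to the classic
    us-east-1 endpoint mentioned in this thread.
    """
    return {
        # Virtual hosted-style: the bucket becomes part of the hostname.
        # A bucket with dots ("my.backup.bucket") yields a hostname that
        # the *.s3.amazonaws.com wildcard certificate does not cover.
        "virtual-hosted": "https://%s.%s/%s" % (bucket, endpoint, key),
        # Path-style: the hostname stays fixed, so the wildcard certificate
        # always matches; the bucket is the first path segment instead.
        "path": "https://%s/%s/%s" % (endpoint, bucket, key),
    }

urls = s3_request_urls("my.backup.bucket", "photo.jpg")
```

With a dotted bucket name, only the path-style URL keeps a hostname that the `*.s3.amazonaws.com` wildcard certificate can match.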

@dkocher commented

We do not support this, because using path-style URIs will lead to redirects to the region-specific endpoint, such as s3-eu-west-1.amazonaws.com.

@dkocher commented

In c65f104. Set the hidden setting s3.bucket.virtualhost.disable to true.

defaults write ch.sudo.cyberduck s3.bucket.virtualhost.disable true

@dkocher commented

This is error-prone because S3 sometimes sends permanent redirects without a Location header for buckets outside the default us-east-1 region.

Example of an HTTP/1.1 301 Moved Permanently response with no Location header:
GET /test.cyberduck.ch/?delimiter=%2F&max-keys=1000&prefix HTTP/1.1
Date: Thu, 21 Aug 2014 08:20:20 GMT
Host: s3.amazonaws.com:443
Connection: Keep-Alive
User-Agent: Cyberduck/0 (Mac OS X/10.9.4) (x86_64)

HTTP/1.1 301 Moved Permanently
x-amz-request-id: 36155ABAF41284F6
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Thu, 21 Aug 2014 08:20:19 GMT
Server: AmazonS3
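The failure mode shown in the exchange above can be sketched as a small redirect handler (a hypothetical helper, not Cyberduck code): a client that relies on the Location header has nowhere to go when S3 omits it.

```python
def follow_permanent_redirect(status, headers):
    """Return the Location to retry against, or raise when it is missing.

    Hypothetical illustration of the behaviour described above;
    returns None for non-redirect responses.
    """
    if status != 301:
        return None
    location = headers.get("Location")
    if location is None:
        # A path-style request against a bucket outside us-east-1 can get
        # exactly this: 301 Moved Permanently with no Location to follow.
        raise ValueError("Received redirect response HTTP/1.1 301 "
                         "Moved Permanently but no location header.")
    return location
```

When the header is present the client can retry against the region-specific endpoint; when it is absent, as in the capture above, the connection attempt can only fail.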

@dkocher commented

Changing the setting is discouraged and will break access to buckets outside the us-east-1 region. You will receive the error "Received redirect response HTTP/1.1 301 Moved Permanently but no location header."

6bc7c75 commented

I would like to suggest that this be re-opened. The 301 Redirect issue mentioned above is a strange but valid behavior considering that regions in S3 are essentially like completely disparate services and a bucket in us-east-1 has no relation to (or knowledge of) a bucket in us-west-1.

In other words, the above 301 should probably just throw an "object not found" error, and the "path-style" API request method should be the default, with the understanding that the user must specify the correct AWS region in the connection profile.

The current behavior of just expecting that S3 users with dots in their bucket names (which is a totally normal and supported way to use S3) are just going to have to put up with certificate errors all of the time is, frankly, irresponsible programming. It contributes to desensitization towards certificate errors, which is already a pernicious problem in our industry.

If you really don't want to make this behavior the default, could we at least get a "path style" checkbox on the connection profile screen?

@dkocher commented

Replying to [comment:6 eric herot]:

The current behavior of just expecting that S3 users with dots in their bucket names (which is a totally normal and supported way to use S3) are just going to have to put up with certificate errors all of the time is, frankly, irresponsible programming. It contributes to desensitization towards certificate errors, which is already a pernicious problem in our industry.

I agree this is not optimal. We should possibly implement a lax hostname verifier that allows more than one level for wildcard certificates. However please note that you will have the same issues when accessing objects in a bucket with dots in its name with a web browser.
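The lax hostname verifier suggested here could relax the single-label wildcard rule of standard TLS verification. A minimal sketch (a hypothetical function, not the verifier Cyberduck actually ships):

```python
def lax_wildcard_match(pattern, hostname):
    """Match a certificate wildcard, letting '*' span multiple DNS labels.

    Standard verification allows '*' to match only ONE label, so
    '*.s3.amazonaws.com' does NOT cover 'my.bucket.s3.amazonaws.com'.
    This lax variant accepts any hostname ending in the wildcard's suffix.
    Hypothetical illustration only.
    """
    if not pattern.startswith("*."):
        # No wildcard: require an exact (case-insensitive) match.
        return pattern.lower() == hostname.lower()
    suffix = pattern[1:].lower()   # e.g. ".s3.amazonaws.com"
    host = hostname.lower()
    # Accept one or more labels in place of '*', but not the bare suffix.
    return host.endswith(suffix) and len(host) > len(suffix)
```

Under this relaxed rule a dotted bucket name such as `my.bucket.s3.amazonaws.com` is accepted by the `*.s3.amazonaws.com` certificate, at the cost of weaker verification than the standard single-label rule.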

@dkocher commented

Replying to [comment:6 eric herot]:

In other words, the above 301 should probably just throw an "object not found" error, and the "path-style" API request method should be the default, with the understanding that the user must specify the correct AWS region in the connection profile.

We don't want this, as it would make the connection attempt cumbersome and error-prone for the average user. You would need a different connection profile for every bucket you want to access.

6bc7c75 commented

My point is that buckets in different regions should require different connection profiles to access them. There are not very many S3 regions in the world so I doubt this will introduce much hardship to most users.

This is especially important because it is totally legal to have two buckets with the same name in different regions.

@dkocher commented

Replying to [comment:10 eric herot]:

This is especially important because it is totally legal to have two buckets with the same name in different regions.

I can't find a reference, but to my knowledge bucket names are unique across all regions. When you attempt to create a bucket with the same name in another region you will get a 409 Conflict response with "Your previous request to create the named bucket succeeded and you already own it."

@dkocher commented

For further reference. We do support region specific connection profiles for Swift as shown in the CloudFiles profiles with a location constraint.

@dkocher commented

Replying to [comment:10 eric herot]:

This is especially important because it is totally legal to have two buckets with the same name in different regions.

Please also note that having files or containers with the same name is supported elsewhere, as shown with versioning or OpenStack Swift regions.

@dkocher commented

Replying to [comment:10 eric herot]:

My point is that buckets in different regions should require different connection profiles to access them.

Can you please open a new ticket with this enhancement request to support a Region qualifier in S3 profiles?

@dkocher commented

Added tests showing the behaviour of path-style requests in 3dae882.

6bc7c75 commented

Filed [8322]

@dkocher commented

See #3813. In 18562. Default to lax hostname verification.

@iterate-ch iterate-ch locked as resolved and limited conversation to collaborators Nov 26, 2021