Preference to use path-style request URIs #7967
We do not support this, because using path-style URIs will lead to redirects to the region-specific endpoint, such as …
In c65f104. Set the hidden setting …
This is error prone because S3 sometimes sends permanent redirects without a …
Changing the setting is discouraged and will break access to buckets not in region …
I would like to suggest that this be re-opened. The 301 redirect issue mentioned above is strange but valid behavior, considering that S3 regions are essentially disparate services: a bucket in us-east-1 has no relation to (or knowledge of) a bucket in us-west-1. In other words, that 301 should probably just surface as an "object not found" error, and the path-style API request method should be the default, with the understanding that the user must specify the correct AWS region in the connection profile.

The current behavior of simply expecting that S3 users with dots in their bucket names (which is a totally normal and supported way to use S3) will put up with certificate errors all of the time is, frankly, irresponsible. It contributes to desensitization towards certificate errors, which is already a pernicious problem in our industry. If you really don't want to make this behavior the default, could we at least get a "path style" checkbox on the connection profile screen?
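To make the certificate complaint above concrete: Amazon's wildcard certificate `*.s3.amazonaws.com` matches exactly one DNS label, so a bucket name containing dots adds extra labels and fails hostname verification. A minimal Python sketch of the standard matching rule (my own illustration, not Cyberduck's code):

```python
# Sketch of RFC 6125-style wildcard matching: '*' covers a single
# leftmost DNS label only, which is why dotted bucket names fail.

def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Return True if hostname matches a single-label leftmost wildcard."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # '*' may not span multiple labels
    if p_labels[0] != "*":
        return p_labels == h_labels
    return p_labels[1:] == h_labels[1:]

# A bucket without dots verifies fine:
print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))   # True
# A bucket with dots adds labels, so verification fails:
print(wildcard_matches("*.s3.amazonaws.com", "my.bucket.s3.amazonaws.com"))  # False
```

This is the same reason a browser shows a certificate warning for `https://my.bucket.s3.amazonaws.com/`.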
Replying to [comment:6 eric herot]:
I agree this is not optimal. We should possibly implement a lax hostname verifier that allows more than one label under a wildcard certificate. However, please note that you will hit the same issue when accessing objects in a bucket with dots in its name from a web browser.
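A rough sketch of such a lax verifier (function name and scope are mine, not Cyberduck's implementation; a real version would have to restrict this relaxation to known endpoints, since it deliberately weakens RFC 6125 matching):

```python
# Lax wildcard matcher: allow '*' to span multiple leading labels.
# SECURITY NOTE: only safe if limited to trusted suffixes such as
# .s3.amazonaws.com; applied generally it defeats hostname pinning.

def lax_wildcard_matches(pattern: str, hostname: str) -> bool:
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[1:].lower()   # e.g. ".s3.amazonaws.com"
    host = hostname.lower()
    # Accept any number of leading labels before the suffix.
    return host.endswith(suffix) and len(host) > len(suffix)

print(lax_wildcard_matches("*.s3.amazonaws.com", "my.bucket.s3.amazonaws.com"))  # True
```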
Replying to [comment:6 eric herot]:
We don't want this, as it will make the connection attempt cumbersome and error prone for the average user. You would need a different connection profile for every bucket you want to access.
My point is that buckets in different regions should require different connection profiles to access them. There are not very many S3 regions in the world, so I doubt this will introduce much hardship for most users. This is especially important because it is totally legal to have two buckets with the same name in different regions.
Replying to [comment:10 eric herot]:
I can't find a reference, but to my knowledge bucket names are unique across all regions. When you attempt to create a bucket with the same name in another region, you will get …
For further reference: we do support region-specific connection profiles for Swift, as shown in the CloudFiles profiles with a location constraint.
Replying to [comment:10 eric herot]:
Please also note that having files or containers with the same name is supported, as shown with versioning or OpenStack Swift regions.
Filed [8322]
I've run into the same S3 SSL certificate issue that has been reported before and is documented in the wiki:
https://trac.cyberduck.io/wiki/help/en/howto/s3#SSLcertificatetrustverification
It is common for users to put dots in the bucket name to take advantage of S3's CNAME support.
A solution to this problem would be for Cyberduck to use path-style requests instead of virtual hosted-style requests. This page shows the difference: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html
That page also says the virtual hosted-style is recommended, which is worrying. However, this page goes into much more detail, and there is no indication that path-style is legacy:
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
After reading these pages it seems very reasonable to use path-style requests and thereby avoid the SSL issue when there are dots in the bucket name. I thought I could make that happen by creating a connection with "s3.amazonaws.com" as the hostname and the bucket name as the path. This does make a connection, but it still has the SSL error, so it looks like the request is internally converted to the virtual hosted-style "<bucket-name>.s3.amazonaws.com".
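For reference, the two request styles from the AWS pages above differ only in where the bucket name goes; only the virtual hosted-style puts it into the TLS hostname. The helper names here are illustrative, not an AWS SDK API:

```python
# The two S3 addressing styles compared for a dotted bucket name.

def virtual_hosted_url(bucket: str, key: str) -> str:
    # Bucket becomes part of the TLS hostname; dots break the
    # single-label wildcard certificate *.s3.amazonaws.com.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def path_style_url(bucket: str, key: str) -> str:
    # TLS hostname stays s3.amazonaws.com; bucket moves into the path,
    # so the certificate always matches.
    return f"https://s3.amazonaws.com/{bucket}/{key}"

print(virtual_hosted_url("my.bucket", "photo.jpg"))  # https://my.bucket.s3.amazonaws.com/photo.jpg
print(path_style_url("my.bucket", "photo.jpg"))      # https://s3.amazonaws.com/my.bucket/photo.jpg
```

The trade-off noted earlier in the thread still applies: path-style requests must be sent to the bucket's own regional endpoint, or S3 answers with a redirect.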