
#10582 closed defect (worksforme)

Retrieve default access key from standard AWS SDK credentials for opened links

Reported by: ehbbt
Owned by: dkocher
Priority: normal
Milestone: 7.0
Component: s3
Version: 6.9.0
Severity: normal
Keywords:
Cc:
Architecture:
Platform: macOS 10.14

Description

We are looking for an app that will let our users click on s3:// URLs (like s3://bucket/path/file.txt) and download the file the URL points to. Cyberduck is almost there; the problem we're running into is that when the user clicks on an s3:// URL, Cyberduck opens a sheet requesting the user's access and secret keys, even after those keys have been saved in the user's Keychain and in their ~/.aws/credentials file. It would be much better if no additional user interaction were required and Cyberduck could pull the access and secret keys from the credentials file or from the Keychain. Best of all would be reading them from the credentials file, with the profile to use set in the preferences.

Steps to reproduce:

1. Set up the user's account so that they can use "aws s3 cp s3://PATH/TO/FILE /PATH/TO/LOCAL_DIR" to copy files from S3 to local. Nominally this means setting up ~/.aws/config and ~/.aws/credentials properly.
2. Run Cyberduck and have it save the user's AWS access and secret keys in the Keychain.
3. Set up a clickable s3:// URL (like s3://PATH/TO/FILE).
4. Click on the s3:// URL.
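For reference, a minimal ~/.aws/credentials file of the kind the steps above assume might look like this (placeholder values, not real keys):

```ini
# ~/.aws/credentials — placeholder values for illustration only
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecretkey

# An optional named profile, selectable with "aws --profile myteam ..."
[myteam]
aws_access_key_id = AKIAEXAMPLEKEY2
aws_secret_access_key = examplesecretkey2
```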

Expected Results:

CyberDuck downloads the file the s3:// URL points to with no user interaction required.

Actual Results:

CyberDuck opens a sheet on the transfers window requesting the user's access and secret keys.

Change History (13)

comment:1 Changed on Jan 23, 2019 at 12:17:42 PM by dkocher

  • Platform set to macOS 10.14
  • Summary changed from macOS: clicking on an s3://PATH/TO/FILE URL requires user to re-enter access keys every time even if access keys have been saved in ~/.aws/credentials and the Keychain to Clicking on an s3://PATH/TO/FILE URL requires user to re-enter access keys every time even if access keys have been saved in ~/.aws/credentials and the Keychain

comment:2 Changed on Jan 31, 2019 at 9:29:20 PM by dkocher

  • Milestone set to 6.9.3

comment:3 follow-up: Changed on Feb 3, 2019 at 12:44:56 PM by dkocher

  • Owner set to dkocher
  • Status changed from new to assigned

This should work if you include the access key in the URI such as s3://ACCESSKEY@container/PATH/TO/FILE.

comment:4 in reply to: ↑ 3 Changed on Feb 4, 2019 at 6:31:01 PM by ehbbt

Replying to dkocher:

This should work if you include the access key in the URI such as s3://ACCESSKEY@container/PATH/TO/FILE.

Good to know, though that will not work for our use case. We are looking to send out or post s3:// URLs for our internal users so they can get access to files. We will not know their access keys, nor can we send a single email to multiple people and have this work.

Please let me know if there's anything else we can do to help out.

comment:5 Changed on Feb 5, 2019 at 9:27:59 PM by dkocher

  • Summary changed from Clicking on an s3://PATH/TO/FILE URL requires user to re-enter access keys every time even if access keys have been saved in ~/.aws/credentials and the Keychain to Retrieve default access key from standard AWS SDK credentials for opened links

comment:6 Changed on Feb 5, 2019 at 9:35:04 PM by dkocher

We will have a fix to obtain the AWS access key from the default profile in ~/.aws/credentials.
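The credential-file lookup described here can be sketched with Python's stdlib configparser. This is an illustration of the AWS credentials-file format only, not Cyberduck's actual (Java) implementation; the function name and signature are hypothetical:

```python
import configparser
import os


def default_access_key(path="~/.aws/credentials", profile="default"):
    """Return (access_key, secret_key) for the given profile, or None.

    The AWS credentials file is INI-formatted, with one section per
    profile; the [default] section is used when no profile is named.
    """
    config = configparser.ConfigParser()
    config.read(os.path.expanduser(path))
    if profile not in config:
        return None
    section = config[profile]
    return (section.get("aws_access_key_id"),
            section.get("aws_secret_access_key"))
```

A lookup like this returns None when the requested profile is absent, which would let the caller fall back to prompting the user as before.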

comment:7 Changed on Feb 5, 2019 at 9:42:18 PM by ehbbt

Sounds good. It'd be really great if we could define a profile to use (rather than the default), but I do understand that's a more complex thing to do.

comment:8 Changed on Feb 6, 2019 at 6:27:35 AM by yla

  • Resolution set to fixed
  • Status changed from assigned to closed

Fixed in r46292.

comment:9 Changed on Mar 20, 2019 at 1:19:35 AM by ehbbt

  • Resolution fixed deleted
  • Status changed from closed to reopened

I tried this with Cyberduck 6.9.4 and it is not working for me; I'm still being prompted for the access key and secret when I click on a valid s3:// URL. I've tried downloading the URL with the "aws s3" command and it works, and I verified that the "[default]" section of ~/.aws/credentials is valid and works with the command-line "aws s3" tool. Perhaps I'm missing something or have something set incorrectly, but if so I don't know what it would be. If there's anything to check on my end, please let me know.

comment:10 Changed on Mar 21, 2019 at 9:17:38 AM by dkocher

  • Milestone changed from 6.9.3 to 6.9.5

comment:11 Changed on Apr 16, 2019 at 1:55:31 PM by dkocher

  • Resolution set to worksforme
  • Status changed from reopened to closed

Please use the format s3:/bucketname/ for the URI to refer to a bucket name and make use of the default hostname s3.amazonaws.com configured for S3. This then allows the lookup of the default credentials to work. The URI format is documented here.
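The distinction between the two URI forms can be seen with Python's stdlib urllib.parse (an illustration of generic URI parsing, not of Cyberduck's own parser): with "s3://", the first path segment lands in the authority (host) slot, whereas "s3:/" leaves the authority empty, so a default hostname and the default credentials can apply:

```python
from urllib.parse import urlparse

double = urlparse("s3://mybucket/path/to/file.txt")
single = urlparse("s3:/mybucket/path/to/file.txt")

# With "s3://", the bucket is parsed as the authority (host) component.
print(double.netloc, double.path)   # mybucket /path/to/file.txt

# With "s3:/", the authority is empty and the bucket stays in the path.
print(repr(single.netloc), single.path)   # '' /mybucket/path/to/file.txt
```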

comment:12 Changed on Apr 16, 2019 at 6:03:48 PM by ehbbt

Using that format does work for me, yay!

Unfortunately it does not align with what the AWS command-line client (https://aws.amazon.com/cli/) uses, and we also use that. Specifically, we send out an "aws s3 ..." command line, and the s3://... portion is generally rendered as a clickable URI, so people can either copy the command line or click the URI to do the download.

It'd be nice if CyberDuck and "aws s3" aligned, but that would mean that S3 URI handling in CyberDuck would be different from all the other URIs that you support (and that doesn't seem like a good idea right off).

I'll see what I can do here to make something that can tweak the URI and pass it along, that might work for us.

comment:13 Changed on Jun 4, 2019 at 2:20:54 PM by dkocher

  • Milestone changed from 6.9.5 to 7.0

Ticket retargeted after milestone deleted
