I have several S3 buckets that each contain tens of thousands of data objects. These data objects are named using the full path and filename of the original file (they are backups), for example dir1/dir2/dir3/somefile.txt
When Cyberduck opens the bucket it begins to download the file list and becomes unresponsive during this time. The refresh cannot be stopped using the stop button, and the app must be force-quit.
Is it possible to prevent this by first checking the number of files in a bucket and warning the user before attempting to fetch the list? Alternatively, is it possible to download the file list in stages (i.e., 1,000 at a time)? The last possible solution I can think of is to form a request that fetches only the first (pseudo) subdirectory, although from my basic understanding of the S3 API this is not possible.
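For what it's worth, the S3 list-objects API does support both ideas: a max-keys parameter caps each response (with a marker/continuation token for the next page), and a delimiter parameter (typically "/") makes the service return only the objects and "common prefixes" at one pseudo-directory level instead of every key in the bucket. The snippet below is a hedged sketch, not Cyberduck's actual code: it simulates the delimiter behavior in plain Python over a list of key names, just to illustrate how one level of dir1/dir2/... can be listed without fetching everything.

```python
def list_level(keys, prefix="", delimiter="/"):
    """Simulate an S3 delimiter listing: given flat object keys, return
    (objects, common_prefixes) for a single pseudo-directory level.

    Mirrors what GET Bucket (List Objects) returns when 'prefix' and
    'delimiter' are set: keys with no further delimiter are objects at
    this level; everything deeper is rolled up into a common prefix.
    """
    objects = []
    common_prefixes = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Roll deeper keys up into a single pseudo-subdirectory entry.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return objects, sorted(common_prefixes)


keys = [
    "dir1/dir2/dir3/somefile.txt",
    "dir1/other.txt",
    "readme.txt",
]
objs, prefixes = list_level(keys)
# Only the top level is materialized: one object, one pseudo-subdir.
print(objs)      # ['readme.txt']
print(prefixes)  # ['dir1/']
```

A client could then fetch each common prefix lazily when the user expands that folder, combining this with max-keys paging so no single request blocks on tens of thousands of entries.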
Thanks for making Cyberduck!