Opened 5 years ago

Closed 4 years ago

#7621 closed defect (fixed)

Uploads fail with broken pipe error

Reported by: ivanhassan Owned by: dkocher
Priority: normal Milestone: 4.5
Component: s3 Version: 4.4
Severity: normal Keywords: broken pipe
Cc: Architecture: Intel
Platform: Mac OS X 10.6

Description (last modified by ivanhassan)

Hi

Longtime user, just upgraded to v4.4 for Amazon S3 support. When I attempt to transfer a 1 GB file to Amazon S3 in Sydney (I'm in NZ), I keep getting broken pipe error messages. Please see the screen grab. I have also enabled debug logging.

My other FTP client (3Hub) works OK.

Also, sometimes after clicking 'Continue' in the transfer window, the job does not resume. I have the number of transfers set to 1; increasing it to 2 sets off the next job. I have highlighted the transfer and pressed 'Resume'.

(Also, I have seen impossible upload speeds reported, e.g. 700 Mbps when my link is 1 Mbps.)

My system is a Mac Pro running 10.6.8, connected by DSL at 12 Mbps down and 1 Mbps up. I also have 3Hub and ExpanDrive installed.

Any help much appreciated.

Ivan

Attachments (4)

Screen shot 2013-11-19 at 09.05.49.png (27.9 KB) - added by ivanhassan 5 years ago.
Connection failed : javax.net.ssl.SSLException
Screen shot 2013-11-21 at 09.51.04.png (77.5 KB) - added by ivanhassan 5 years ago.
Screen shot 2013-11-21 at 10.10.33.png (32.9 KB) - added by ivanhassan 5 years ago.
New error. Clicking CONTINUE makes it carry on for a short while.
Screen shot 2013-11-22 at 12.45.55.png (48.8 KB) - added by ivanhassan 5 years ago.
Transfer OK but hung on the second job.


Change History (27)

comment:1 Changed 5 years ago by ivanhassan

  • Description modified (diff)
  • Owner set to ivanhassan

Changed 5 years ago by ivanhassan

Connection failed : javax.net.ssl.SSLException

comment:2 Changed 5 years ago by dkocher

  • Component changed from core to s3
  • Milestone set to 4.4.1
  • Resolution set to fixed
  • Status changed from new to closed

Possibly fixed in r13590. Please update to the latest snapshot build available.

comment:3 Changed 5 years ago by dkocher

Reopened as problem still unresolved according to #7625.

comment:4 Changed 5 years ago by dkocher

Can you please try build 14077? It includes the optimizations of r14071 and a possible fix in r14072 for partial lengths written with the multipart upload feature.

comment:5 Changed 5 years ago by dkocher

  • Milestone changed from 4.4.1 to 4.4.2
  • Summary changed from "transfers to S3 (sydney) fail with repeated BROKEN PIPE" to "Uploads fail with broken pipe error"

Changed 5 years ago by ivanhassan

comment:6 Changed 5 years ago by ivanhassan

  • Resolution fixed deleted
  • Status changed from closed to reopened

Hi, as requested I downloaded the newest snapshot (14084) and attempted to upload my large files (250 MB to 1.2 GB in size) to S3.

I got the same error after approx. 85 MB of transfer (see screen grab).

Do you want me to turn on any logging/debugging?

Ivan

Changed 5 years ago by ivanhassan

New error. Clicking CONTINUE makes it carry on for a short while.

comment:7 Changed 5 years ago by dkocher

  • Owner changed from ivanhassan to dkocher
  • Status changed from reopened to new

At least for multipart uploads (triggered for uploads exceeding 100 MB), you can resume the transfer so that only the missing parts are uploaded.
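
For illustration only (not the actual Cyberduck implementation), a minimal sketch of how such a resume can work against the S3 multipart API, here using the AWS SDK for Java: list the parts already stored for the upload ID, then skip those part numbers when re-sending.

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.ListPartsRequest;
    import com.amazonaws.services.s3.model.PartListing;
    import com.amazonaws.services.s3.model.PartSummary;

    import java.util.HashSet;
    import java.util.Set;

    class ResumeHelper {
        // Collect the part numbers S3 already holds for this multipart upload,
        // so a resumed transfer only needs to send the parts that are missing.
        static Set<Integer> completedParts(AmazonS3 s3, String bucket, String key, String uploadId) {
            Set<Integer> done = new HashSet<>();
            ListPartsRequest request = new ListPartsRequest(bucket, key, uploadId);
            PartListing listing;
            do {
                listing = s3.listParts(request);
                for (PartSummary part : listing.getParts()) {
                    done.add(part.getPartNumber());
                }
                // Page through the listing until all stored parts have been seen.
                request.setPartNumberMarker(listing.getNextPartNumberMarker());
            } while (listing.isTruncated());
            return done;
        }
    }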

comment:8 Changed 5 years ago by dkocher

Usually logging information will not give much of a clue for broken pipe failures, which indicate a generic networking issue.

comment:9 Changed 5 years ago by dkocher

Is this specific to the SYD bucket or do uploads fail regardless of the location of the bucket?

comment:10 Changed 5 years ago by ivanhassan

Hi, I have done some testing on different regional buckets (US Standard, Oregon, Ireland) and they all fail from the 10.6 workstation. I have connected via wire and wireless to rule out local networking issues. Now trying on my old Win7 laptop with the latest snapshot to see if it's platform specific.

I am also testing on a 10.7 laptop to see if it's anything to do with the Java library differences between 10.6 and 10.7.

All machines are on different AWS keys, just in case they are hitting some kind of login limit (which I doubt).

Any other thoughts?

Last edited 5 years ago by ivanhassan (previous) (diff)

comment:11 follow-up: Changed 5 years ago by ivanhassan

Just finished testing. I get the same results from 10.7 and Win7 (though much less frequently). That makes me think network, however testing with CrossFTP, 3Hub and Forklift shows no issues.

Hence it looks like a Cyberduck issue?

Any thoughts?

PS: I tried pressing CONTINUE when the error happens, but that's every few minutes and the whole transfer is scheduled to take hours...

comment:12 in reply to: ↑ 11 Changed 5 years ago by dkocher

Replying to ivanhassan:

Just finished testing. I get the same results from 10.7 and Win7 (though much less frequently). That makes me think network, however testing with CrossFTP, 3Hub and Forklift shows no issues.

Hence it looks like a Cyberduck issue?

Yes, that makes it hard to argue it is not an issue on our end.

comment:13 Changed 5 years ago by dkocher

Can you try setting the maximum number of transfers to 1 in the lower right corner of the Transfers window? As of build r14094 this will also limit the number of concurrent connections for multipart uploads to 1.

Last edited 5 years ago by dkocher (previous) (diff)
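
To illustrate what that limit means in practice, a minimal sketch of dispatching the part uploads through a bounded thread pool (illustration only, not the actual Cyberduck code; the list of Callables stands in for whatever produces the per-part upload tasks):

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class PartUploadRunner {
        // Bound the number of parts uploaded in parallel by the pool size.
        // With maxConnections = 1 the parts of a multipart upload are sent
        // sequentially over a single connection instead of several competing ones.
        static void uploadParts(List<Callable<String>> partUploads, int maxConnections)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(maxConnections);
            try {
                pool.invokeAll(partUploads); // blocks until every part has finished
            } finally {
                pool.shutdown();
            }
        }
    }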

Changed 5 years ago by ivanhassan

Transfer OK but hung on the second job.

comment:14 follow-up: Changed 5 years ago by ivanhassan

Hi, with r14094 the file transferred OK, however the next job did not start; it got stuck with 'Maximum connection exceeded'.

I assume this looks like the multipart upload issue, which would explain the S3 disconnection (especially if they interpreted the multipart upload as a DoS?).

Shall I try with 2 uploads?

Many thanks again

Ivan

comment:15 Changed 5 years ago by dkocher

  • Resolution set to fixed
  • Status changed from new to closed

comment:16 in reply to: ↑ 14 Changed 5 years ago by dkocher

Replying to ivanhassan:

Hi, with r14094 the file transferred OK, however the next job did not start; it got stuck with 'Maximum connection exceeded'.

I assume this looks like the multipart upload issue, which would explain the S3 disconnection (especially if they interpreted the multipart upload as a DoS?).

Shall I try with 2 uploads?

Many thanks again

Ivan

With a slow connection and high latency we were just congesting the line with 5 open connections per multipart transfer. With the change there is no parallelism for multipart uploads.

The 'Maximum connection exceeded' status is displayed on our end for transfers waiting for a slot.

Last edited 5 years ago by dkocher (previous) (diff)

comment:17 follow-up: Changed 5 years ago by ivanhassan

So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)?

Latency appears to be OK (33 ms).

Could we build in an auto-checker that reduces the number of connections when the line looks too congested?

Just a thought...

Ivan

comment:19 in reply to: ↑ 17 Changed 5 years ago by dkocher

Replying to ivanhassan:

So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)?

Latency appears to be OK (33 ms).

Could we build in an auto-checker that reduces the number of connections when the line looks too congested?

Yes, we plan to adjust the number of connections dynamically based on failures and throughput.
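
Purely as an illustration of that idea, not a committed design: a tiny controller that drops back to a single connection after a failure and only grows again while throughput keeps improving might look roughly like this.

    class AdaptiveConnectionCount {
        private int connections = 2;    // start conservatively
        private final int max = 5;      // never exceed the current per-transfer limit

        // Called after each completed or failed chunk: back off immediately on a
        // failure, creep back up only while measured throughput keeps improving.
        synchronized int adjust(boolean failed, double throughput, double previousThroughput) {
            if (failed) {
                connections = 1;
            } else if (throughput > previousThroughput && connections < max) {
                connections++;
            }
            return connections;
        }
    }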

comment:20 Changed 4 years ago by dkocher

#7625 closed as duplicate.

comment:21 Changed 4 years ago by dkocher

#6587 closed as duplicate.

comment:22 Changed 4 years ago by dkocher

  • Milestone changed from 4.4.2 to 4.5
  • Resolution fixed deleted
  • Status changed from closed to reopened

comment:23 Changed 4 years ago by dkocher

#7424, #6801, #6587 closed as duplicates.

comment:24 Changed 4 years ago by dkocher

  • Resolution set to fixed
  • Status changed from reopened to closed

Fixed in r14903 by adding an Expect: 100-continue header to PUT requests, which resolves broken pipe failures for uploads to buckets where a 307 Temporary Redirect is returned.
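
For reference, a minimal sketch of enabling the Expect: 100-continue handshake on a PUT with Apache HttpClient (the URL and file are placeholders; this only illustrates the mechanism, not the exact change in r14903). With the handshake enabled the client waits for the server's interim response before streaming the body, so a 307 Temporary Redirect is seen up front instead of the connection being dropped mid-upload.

    import org.apache.http.client.config.RequestConfig;
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpPut;
    import org.apache.http.entity.ContentType;
    import org.apache.http.entity.FileEntity;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    import java.io.File;
    import java.io.IOException;

    class ExpectContinuePut {
        static int put(File file, String url) throws IOException {
            // Ask the server to acknowledge the request headers before the body is sent.
            RequestConfig config = RequestConfig.custom()
                    .setExpectContinueEnabled(true)
                    .build();
            try (CloseableHttpClient client = HttpClients.custom()
                    .setDefaultRequestConfig(config)
                    .build()) {
                HttpPut put = new HttpPut(url);
                put.setEntity(new FileEntity(file, ContentType.APPLICATION_OCTET_STREAM));
                try (CloseableHttpResponse response = client.execute(put)) {
                    // Any redirect or error status now arrives before the body is streamed.
                    return response.getStatusLine().getStatusCode();
                }
            }
        }
    }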
