
#7621 closed defect (fixed)

Uploads fail with broken pipe error

Reported by: ivanhassan
Owned by: dkocher
Priority: normal
Milestone: 4.5
Component: s3
Version: 4.4
Severity: normal
Keywords: broken pipe
Cc:
Architecture: Intel
Platform: Mac OS X 10.6

Description (last modified by ivanhassan)

Hi,

I'm a longtime user and just upgraded to v4.4 for Amazon S3 support. However, when I attempt to transfer a 1 GB file to Amazon S3 in Sydney (I'm in NZ), I keep getting broken pipe error messages. Please see the screen grab. I have also enabled debug logging.

My other client (3Hub) works OK.

Also, sometimes after clicking 'Continue' in the transfer window, the job does not resume. I have the number of transfers set to 1; increasing it to 2 sets off the next job. I have also tried highlighting the transfer and pressing 'Resume'.

(Also, I have seen impossible upload speeds, e.g. 700 Mbps when my link is 1 Mbps.)

My system is a Mac Pro running 10.6.8, connected by DSL at 12 Mbps down and 1 Mbps up. I also have 3Hub and ExpanDrive installed.

Any help much appreciated.

Ivan

Attachments (4)

Screen shot 2013-11-19 at 09.05.49.png (27.9 KB) - added by ivanhassan on Nov 18, 2013 at 8:06:54 PM.
Connection failed: javax.net.ssl.SSLException
Screen shot 2013-11-21 at 09.51.04.png (77.5 KB) - added by ivanhassan on Nov 20, 2013 at 8:54:10 PM.
Screen shot 2013-11-21 at 10.10.33.png (32.9 KB) - added by ivanhassan on Nov 20, 2013 at 9:12:23 PM.
New error. Clicking CONTINUE lets it carry on for a short while.
Screen shot 2013-11-22 at 12.45.55.png (48.8 KB) - added by ivanhassan on Nov 21, 2013 at 11:54:48 PM.
Transfer OK, but it hung on the second job.


Change History (27)

comment:1 Changed on Nov 18, 2013 at 7:54:49 PM by ivanhassan

  • Description modified (diff)
  • Owner set to ivanhassan

Changed on Nov 18, 2013 at 8:06:54 PM by ivanhassan

Connection failed: javax.net.ssl.SSLException

comment:2 Changed on Nov 19, 2013 at 2:43:00 PM by dkocher

  • Component changed from core to s3
  • Milestone set to 4.4.1
  • Resolution set to fixed
  • Status changed from new to closed

Possibly fixed in r13590. Please update to the latest snapshot build available.

comment:3 Changed on Nov 20, 2013 at 12:03:57 PM by dkocher

Reopened as problem still unresolved according to #7625.

comment:4 Changed on Nov 20, 2013 at 3:21:33 PM by dkocher

Can you please try build 14077? It includes optimizations from r14071 and a possible fix in r14072 for partial lengths written with the multipart upload feature.

comment:5 Changed on Nov 20, 2013 at 3:21:50 PM by dkocher

  • Milestone changed from 4.4.1 to 4.4.2
  • Summary changed from transfers to S3 (sydney) fail with repeated BROKEN PIPE to Uploads fail with broken pipe error

comment:6 Changed on Nov 20, 2013 at 8:55:15 PM by ivanhassan

  • Resolution fixed deleted
  • Status changed from closed to reopened

Hi, as requested I downloaded the newest snapshot (14084) and attempted to upload my large files (250 MB to 1.2 GB in size) to S3.

I got the same error after approx. 85 MB of transfer (see screen grab).

Do you want me to turn on any logging/debugging?

Ivan

Changed on Nov 20, 2013 at 9:12:23 PM by ivanhassan

New error. Clicking CONTINUE lets it carry on for a short while.

comment:7 Changed on Nov 20, 2013 at 9:14:56 PM by dkocher

  • Owner changed from ivanhassan to dkocher
  • Status changed from reopened to new

At least for multipart uploads (triggered for uploads exceeding 100 MB), you can resume the transfer to upload only the missing parts.
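
Mechanically this works because S3 keeps every part it has already received for an unfinished multipart upload, so a client can list those parts and send only what is missing. Below is a minimal sketch of that flow, illustrated with the AWS SDK for Java rather than Cyberduck's actual implementation; the bucket, key and uploadId are assumed to be known from the interrupted transfer.

{{{#!java
// Hypothetical sketch (AWS SDK for Java v1), not Cyberduck's actual code:
// list the parts S3 already holds for an unfinished multipart upload,
// upload only the missing ones, then complete the upload.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.util.*;

public class MultipartResume {
    private static final long PART_SIZE = 10L * 1024 * 1024; // assumed 10 MB parts

    public static void resume(AmazonS3 s3, String bucket, String key,
                              String uploadId, File file) {
        // Ask S3 which part numbers it has already received
        // (paging beyond 1000 parts omitted for brevity).
        PartListing listing = s3.listParts(new ListPartsRequest(bucket, key, uploadId));
        Map<Integer, PartETag> done = new HashMap<>();
        for (PartSummary p : listing.getParts()) {
            done.put(p.getPartNumber(), new PartETag(p.getPartNumber(), p.getETag()));
        }
        int totalParts = (int) ((file.length() + PART_SIZE - 1) / PART_SIZE);
        List<PartETag> etags = new ArrayList<>();
        for (int n = 1; n <= totalParts; n++) {
            if (done.containsKey(n)) {          // part already on the server: skip it
                etags.add(done.get(n));
                continue;
            }
            long offset = (n - 1) * PART_SIZE;
            UploadPartResult result = s3.uploadPart(new UploadPartRequest()
                    .withBucketName(bucket).withKey(key).withUploadId(uploadId)
                    .withPartNumber(n).withFile(file).withFileOffset(offset)
                    .withPartSize(Math.min(PART_SIZE, file.length() - offset)));
            etags.add(result.getPartETag());
        }
        s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
    }
}
}}}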

comment:8 Changed on Nov 20, 2013 at 9:15:43 PM by dkocher

Usually logging information will not give much of a clue about broken pipe failures, which indicate a generic networking issue.

comment:9 Changed on Nov 20, 2013 at 9:16:32 PM by dkocher

Is this specific to the SYD bucket, or do uploads fail regardless of the location of the bucket?

comment:10 Changed on Nov 20, 2013 at 9:41:23 PM by ivanhassan

Hi, I have done some testing on different regional buckets (US Standard, Oregon, Ireland) and they all fail from the 10.6 workstation. I have connected via wire and wireless to rule out local networking issues. I'm now trying my old Win7 laptop with the latest snapshot to see if it's platform-specific.

I'm also testing on a 10.7 laptop to see if it's anything to do with the Java library differences between 10.6 and 10.7.

All machines are on different AWS keys, just in case they are hitting some kind of login limit (which I doubt).

Any other thoughts?

Last edited on Nov 20, 2013 at 9:42:16 PM by ivanhassan (previous) (diff)

comment:11 follow-up: Changed on Nov 20, 2013 at 10:46:50 PM by ivanhassan

Just finished testing. I get the same results from 10.7 and Win7 (much less frequently, though). That makes me think network, but testing with CrossFTP, 3Hub and ForkLift shows no issues.

Hence it looks like a Cyberduck issue?

Any thoughts?

PS: I have tried pressing CONTINUE when the error happens, but that's like every few minutes, and the whole transfer is scheduled to take hours...

comment:12 in reply to: ↑ 11 Changed on Nov 21, 2013 at 9:38:01 AM by dkocher

Replying to ivanhassan:

Just finished testing. I get the same results from 10.7 and Win7 (much less frequently, though). That makes me think network, but testing with CrossFTP, 3Hub and ForkLift shows no issues.

Hence it looks like a Cyberduck issue?

Yes, that makes it hard to argue it is not an issue on our end.

comment:13 Changed on Nov 21, 2013 at 1:18:24 PM by dkocher

Can you try setting the maximum number of transfers to 1 in the lower right corner of the Transfers window? As of build r14094, this will limit the number of concurrent connections for multipart uploads to 1 as well.

Last edited on Nov 21, 2013 at 1:27:30 PM by dkocher (previous) (diff)
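
To illustrate what that change does, here is a minimal sketch, assuming a plain java.util.concurrent executor (hypothetical names, not Cyberduck's actual transfer code): a pool of size 1 serializes the part uploads, so at most one connection to S3 is open at a time.

{{{#!java
// Hypothetical sketch, not Cyberduck's actual transfer code: a fixed pool
// of size 1 runs the queued part uploads strictly one after another, so at
// most one connection to S3 is open at any time.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerialPartUploader {
    public static void uploadParts(List<Runnable> partUploads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1); // "transfers: 1"
        partUploads.forEach(pool::submit);
        pool.shutdown();                        // accept no new work; drain the queue
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
}}}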

Changed on Nov 21, 2013 at 11:54:48 PM by ivanhassan

Transfer OK, but it hung on the second job.

comment:14 follow-up: Changed on Nov 21, 2013 at 11:59:25 PM by ivanhassan

Hi, with r14094 the file transferred OK; however, the next job did not start and got stuck with 'Maximum connection exceeded'.

I assume this looks like the multipart upload issue, which would explain the S3 disconnections (especially if they interpreted the multipart upload as a DoS?).

Shall I try with 2 uploads?

Many thanks again

Ivan

comment:15 Changed on Nov 22, 2013 at 10:17:50 AM by dkocher

  • Resolution set to fixed
  • Status changed from new to closed

comment:16 in reply to: ↑ 14 Changed on Nov 22, 2013 at 10:20:03 AM by dkocher

Replying to ivanhassan:

Hi, with r14094 the file transferred OK; however, the next job did not start and got stuck with 'Maximum connection exceeded'.

I assume this looks like the multipart upload issue, which would explain the S3 disconnections (especially if they interpreted the multipart upload as a DoS?).

Shall I try with 2 uploads?

Many thanks again

Ivan

With a slow connection and bad latency, we were just congesting the line with 5 open connections per multipart transfer. With the change, there is no parallelism for multipart uploads.

The 'Maximum connection exceeded' message is displayed on our part for transfers waiting for a slot.

Last edited on Nov 22, 2013 at 10:20:28 AM by dkocher (previous) (diff)

comment:17 follow-up: Changed on Nov 22, 2013 at 7:01:59 PM by ivanhassan

So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)?

Latency appears to be OK (33 ms).

Could we build in an auto-checker that reduces the number of connections if the line looks too congested?

Just a thought...

Ivan

comment:19 in reply to: ↑ 17 Changed on Nov 22, 2013 at 10:32:13 PM by dkocher

Replying to ivanhassan:

So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)?

Latency appears to be OK (33 ms).

Could we build in an auto-checker that reduces the number of connections if the line looks too congested?

Yes, we plan to do dynamic connection number adjustments based on failures and throughput.
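
One plausible shape for such an adjustment is an AIMD (additive-increase, multiplicative-decrease) controller. The sketch below is a hypothetical illustration of that plan, not any actual Cyberduck API: grow the connection count while parts succeed, halve it on a failure such as a broken pipe.

{{{#!java
// Hypothetical AIMD-style controller illustrating the planned behaviour,
// not an actual Cyberduck API: ramp the connection count up while parts
// succeed, halve it whenever a part fails (e.g. with a broken pipe).
public class ConnectionController {
    private final int max;
    private int connections = 1;

    public ConnectionController(int max) { this.max = max; }

    public synchronized void onPartSuccess() {
        if (connections < max) connections++;        // additive increase
    }

    public synchronized void onPartFailure() {
        connections = Math.max(1, connections / 2);  // multiplicative decrease
    }

    public synchronized int connections() { return connections; }
}
}}}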

comment:20 Changed on Jul 15, 2014 at 3:24:03 PM by dkocher

#7625 closed as duplicate.

comment:21 Changed on Jul 15, 2014 at 3:24:43 PM by dkocher

#6587 closed as duplicate.

comment:22 Changed on Jul 15, 2014 at 4:01:43 PM by dkocher

  • Milestone changed from 4.4.2 to 4.5
  • Resolution fixed deleted
  • Status changed from closed to reopened

comment:23 Changed on Jul 15, 2014 at 4:10:36 PM by dkocher

#7424, #6801, #6587 closed as duplicates.

comment:24 Changed on Jul 15, 2014 at 4:11:13 PM by dkocher

  • Resolution set to fixed
  • Status changed from reopened to closed

Fixed in r14903 by sending an Expect: 100-continue header with PUT requests, which resolves broken pipe failures for uploads to buckets where a 307 Temporary Redirect is returned.
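
For context: the failure mode occurs when S3 answers a PUT with a 307 Temporary Redirect while the client is still streaming the request body; the server closes the connection and the client sees a broken pipe. With Expect: 100-continue, the headers go first and the body is only sent after the server's interim response, so the redirect arrives before any data is written. A sketch of the idea at the HTTP level, using Apache HttpClient 4 for illustration (hypothetical code, not the actual r14903 change):

{{{#!java
// Hypothetical illustration with Apache HttpClient 4, not the actual r14903
// change: with expect-continue enabled the client sends the PUT headers,
// waits for the server's interim response, and only then streams the body.
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

import java.io.File;

public class ExpectContinuePut {
    public static void put(String url, File file) throws Exception {
        RequestConfig config = RequestConfig.custom()
                .setExpectContinueEnabled(true)      // adds Expect: 100-continue
                .build();
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config).build()) {
            HttpPut put = new HttpPut(url);
            put.setEntity(new FileEntity(file, ContentType.APPLICATION_OCTET_STREAM));
            try (CloseableHttpResponse response = client.execute(put)) {
                // On a 307 the body has not been sent yet, so the request can
                // be retried against the redirect target without a broken pipe.
                System.out.println(response.getStatusLine());
            }
        }
    }
}
}}}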
