Uploads fail with broken pipe error #7621

Closed
cyberduck opened this issue Nov 18, 2013 · 19 comments
Labels: bug, fixed, s3, AWS S3 Protocol Implementation

Comments


9af9111 created the issue

Hi

Longtime user, just upgraded to v4.4 for Amazon S3 support. However, when I attempt to transfer a 1 GB file to Amazon S3 in Sydney (I'm in NZ), I keep getting broken pipe error messages. Please see the screen grab. I have also enabled debug logging.

My other FTP client (3Hub) works OK.

Also, sometimes after clicking 'Continue' in the transfer window, the job does not resume. I have the number of transfers set to 1; increasing it to 2 sets off the next job. I have highlighted the transfer and pressed 'Resume'.

(Also, I have seen impossible upload speeds, e.g. 700 Mbps when my link is 1 Mbps.)

My system is a Mac Pro running 10.6.8, connected by DSL at 12 Mbps down and 1 Mbps up. I also have 3Hub and ExpanDrive installed.

Any help much appreciated.

Ivan


Attachments


@dkocher commented

Possibly fixed in d70fb9e. Please update to the latest snapshot build available.


@dkocher commented

Reopened as the problem is still unresolved according to #7625.


@dkocher commented

Can you please try build 14077? It includes the optimizations of d14b965 and a possible fix in 474238b for partial lengths written with the multipart upload feature.


9af9111 commented

Hi
As requested, I downloaded the newest snapshot (14084) and attempted to upload my large files (250 MB to 1.2 GB in size) to S3.

Got the same error after approx. 85 MB of transfer (see screen grab).

Do you want me to turn on any logging/debugging?

Ivan


@dkocher commented

At least for multipart uploads (triggered for uploads exceeding 100 MB), you can resume the transfer to upload only the missing parts.
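
For illustration, a minimal sketch of what that resume amounts to, written against the AWS SDK for Java rather than Cyberduck's own transfer code (bucket, key, upload ID, file name and part size below are placeholders): ask S3 which parts of the multipart upload already arrived, re-send only the missing ones, then complete the upload.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.util.*;

public class ResumeMultipartUpload {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "example-bucket";        // placeholder
        String key = "backup.bin";               // placeholder
        String uploadId = "existing-upload-id";  // from the interrupted upload
        File file = new File("backup.bin");
        long partSize = 10L * 1024 * 1024;       // 10 MB parts

        // Ask S3 which parts already arrived (pagination ignored for brevity).
        PartListing listing = s3.listParts(new ListPartsRequest(bucket, key, uploadId));
        Set<Integer> done = new HashSet<>();
        List<PartETag> etags = new ArrayList<>();
        for (PartSummary p : listing.getParts()) {
            done.add(p.getPartNumber());
            etags.add(new PartETag(p.getPartNumber(), p.getETag()));
        }

        // Re-send only the parts that are missing.
        int partCount = (int) ((file.length() + partSize - 1) / partSize);
        for (int n = 1; n <= partCount; n++) {
            if (done.contains(n)) {
                continue; // already uploaded before the connection broke
            }
            long offset = (n - 1) * partSize;
            UploadPartResult result = s3.uploadPart(new UploadPartRequest()
                    .withBucketName(bucket).withKey(key).withUploadId(uploadId)
                    .withPartNumber(n).withFile(file)
                    .withFileOffset(offset)
                    .withPartSize(Math.min(partSize, file.length() - offset)));
            etags.add(result.getPartETag());
        }

        // S3 expects the part list in ascending part-number order.
        etags.sort(Comparator.comparingInt(PartETag::getPartNumber));
        s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucket, key, uploadId, etags));
    }
}
```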


@dkocher commented

Usually logging information will not give much of a clue about broken pipe failures, which indicate a generic networking issue.


@dkocher commented

Is this specific to the SYD bucket or do uploads fail regardless of the location of the bucket?


9af9111 commented

Hi
I have done some testing on different regional buckets (US Standard, Oregon, Ireland) and they all fail from the 10.6 workstation. I have connected via wire and wireless to rule out local networking issues.
Now trying on my old Win7 laptop with the latest snapshot to see if it's platform specific.

Also testing on a 10.7 laptop to see if it has anything to do with the Java library issue between 10.6 and 10.7.

All machines are on different AWS keys, just in case they are hitting some kind of login limit (which I doubt).

Any other thoughts?


9af9111 commented

Just finished testing.
I get the same results on 10.7 and Win7 (much less frequently, though).
That makes me think network, however testing with CrossFTP, 3Hub and ForkLift shows no issues.

Hence it looks like a Cyberduck issue??

Any thoughts?

PS: I tried pressing 'Continue' when the error happens, but that's every few minutes and the whole transfer is scheduled to take hours...


@dkocher commented

Replying to [comment:11 ivanhassan]:

> Just finished testing.
> I get the same results on 10.7 and Win7 (much less frequently, though).
> That makes me think network, however testing with CrossFTP, 3Hub and ForkLift shows no issues.
>
> Hence it looks like a Cyberduck issue??

Yes, that makes it hard to argue it is not an issue on our end.


@dkocher commented

Can you try setting the maximum number of transfers to 1 in the lower right corner of the Transfers window? As of build 8440ea7 this will also limit the number of concurrent connections for multipart uploads to 1.


9af9111 commented

Hi
With 8440ea7 the file transferred OK, however the next job did not start; it got stuck with 'Maximum connection exceeded'.

I assume this looks like the multipart upload issue, which would explain the S3 disconnection (especially if they interpreted the multipart upload as a DoS?).

Shall I try with 2 uploads?

Many thanks again

Ivan


@dkocher commented

Replying to [comment:14 ivanhassan]:

> Hi
> With 8440ea7 the file transferred OK, however the next job did not start; it got stuck with 'Maximum connection exceeded'.
>
> I assume this looks like the multipart upload issue, which would explain the S3 disconnection (especially if they interpreted the multipart upload as a DoS?).
>
> Shall I try with 2 uploads?
>
> Many thanks again
>
> Ivan

With a slow connection and bad latency we were just congesting the line with 5 open connections per multipart transfer. With the change there is no parallelism for multipart uploads.

The 'Maximum connection exceeded' message is displayed on our end for transfers waiting for a slot.
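
A minimal sketch of that serialization, assuming each part upload has been wrapped in a Runnable (a hypothetical helper, not Cyberduck's actual transfer code): a single-threaded executor keeps only one connection open at a time, so parts no longer compete with each other for the uplink.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SerialPartUploads {
    /** Runs the per-part upload tasks strictly one after another. */
    public static void run(List<Runnable> partTasks) throws InterruptedException {
        // One worker thread: at most one part (and one connection) in flight,
        // instead of five connections congesting a 1 Mbit/s uplink.
        ExecutorService pool = Executors.newFixedThreadPool(1);
        for (Runnable task : partTasks) {
            pool.submit(task);
        }
        pool.shutdown();
        pool.awaitTermination(24, TimeUnit.HOURS);
    }
}
```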


9af9111 commented

So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)?

Latency appears to be OK (33 ms).

Could we build in an auto-checker so that if the line looks too congested, it reduces the number of connections?

Just a thought...

Ivan


@dkocher commented

Replying to [comment:17 ivanhassan]:

> So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)?
>
> Latency appears to be OK (33 ms).
>
> Could we build in an auto-checker so that if the line looks too congested, it reduces the number of connections?

Yes, we plan to do dynamic connection number adjustments based on failures and throughput.
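
As a rough sketch of what such an adjustment could look like, assuming an AIMD-style policy chosen here purely for illustration (this is not what was actually implemented): back off sharply when a transfer fails and creep back up while throughput holds.

```java
public class ConnectionCountController {
    private static final int MIN = 1;
    private static final int MAX = 5;
    private int connections = 2;

    /** Multiplicative decrease: halve the connection count after a failure. */
    public synchronized void onTransferFailure() {
        connections = Math.max(MIN, connections / 2);
    }

    /** Additive increase: add one connection while throughput stays healthy. */
    public synchronized void onStableThroughput() {
        connections = Math.min(MAX, connections + 1);
    }

    public synchronized int connections() {
        return connections;
    }
}
```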


@dkocher commented

#7625 closed as duplicate.


@dkocher commented

#6587 closed as duplicate.


@dkocher commented

#7424, #6801, #6587 closed as duplicates.


@dkocher commented

Fixed in e1f5ff8 with an Expect: 100-continue header for PUT requests, which resolves broken pipe failures for uploads to buckets where a 307 Temporary Redirect is returned.
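
For illustration, a sketch of the handshake in question using Apache HttpClient 4.x (endpoint and file are placeholders; this is not the actual Cyberduck change): with Expect: 100-continue enabled the client sends the headers first and waits for an interim response, so a 307 Temporary Redirect can be followed before the body is streamed, rather than the server closing the socket mid-upload and the client seeing a broken pipe.

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

import java.io.File;

public class ExpectContinuePut {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; a bucket outside us-east-1 may answer the PUT
        // with a 307 Temporary Redirect to its regional endpoint.
        String url = "https://example-bucket.s3.amazonaws.com/backup.bin";

        // Send "Expect: 100-continue" so the server can reply (100 Continue,
        // 307 redirect, or an error) before the request body is transmitted.
        RequestConfig config = RequestConfig.custom()
                .setExpectContinueEnabled(true)
                .build();

        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build()) {
            HttpPut put = new HttpPut(url);
            put.setEntity(new FileEntity(new File("backup.bin"),
                    ContentType.APPLICATION_OCTET_STREAM));
            try (CloseableHttpResponse response = client.execute(put)) {
                System.out.println(response.getStatusLine());
            }
        }
    }
}
```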

iterate-ch locked this issue as resolved and limited the conversation to collaborators on Nov 26, 2021.