
Out of memory error when uploading larger files #10392

Closed
cyberduck opened this issue Jul 9, 2018 · 12 comments
Labels
bug, fixed, high priority, onedrive (OneDrive Protocol Implementation)

Comments


eada457 created the issue

I tried to upload all the files of a VM (VHDD, snapshots and so on). The files are up to 11.6 GB in size. After ~2.4 GB the upload stops and I get a java.lang.OutOfMemoryError. The debug log is empty, maybe because the upload was done in an external window. I'm using OneDrive as the upload target.

My assumption was that Cyberduck tries to buffer the whole file in RAM, while a default JVM doesn't have enough heap assigned. But I found a lot of quite old tickets which say that this buffering of whole files was fixed years ago. Since Cyberduck seems to be a mix of .NET and Java and I'm not a Java developer, I don't know how to increase those limits for Cyberduck. Maybe the default limit is too low for the default buffer?
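Side note for anyone investigating a suspected heap ceiling: a quick way to inspect the limits of a JVM is to query the Runtime. This is a generic sketch, not tied to Cyberduck's bundled runtime, and the class name is made up for illustration:

```java
// Hypothetical sanity check (not part of Cyberduck): prints the heap ceiling
// of whichever JVM runs it. A 32-bit JVM typically reports a max heap well
// below 4 GB, which would be consistent with large uploads failing long
// before physical RAM is exhausted.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.println("Max heap:   " + rt.maxMemory() / mb + " MB");
        System.out.println("Total heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("Free heap:  " + rt.freeMemory() / mb + " MB");
        // May print "null" on non-HotSpot JVMs; on HotSpot it reports 32 or 64.
        System.out.println("Data model: " + System.getProperty("sun.arch.data.model") + " bit");
    }
}
```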


Attachments


@dkocher commented

Ticket retargeted after milestone closed


eada457 commented

I tried again with version 6.6.3 (28581, latest nightly build) and noticed a different behavior: I could upload a CentOS 7 DVD image (4.4 GB) without any issues. Later I tried a VDI file from my VM with a total size of 39.3 GB. After 13.0 GB (14,005,158,086 bytes, 33%) the upload gets stuck. The file transfer window says the transfer rate is 345.2 KB/s, but this doesn't seem to be true: the transferred file size doesn't increase, and the task manager shows a network usage of 0 Mbit/s for Cyberduck. Even hours later, this behavior doesn't change.

But it's not possible to stop or delete the file transfer any more. It seems Cyberduck gets stuck here. To stop the upload I have to exit Cyberduck, confirm killing the running transfers, and start it again.

UPDATE: After uploading the same file again, the behavior of Cyberduck is the same, but now after 11.0 GB instead of 13 GB.


@dkocher commented

#10196 closed as duplicate.


eada457 commented

Since large files don't work well and I'd assume that OneDrive has file size limits, too (Business has 15 GB, https://support.microsoft.com/en-us/help/3125202/restrictions-and-limitations-when-you-sync-files-and-folders), my workaround is to create 7z archives with a maximum size of 8 GB per file. This avoids the problems with big files, and the total size shrinks through compression, too.

So one of my larger VMs could be split into 2x 8 GB and 1x 0.8 GB segments. I tried to upload all three files using Cyberduck, and now the upload gets stuck at 1.7 GB of 16.8 GB total. The transfer window shows a rate of 40.8 KB/s and about 110 hours remaining. Cyberduck's main window is usable, but the file transfer popup seems completely frozen; I can't even move the window any more.

After restarting Cyberduck, I see that the smallest part #3 (0.8 GB) seems to have been uploaded successfully, but parts 1 and 2 (8 GB each) were not. I re-uploaded parts #1 and #2, and a few percent before they were completely transferred, I got an error dialog "Connection failed: java.lang.OutOfMemoryError". After this error, the file transfer window doesn't freeze. It was possible to delete the failed transfer.

Try number 3: instead of uploading both files (#1 and #2) at once, I first uploaded #2 and then #1; this works.


@dkocher commented

#10534 closed as duplicate.


@AliveDevil commented

I'd like to know more about the point at which Cyberduck fails.

I have been able to upload huge amounts of data to OneDrive (32 GB split into 8 GB files) without Cyberduck crashing so far (at the time of writing this test is still running, currently 21 GB uploaded). I have checked some things, which indicate that this OutOfMemory exception, which is not reproducible on our systems, could be harder to investigate than others.

So, my questions are:

  • How much memory do you have in your system?
  • How much memory is in use while Cyberduck uploads files and fails with OutOfMemory exceptions?


eada457 commented

Thanks for taking care of this issue.

At the time of writing, my system had 24 GB of memory, so it should not be a real lack of physical memory. Since the issue is a few months old, I have in the meantime upgraded to 32 GB. Memory usage wasn't recorded; I can retry and see how much memory is used.

But I get a reconnect from my provider every 24 hours. So when I start uploading in the evening like now, my connection will drop for a few seconds within a few hours. This shouldn't be a big issue for Cyberduck, or am I wrong?



eada457 commented

I ran two tests:

Test 1
4 files: 2x 7.1 GB, 1x 1.8 GB and 1x 1.7 GB, about ~17 GB total.
This upload was done during the night without errors. RAM usage was between 80-100 MB at the beginning; when reaching 16 percent, it increased to ~140-160 MB. Later I went to bed, so no further data exists. It was surprising, but I remembered that I had changed the setting in File Transfers > General to "Use connection in browser". This was done several months ago to figure out whether wrong settings might be the reason why large uploads don't work.

Test 2
29.5 GB total in multiple ISO images from 4 to 7 GB.
Here the file transfer setting was changed to "Open multiple connections", which I think is the default. I started the upload in the morning before I went to work; in the evening it had been aborted. The file transfer window says "transfer incomplete" at 15.9 of 29.6 GB, and in the line above "Resolve graph.microsoft.com".

The cyberduck.log in the appdata folder has an entry near 3 PM:

2018-11-16 14:54:52,405 [background-11] ERROR AsyncController - Unhandled exception during invoke
cli.System.OutOfMemoryException
2018-11-16 14:54:52,421 [cbafInh0-transfer-2] ERROR ch.cyberduck.core.Resolver - Waiting for resolving of graph.microsoft.com
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1324)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
	at ch.cyberduck.core.Resolver.resolve(Resolver.java:78)
	at ch.cyberduck.core.LoginConnectionService.connect(LoginConnectionService.java:133)
	at ch.cyberduck.core.LoginConnectionService.check(LoginConnectionService.java:102)
	at ch.cyberduck.core.pool.StatelessSessionPool.borrow(StatelessSessionPool.java:71)
	at ch.cyberduck.core.worker.ConcurrentTransferWorker.borrow(ConcurrentTransferWorker.java:128)
	at ch.cyberduck.core.worker.AbstractTransferWorker$3.call(AbstractTransferWorker.java:381)
	at ch.cyberduck.core.worker.AbstractTransferWorker$3.call(AbstractTransferWorker.java:371)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:512)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:618)
	at ch.cyberduck.core.threading.NamedThreadFactory$1.run(NamedThreadFactory.java:58)
	at java.lang.Thread.run(Thread.java:955)

Again you can see that I got an OutOfMemoryException directly before the DNS resolve error. This can't be related to my physical memory, since my machine has 32 GB, of which about 22 GB are currently free. BUT since Cyberduck is a 32-bit application, it can't allocate more than ~4 GB for a single process.

To be sure, I looked in the logs of my DSL router. There were no entries for the whole day, so the WAN connection seems stable. This doesn't automatically mean that there were no outages of the ISP's DNS server. But on the other hand, even if there was a DNS failure: why does this abort the whole upload process? I would expect that in this case Cyberduck waits a minute and retries the upload.

And I don't really believe that the DNS issue is the cause, since the OutOfMemoryException was thrown only milliseconds before. It seems that for some reason the app ran out of memory, and the DNS error is an aftereffect of this.
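For context on why the two errors appear together: the stack trace shows a countdown-latch wait inside ch.cyberduck.core.Resolver being interrupted. A simplified sketch of that pattern (illustrative only, not the actual Cyberduck code; names are made up) shows how interrupting the waiting transfer thread, e.g. after an unrelated OutOfMemoryError tears down the worker pool, surfaces as a resolve failure even when DNS is healthy:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.CountDownLatch;

// Sketch of a latch-based resolver: the DNS lookup runs on a helper thread
// and the caller blocks on a CountDownLatch. If the waiting caller is
// interrupted, await() throws InterruptedException and the failure is logged
// as "Waiting for resolving of <host>", even though DNS itself never failed.
public class ResolverSketch {
    public static InetAddress resolve(String hostname)
            throws UnknownHostException, InterruptedException {
        final CountDownLatch signal = new CountDownLatch(1);
        final InetAddress[] result = new InetAddress[1];
        final UnknownHostException[] failure = new UnknownHostException[1];
        Thread lookup = new Thread(() -> {
            try {
                result[0] = InetAddress.getByName(hostname);
            } catch (UnknownHostException e) {
                failure[0] = e;
            } finally {
                signal.countDown();
            }
        }, "resolver");
        lookup.start();
        // Throws InterruptedException if the caller's thread is interrupted,
        // e.g. when the transfer worker pool is shut down after an OOM.
        signal.await();
        if (failure[0] != null) {
            throw failure[0];
        }
        return result[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve("localhost").getHostAddress());
    }
}
```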


@AliveDevil commented

Thanks for retesting.
I created a test case where 90% of memory is in use on an 8 GB VPS (Cyberduck running in the background before filling up the memory). After any action, Cyberduck immediately crashed.

So: this is a major issue I'm currently investigating, but it might take time, as it is not easily resolved. There is not "just" the 4 GB limit, but also the limitation that Cyberduck will crash if it allocates space outside the first 4 GB of system memory.
Or in other words: it does not run out of memory (it is not even near maximum memory usage), but it runs out of addressable memory space, which presents itself as an OutOfMemory error.

The IP disconnect should not affect Cyberduck uploading files, as it just retries until the upload succeeds.

Just for my records: out of 32 GB you had 22 GB left, so 10 GB had been in use, which could lead to this error.


eada457 commented

Do you have any idea why there are problems with allocating memory outside the first 4 GB?
It's the first time I've heard about this, and it sounds like a situation that can't be very uncommon, since memory is cheap nowadays.


@dkocher commented

Fix for resolver in 4068469.


@dkocher commented

In 9814b2f. Please update to the latest snapshot build available.

@iterate-ch iterate-ch locked as resolved and limited conversation to collaborators Nov 26, 2021