
Set queue / transfer item as "exclusive connection" #927

Closed
cyberduck opened this issue Oct 18, 2006 · 3 comments
8fbae90 created the issue

I would like to request a feature where certain files (well, transfers, since dirs count too) could be set as an "exclusive download" in the queue / transfers window.

The idea is that such a transfer must wait until no other transfers are in progress, and then disallow other queue items from starting until it has completed. (Basically, setting the maximum connections to one universally instead of per host, as I believe it works now, and also unsetting this automatically when the exclusive transfer finishes.)

This way, if you have a massive file (e.g. ISO image) or hierarchy (e.g. the entire tree for a development project or web site perhaps), you could specify that it should have full use of your available bandwidth during its transfer.

Of course, you can merely make sure that you don't add other items to the queue, or manually switch your preference setting to one connection and reset it after you're done. But that's inconvenient if you want to download, say, seventeen 700 MB ISOs in a row, or want to do this regularly. You'd have to go back and add each one when the previous one finished, or constantly be flipping your preference setting, instead of being able to add them all, mark them as "exclusive", and walk away.

By allowing this option you don't risk having five of them running concurrently, completing 300 MB each, and then dropping them all if your connection "flutters", wasting 1500 MB on non-restartable servers (not common anymore, but "Yuck!" when you find them). Instead, by running them consecutively rather than concurrently, you could have successfully downloaded two 700 MB ISOs and failed on just one.

But this "per transfer option" also allows for doing so just once, while having your default concurrency rate at a more reasonable level for the "normal"-style transfers that you do all the time.

Okay, so it's not a "widely desired" feature, I'm sure. But it could be quite helpful to some people, such as myself who run into just such situations more often than I'd like. :)

@dkocher commented

Duplicate of #986.

8fbae90 commented

I have to disagree with your resolution. This is not actually a duplicate of #986 (especially since this came first, but that's an irrelevant point), but is in fact merely a related concept.

In #986 (which I would agree is a good idea) there is a universal preference setting for the maximum number of concurrent downloads across all servers, as an additional restriction above and beyond the maximum connections per server setting which already exists.

However, this request is really for a "one off" setting for individual transfers in the queue. I think I should give an example of the desired behavior to clear it up.

Say you have set a maximum of 3 transfers per server, and (assuming #986 is implemented) a max of 5 overall, and you want to download the following list of files from servers A, B, C, and D:

  • A/1.tbz
  • A/3.txt
  • B/19.pdf
  • B/24.eps
  • B/27.readme
  • B/28.rtf
  • C/46.dmg
  • C/73.dmg
  • C/78.dmg
  • C/94.dmg
  • D/141.txt
  • D/197.pdf
  • D/202.txt
  • D/418.eps

However, the files on server C (the DMG files) are each 700 MB, and you are on a wireless network which commonly disconnects you due to interference from competing signals. To further complicate matters, server C is a badly behaved server which doesn't allow resuming transfers! ICK!!

Naturally, you wouldn't want to run the DMG file transfers concurrently, because each should complete as quickly as possible in the hope that you won't be disconnected in the middle and lose all that progress. (If all 3 of your max-per-server slots were in use, you'd lose the data from all 3, right? If only one is running, you might finish it and get cut off in the middle of the 2nd one, wasting only part of the time spent on the queue.)

Yet most of the time, and for the rest of the files in the queue, you don't want to limit yourself to just one download at a time: they may be on servers which won't fill your bandwidth unless multiple transfers are running, and they're well-behaved servers which allow resuming anyway, so the possibility of a disconnect is not so troublesome. So setting your global max-transfers setting to one is not really what you want to do.

Herein comes the need for a "per transfer" setting to provide exclusivity. With such a setting, you could mark each of the DMG files as exclusive, and the download queue would proceed as follows:

A/1.tbz, A/3.txt, B/19.pdf, B/24.eps, and B/27.readme (1st 5 files) would start immediately.

When A/1.tbz finishes (assuming it's the first to do so) only the next 4 (still part of the 5) will continue to run, because the next file is from server B and you have a max-per-server of 3, which is still full.

When at least one of the running B files finishes, B/28.rtf will start to fill the empty "per server" slot, but C/46.dmg still won't start (even though it's from a different server and would normally try to fill the 5-overall slot that is still empty) because you've marked it as "exclusive" and it will wait until it has things all to itself.

When the first 6 have all completed, then C/46.dmg will start its exclusive download. C/73.dmg will start when it is done, then C/78.dmg, then C/94.dmg, each after the other.

Eventually, when C/94.dmg finishes, and the exclusivity settings no longer apply, then the queue will pick up with D/141.txt, D/197.pdf, and D/202.txt which will fill the 3-per-server max.

When one of them finishes, the final file D/418.eps will start, and the queue will wrap up the remaining D transfers normally.
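The admission rule in the walkthrough above can be sketched in a few lines. This is purely illustrative and is not Cyberduck's actual code; the limits of 3 per server and 5 overall are taken from the example, and all names are hypothetical:

```python
from collections import Counter
from dataclasses import dataclass

MAX_PER_SERVER = 3  # hypothetical per-server connection limit
MAX_OVERALL = 5     # hypothetical global limit (as proposed in #986)

@dataclass(frozen=True)
class Transfer:
    server: str
    path: str
    exclusive: bool = False

def can_start(candidate: Transfer, running: list) -> bool:
    """Decide whether `candidate` may start, given currently running transfers."""
    # An exclusive transfer must wait until nothing else is running.
    if candidate.exclusive:
        return not running
    # Nothing else may start while an exclusive transfer holds the queue.
    if any(t.exclusive for t in running):
        return False
    # Enforce the global cap.
    if len(running) >= MAX_OVERALL:
        return False
    # Enforce the per-server cap.
    per_server = Counter(t.server for t in running)
    return per_server[candidate.server] < MAX_PER_SERVER
```

For instance, with three B transfers running, `can_start(Transfer("B", "28.rtf"), running)` is refused (server B is full) while an A transfer is admitted; and `C/46.dmg` marked exclusive is refused until `running` is empty, exactly as in the walkthrough.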

Anyway...

I hope that all makes sense. I'm sure it's too wordy, but I wanted to be sure to get the precise point across this time. ;-)

Obviously, it's entirely within your prerogative to re-close this ticket as a "will not fix", but since it is not actually part of the same request you thought it was, I have re-opened it for your continued consideration.

Thank you!

@dkocher commented

This is too fine grained as a feature of general interest in my opinion. Just set the maximum number of transfers to '1', start the large download first and then add the other transfers to the queue.

@iterate-ch iterate-ch locked as resolved and limited conversation to collaborators Nov 26, 2021