
#10612 closed defect (fixed)

Failure proposed upload exceeds the maximum allowed size

Reported by: MarkBlaise Owned by: dkocher
Priority: normal Milestone: 6.9.4
Component: s3 Version: 6.9.0
Severity: normal Keywords:
Cc: emby@… Architecture: Intel
Platform: Windows 8.1

Description

It may just be network hiccups, but Cyberduck v6.9.0 (29768) seems to have a lot of trouble uploading large files to my Wasabi cloud account.

I have been using both Cyberduck and the Duck CLI, and have seen the issue with both.

For example, this morning Duck failed to upload a 290 GB file after 3 hours of trying. I have a gigabit fiber-optic connection, so speed is not the issue.

I tried to enable debugging by following directions on https://trac.cyberduck.io/wiki/help/en/faq#Enabledebuglogging.

  • I created the file C:\Users\Mark\AppData\Roaming\Cyberduck\default.properties. It has one line of text (logging=debug) - attached.
  • I also defined the system environment variable logging, set to debug.

When I run a Duck command I get:

C:\Users\Mark>duck -l wasabisys://T5ZHIOV9VGMNBAKU1HQ4@Blaise-Archive/Magni/

  • log4j:WARN No appenders could be found for logger (ch.cyberduck.core.preferences.Preferences).

So it would seem I can't get Duck to log.

While the file is copying, there are two entries in gray text in the Cyberduck listing: one in the target folder, and one in the bucket root (see attached screenshot). These entries do not display a size. Once an upload completes, the gray entry in the bucket root is gone, and the entry in the target folder turns black and displays a size.

When an upload fails, the two gray entries remain. If I attempt the upload 3 times and have 3 failures, I will have 6 gray entries (3 in the root, 3 in the target folder), all with the same name.

I guess the gray entries are temp files. Regrettably, when a failed upload is tried again, Duck does not "pick up where it left off" - it starts from the beginning again.

So I have to delete the grayed root entry, which also removes the entry in the target folder; the entry in the target folder cannot be deleted directly.

Unfortunately, Wasabi cloud storage charges accrue for 3 months for any file uploaded, even if it is deleted sooner. This means that I will pay 3 months of storage for every failed file upload - and then also pay for the actual file once it is successfully uploaded.

I would love to figure out why these uploads fail and correct the issue, so I can stop paying for 3 months of storage for each failed upload file fragment.

I would also like to get logging working for Duck.

Right now another copy attempt is being made for this morning's failure. After that, I'll try a large upload from the Cyberduck GUI to see whether that produces logging.

Attachments (3)

default.properties (15 bytes) - added by MarkBlaise on Feb 13, 2019 at 6:33:06 PM.
my default properties file
GrayedEntries.jpg (108.4 KB) - added by MarkBlaise on Feb 13, 2019 at 6:33:38 PM.
screen shot of grayed entries
Copy-122gb-File.zip (2.1 MB) - added by MarkBlaise on Feb 16, 2019 at 2:13:33 PM.
log output of failed 122 GiB file upload to Wasabi via Duck -v

Change History (33)

Changed on Feb 13, 2019 at 6:33:06 PM by MarkBlaise

my default properties file

Changed on Feb 13, 2019 at 6:33:38 PM by MarkBlaise

screen shot of grayed entries

comment:1 Changed on Feb 14, 2019 at 7:34:30 AM by dkocher

  • Component changed from core to cli

I can confirm the grayed file entries are pending uploads. We will try to reproduce the issue of uploads not resuming when using the --existing resume option.

comment:2 Changed on Feb 14, 2019 at 2:17:06 PM by dkocher

  • Milestone set to 6.9.3
  • Owner set to dkocher
  • Status changed from new to assigned

comment:3 Changed on Feb 14, 2019 at 2:28:39 PM by MarkBlaise

Thank you! Any ideas about why logging is not occurring?

Last edited on Feb 14, 2019 at 5:08:41 PM by MarkBlaise (previous) (diff)

comment:4 Changed on Feb 15, 2019 at 12:08:10 PM by dkocher

  • Milestone changed from 6.9.3 to 7.0

Ticket retargeted after milestone closed

comment:5 Changed on Feb 15, 2019 at 1:52:16 PM by dkocher

Can you run with the --verbose flag and look for any concluding error message?

Changed on Feb 16, 2019 at 2:13:33 PM by MarkBlaise

log output of failed 122 GiB file upload to Wasabi via Duck -v

comment:6 Changed on Feb 16, 2019 at 2:14:42 PM by MarkBlaise

Two things ...

(1) Last night a large file upload to the Wasabi cloud failed. I was working with Wasabi support and had turned on Wasabi account logging. Here's what they said:

One of our engineering team members working on this case and I looked into the logs and found that when the upload is almost complete, Cyberduck sends a PUT request for the whole file, which is rejected by our API because it exceeds the per-part size threshold for multipart uploads. The error generated is given below:

<Error> 
 <Code>EntityTooLarge</Code> 
 <Message>Your proposed upload exceeds the maximum allowed size</Message> 
 <ProposedSize>297379400337</ProposedSize> 
 <MaxSizeAllowed>5368709120</MaxSizeAllowed> 
 <RequestId>03FF43F2B13795F8</RequestId>
 <HostId>l1BlqtLcUAY/sAC1U8mJhc7dsAlJf+qNqRz3MwqBwZh3DqbSuDaCP64ZkiCRtzkaGYdlLEva+/uH</HostId> 
</Error>
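As a quick sanity check (not from the ticket), the sizes in that error response are consistent with S3's limit on a single non-multipart request:

```python
# Sizes taken from the Wasabi error above: the single PUT of the whole
# object is far larger than the 5 GiB limit on a single upload request.
proposed_size = 297_379_400_337      # <ProposedSize>
max_size_allowed = 5_368_709_120     # <MaxSizeAllowed>

assert max_size_allowed == 5 * 2**30        # exactly 5 GiB
assert proposed_size > max_size_allowed     # hence EntityTooLarge
```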

(2) Note that I am currently using a Wasabi trial account to test the feasibility of using their service and Cyberduck for our backup purposes. The trial has a 1 TB storage limit.

This morning I attempted to upload a 122 GiB file via Duck with the -v switch. There was more than 300 GiB of space available. The upload failed. Here is the command line used:

Z:\Backup-Staging\Uploaded>duck -v -P -e rename --upload wasabisys://HKIAYRYIVIYKGMHKPSN8@Bax-Backup/Test/ \\odin\Backup-Staging\Uploaded\20190209-Guinan-Full-Bax-Bck_20190210015533.nbd > Copy-122gb-File.log
log4j:WARN No appenders could be found for logger (ch.cyberduck.core.preferences.Preferences).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Z:\Backup-Staging\Uploaded>

I have attached a ZIP of the log to which output was redirected: Copy-122gb-File.zip.

If I understand the Wasabi log correctly, this is a problem Cyberduck has uploading large files (to Wasabi); it is not about available space.

Please let me know what you determine.

Also, why can I not get Duck logging working?

log4j:WARN No appenders could be found for logger (ch.cyberduck.core.preferences.Preferences)

If I understand the information at the link in the error message (http://logging.apache.org/log4j/1.2/faq.html#noconfig), it has to do with the Duck/Cyberduck Java configuration.

comment:7 Changed on Feb 16, 2019 at 6:12:20 PM by dkocher

  • Summary changed from Failure to upload large files (200+ gb) to Failure proposed upload exceeds the maximum allowed size

comment:8 Changed on Feb 16, 2019 at 6:12:52 PM by dkocher

[▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮▮  ] 117.1 GiB (125,683,690,321 bytes) of 117.1 GiB (99%, 15.2 MB/sec, 1 seconds remaining)
< HTTP/1.1 200 OK
< Date: Sat, 16 Feb 2019 13:34:06 GMT
< ETag: "9bb0b914fb0a373e78e56d6a9e54d658"
< Server: WasabiS3/3.3.791-2019-02-06-b50720e (head05)
< x-amz-id-2: GzvYZEabUcTsMzOwIEbHDXO79ARrfZ8/PLqbVxmWpkAfuDkA6CUJQBQG+N3BCQNUIFX7STb1azOq
< x-amz-request-id: 3390275C746FBE5D
< Content-Length: 0
< HTTP/1.1 200 OK
< Date: Sat, 16 Feb 2019 13:34:06 GMT
< ETag: "1085654e1bee3fba264da3dc40e963e4"
< Server: WasabiS3/3.3.791-2019-02-06-b50720e (head04)
< x-amz-id-2: f48ge9U0jwYPjRvNWddZSJFrivID4IdTi+RzmC+HlNN41h2fwmZwJxt69y2bKlAhxerXW4VpdvUw
< x-amz-request-id: C9AD360D45D9980F
< Content-Length: 0
> PUT /Bax-Backup/Test/20190209-Guinan-Full-Bax-Bck_20190210015533.nbd HTTP/1.1
> Date: Sat, 16 Feb 2019 13:34:06 GMT
> Expect: 100-continue
> Content-Type: application/octet-stream
> x-amz-content-sha256: fed4d2ddbbdc9213fe7f7ddb5fbac6df82020731316745e878af092bbe46eabf
> Host: s3.wasabisys.com
> x-amz-date: 20190216T133406Z
> Authorization: ********
> Content-Length: 125684796350
> Connection: Keep-Alive
> User-Agent: Cyberduck/6.9.0.29768 (Windows 8.1/6.3) (x86)
Transfer incomplete…
Upload 20190209-Guinan-Full-Bax-Bck_20190210015533.nbd failed. Your proposed upload exceeds the maximum allowed size. Please contact your web hosting service provider for assistance.

comment:9 Changed on Feb 16, 2019 at 9:11:26 PM by dkocher

I cannot yet reproduce it with smaller multipart uploads where the expected POST request to complete the upload is sent.

comment:10 follow-up: Changed on Feb 16, 2019 at 9:51:46 PM by MarkBlaise

I could be wrong, but my admittedly limited understanding is that the file is uploaded, part by part, but in the end Cyberduck executes a PUT of the entire file.

From the Wasabi log, it seems that the maximum size of a "part" is 5 GiB.

I can say that I have successfully uploaded files as large as 35 GiB, perhaps even 90 GiB. But these files are also larger than the 5 GiB maximum part size, so it seems that Cyberduck does not always PUT the entire file - otherwise an upload of any file larger than 5 GiB would fail.

When and why Cyberduck PUTs the entire file at the end of the upload, I cannot say.
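A rough sketch of the arithmetic behind this observation, assuming a fixed 10 MiB part size and a 10,000-part cap (defaults that are only confirmed later in comment:14):

```python
MiB, GiB = 2**20, 2**30
PART_SIZE = 10 * MiB      # assumed default segment size
MAX_PARTS = 10_000        # S3 limit on part numbers

def parts_needed(object_size):
    """Number of multipart parts at a fixed part size (ceiling division)."""
    return -(-object_size // PART_SIZE)

# A 35 GiB file fits comfortably under the cap...
assert parts_needed(35 * GiB) == 3_584
# ...but a 122 GiB file would need more parts than S3 allows.
assert parts_needed(122 * GiB) == 12_493
assert parts_needed(122 * GiB) > MAX_PARTS
```

This is consistent with the roughly 100 GiB failure threshold suspected in comment:13.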

comment:11 Changed on Feb 19, 2019 at 4:09:39 AM by MarkBlaise

FYI, Wasabi tech support suggested I try another 3rd party S3 utility.

I downloaded and installed the AWS CLI, and the same file that Cyberduck could not upload (117.1 GiB) was uploaded by AWS CLI with no problems.

Oh, I also installed the 6.9.3 Cyberduck update. The problem persists.

Any insight as to why Cyberduck is having this issue?

Thanks

Last edited on Feb 19, 2019 at 3:28:45 PM by MarkBlaise (previous) (diff)

comment:12 in reply to: ↑ 10 Changed on Feb 19, 2019 at 3:03:52 PM by donaldduckie

Hi Mark & dkocher,

I am encountering the same issue uploading a 109.58GB file to AWS S3 Bucket configured with Glacier lifecycle (Transition after 0 day) using macOS version of Cyberduck.

Had the same result as observed by Mark: "I could be wrong, but my admittedly limited understanding is that the file is uploaded, part by part, but in the end Cyberduck executes a PUT of the entire file."

Here is the last PUT command executed by Cyberduck before the error message is displayed:

PUT /***-to-glacier/***.mov HTTP/1.1
Date: Tue, 19 Feb 2019 10:29:42 GMT
Expect: 100-continue
Content-Type: video/quicktime
x-amz-content-sha256: c0b277c2f37f1e28ce7e774b44735f02f2992023eaa95e9d3e30523b2b88d210
x-amz-meta-storage-class: GLACIER
x-amz-meta-version-id: 2szEFFse7EDpQ4h3QmFRhRdtE5AFHTCx
x-amz-storage-class: GLACIER
Host: *****.s3.amazonaws.com
x-amz-date: 20190219T102942Z
Authorization: ********
Content-Length: 109582612915
Connection: Keep-Alive
User-Agent: Cyberduck/6.9.3.30061 (Mac OS X/10.14.3) (x86_64)

Error message displayed by Cyberduck is the same: Your proposed upload exceeds the maximum allowed size. Please contact your web hosting service provider for assistance.

Would be grateful if the Cyberduck team could look into this problem.

Thanks very much.


comment:13 Changed on Feb 19, 2019 at 3:21:10 PM by MarkBlaise

For what it's worth, I successfully uploaded a 76.3 GiB (81,969,331,200 bytes) file this morning.

DonaldDuckie's failed file was 109,582,612,915 bytes = 102.1 GiB.

My last failed file was 118.6 GiB (127,324,166,144 bytes)

Perhaps something happens when the file size crosses a certain threshold, maybe >= 100 GiB?

comment:14 Changed on Feb 21, 2019 at 8:58:39 AM by dkocher

This must be related to our current default settings, which use a maximum of 10,000 parts with a 10 MB segment size. This limits the maximum object size to about 100 GB.

You can manually increase the size of the segments uploaded using the hidden default s3.upload.multipart.size. Use

defaults write ch.sudo.cyberduck s3.upload.multipart.size 104857600
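The limit works out as part size times the 10,000-part maximum; a quick check of both the default and the suggested 100 MiB setting:

```python
MAX_PARTS = 10_000  # S3 maximum part count per multipart upload

# Default 10 MiB parts cap objects at roughly 100 GB...
assert 10 * 2**20 * MAX_PARTS == 104_857_600_000        # ~97.7 GiB
# ...while 100 MiB parts raise the cap to roughly 1 TB.
assert 104_857_600 * MAX_PARTS == 1_048_576_000_000     # ~976.6 GiB
```

Note that the `defaults write` command is macOS-only; on Windows, if I read the Cyberduck documentation on hidden configuration options correctly, the equivalent should be a key=value line such as `s3.upload.multipart.size=104857600` in %AppData%\Cyberduck\default.properties.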

comment:15 Changed on Feb 21, 2019 at 8:58:46 AM by dkocher

  • Component changed from cli to s3

comment:16 Changed on Feb 22, 2019 at 1:04:30 PM by dkocher

  • Milestone changed from 7.0 to 6.9.4

comment:17 Changed on Feb 22, 2019 at 1:23:05 PM by dkocher

The error is caused by our fallback for a failed multipart upload in S3ThresholdUploadService.

comment:18 Changed on Feb 22, 2019 at 1:48:36 PM by dkocher

Looks like we have an off-by-one error not honouring the maximum allowed part number of 10,000.

> PUT /Bax-Backup/Test/20190209-Guinan-Full-Bax-Bck_20190210015533.nbd?uploadId=NRsgz5u5WjpWStxNsM_GiGfA0JqXkhFqIX65A-2CdckhO1HN0IMsus8d4tkAy5Erw-hZuh8KyawLEEHef-b1SFs-tkYkQCa3YD6Kw7T17nknrBvNc7W0ZVhKF5xnLbmJ&partNumber=10000 HTTP/1.1
> Date: Sat, 16 Feb 2019 13:34:04 GMT
> Expect: 100-continue
> Content-Type: application/octet-stream
> x-amz-content-sha256: 5a80beadfcb8c94f27eb3de82070b65379b861340d05b1c60acb8f73a3dbd9ff
> Host: s3.wasabisys.com
> x-amz-date: 20190216T133404Z
> Authorization: ********
> Content-Length: 12568479
> Connection: Keep-Alive
> User-Agent: Cyberduck/6.9.0.29768 (Windows 8.1/6.3) (x86)

> PUT /Bax-Backup/Test/20190209-Guinan-Full-Bax-Bck_20190210015533.nbd?uploadId=NRsgz5u5WjpWStxNsM_GiGfA0JqXkhFqIX65A-2CdckhO1HN0IMsus8d4tkAy5Erw-hZuh8KyawLEEHef-b1SFs-tkYkQCa3YD6Kw7T17nknrBvNc7W0ZVhKF5xnLbmJ&partNumber=10001 HTTP/1.1
> Date: Sat, 16 Feb 2019 13:34:01 GMT
> Expect: 100-continue
> Content-Type: application/octet-stream
> x-amz-content-sha256: c54312c5ef3d4ca036aeb70d2d17607cee8b60ad473b27abb3390e594b51e391
> Host: s3.wasabisys.com
> x-amz-date: 20190216T133401Z
> Authorization: ********
> Content-Length: 6350
> Connection: Keep-Alive
> User-Agent: Cyberduck/6.9.0.29768 (Windows 8.1/6.3) (x86)

Presumably this request fails and is followed by the fallback of uploading the file in a single PUT with the full 125684796350-byte content length.

> PUT /Bax-Backup/Test/20190209-Guinan-Full-Bax-Bck_20190210015533.nbd HTTP/1.1
> Date: Sat, 16 Feb 2019 13:34:06 GMT
> Expect: 100-continue
> Content-Type: application/octet-stream
> x-amz-content-sha256: fed4d2ddbbdc9213fe7f7ddb5fbac6df82020731316745e878af092bbe46eabf
> Host: s3.wasabisys.com
> x-amz-date: 20190216T133406Z
> Authorization: ********
> Content-Length: 125684796350
> Connection: Keep-Alive
> User-Agent: Cyberduck/6.9.0.29768 (Windows 8.1/6.3) (x86)
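The partNumber=10001 request above is consistent with computing the part size by floor division; a sketch of the arithmetic (an illustration, not Cyberduck's actual code):

```python
import math

MAX_PARTS = 10_000
size = 125_684_796_350                    # object size from the log above

# Floor division leaves a 6,350-byte remainder that spills into a
# 10,001st part -- exactly the rejected request in the log.
buggy_part_size = size // MAX_PARTS       # 12,568,479 (part 10000's length)
assert math.ceil(size / buggy_part_size) == 10_001

# Rounding the part size up keeps the upload within 10,000 parts.
fixed_part_size = math.ceil(size / MAX_PARTS)   # 12,568,480
assert math.ceil(size / fixed_part_size) == 10_000
```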

comment:19 Changed on Feb 22, 2019 at 2:49:37 PM by MarkBlaise

Yesterday (21-Feb), dkocher said that I should increase the "part size" from 10 MB to 104857600 bytes (100 MiB). With 10,000 parts, the max object size would then be about 976 GiB - plenty for my anticipated use.

This morning there was additional information about an "off by 1" error regarding the maximum number of parts, and that the fallback of PUT-ting the entire object fails.

So should I still try increasing the part size? Or will there be a fixed Cyberduck build soon?

If I should change the part size .... I'm not sure what to do with

defaults write ch.sudo.cyberduck s3.upload.multipart.size 104857600

Sorry for being obtuse, but please explain.

Does that mean to add a line to %AppData%\Cyberduck\default.properties? (running on Windows 8.1)

Like so?

ch.sudo.cyberduck s3.upload.multipart.size 104857600

I note that this string has 3 tokens, without an equals symbol (=):

ch.sudo.cyberduck   s3.upload.multipart.size   104857600

Is that right?

Last edited on Feb 22, 2019 at 3:08:49 PM by MarkBlaise (previous) (diff)

comment:20 Changed on Feb 22, 2019 at 5:06:30 PM by yla

  • Resolution set to fixed
  • Status changed from assigned to closed

Fixed in r46413. The latest snapshot build contains the fix. Thanks for reporting this.

comment:21 Changed on Feb 26, 2019 at 7:41:35 PM by MarkBlaise

FYI, I downloaded the snapshot build, v7.0.0.0 (30103)

I was able to successfully upload a 116 GiB file in the GUI client yesterday. Thanks for fixing that issue! ;-)

Note that the Duck CLI failed to upload a 142 GiB file this morning. Is this expected? I understand that the CLI may not have been updated, but I was under the impression that Duck is a CLI to the Cyberduck program - calling into the same libraries used by the GUI.

I looked for, but was not able to find, an update for the CLI. I'm using v6.9.3 (30061).

Please set my expectations.

Thank you.

comment:22 Changed on Feb 26, 2019 at 9:18:55 PM by dkocher

Replying to MarkBlaise:

For Cyberduck CLI you will have to switch to the snapshot build as well. Refer to the other installation options documented.

Last edited on Feb 27, 2019 at 9:01:20 AM by dkocher (previous) (diff)

comment:23 follow-up: Changed on Feb 26, 2019 at 9:48:06 PM by MarkBlaise

Replying to dkocher:

Sorry to be obtuse, but I don't see how to install a Duck CLI snapshot.

The "Cyberduck CLI" link in your post brings me to the Duck installation page. On that page, the "other installation options" link brings me to the Windows Installation section of the Cyberduck Wiki page.

There are 2 links there: one brings me to the Chocolatey installation page on chocolatey.org, and the other link (MSI Download) brings me to the dist.duck.sh page. The latest duck release available there is 6.9.3.30061, built on 15-February - before this issue was reported, so that can't be the duck snapshot.

Please explain ...

Last edited on Feb 27, 2019 at 9:01:36 AM by dkocher (previous) (diff)

comment:24 in reply to: ↑ 23 Changed on Feb 27, 2019 at 9:32:22 AM by dkocher

Replying to MarkBlaise:

Apologies for the confusion. While the documentation to obtain snapshot builds should be clear for macOS and Linux, we are currently missing an option for Windows. You can obtain the build from

comment:25 Changed on Feb 27, 2019 at 8:06:18 PM by MarkBlaise

I can confirm that the snapshot build of Duck CLI v7.0.0.30142 is working - I uploaded a 211 GiB file this morning with no problems.

Thank you for your efforts and help!

comment:26 follow-up: Changed on Mar 3, 2019 at 11:33:48 PM by MarkBlaise

It looks like Cyberduck release v6.9.4 fixed this issue. Thank you!

The latest Duck CLI release version available appears to be 6.9.3, which does not have this issue fixed. There is a Duck snapshot build that has the fix, which I am currently using.

Any idea when a release version of Duck CLI with this fix will be available?

Thanks again!

comment:27 in reply to: ↑ 26 Changed on Mar 4, 2019 at 8:04:53 AM by dkocher

Replying to MarkBlaise:

Any idea when a release version of Duck CLI with this fix will be available?

Version 6.9.4 for Windows is available on Chocolatey or from https://dist.duck.sh/.

comment:28 follow-up: Changed on Mar 4, 2019 at 1:09:50 PM by MarkBlaise

Thanks for the response ... but this is what I see on the Duck distribution list:

Index of /
Name                                Last modified      Size  Description
duck-src-6.9.3.30061.tar.gz.md5     15-Feb-2019 14:59   33   MD5 Hash
duck-src-6.9.3.30061.tar.gz         15-Feb-2019 14:59   29M  
duck-6.9.3.30061.tar.gz.md5         15-Feb-2019 14:59   33   MD5 Hash
duck-6.9.3.30061.tar.gz             15-Feb-2019 14:59   81M  
duck-6.9.3.30061.pkg.md5            15-Feb-2019 14:59   33   MD5 Hash
duck-6.9.3.30061.pkg                15-Feb-2019 14:59   81M  
duck-6.9.3.30061.msi                15-Feb-2019 14:55   43M  
duck-6.9.3.30061.exe                15-Feb-2019 14:55   43M  
duck-src-6.9.0.29768.tar.gz.md5     16-Jan-2019 10:29   33   MD5 Hash
duck-src-6.9.0.29768.tar.gz         16-Jan-2019 10:29   29M  

etc ...

The latest Duck CLI release version available appears to be 6.9.3.30061 from 15-Feb. What am I missing?

comment:29 in reply to: ↑ 28 Changed on Mar 4, 2019 at 1:35:05 PM by dkocher

Replying to MarkBlaise:

The latest Duck CLI release version available appears to be 6.9.3.30061 from 15-Feb. What am I missing?

Looks like you are seeing a cached outdated copy of the page. There should be

[   ] duck-src-6.9.4.30164.tar.gz.md5     27-Feb-2019 18:26   33   MD5 Hash
[   ] duck-src-6.9.4.30164.tar.gz         27-Feb-2019 18:26   29M  
[   ] duck-6.9.4.30164.tar.gz.md5         27-Feb-2019 18:26   33   MD5 Hash
[   ] duck-6.9.4.30164.tar.gz             27-Feb-2019 18:26   81M  
[   ] duck-6.9.4.30164.pkg.md5            27-Feb-2019 18:26   33   MD5 Hash
[   ] duck-6.9.4.30164.pkg                27-Feb-2019 18:26   81M  
[   ] duck-6.9.4.30164.msi                27-Feb-2019 18:23   43M  
[   ] duck-6.9.4.30164.exe                27-Feb-2019 18:22   43M  

comment:30 Changed on Mar 4, 2019 at 1:40:10 PM by MarkBlaise

You're right ... effing Chrome.

I have looked at this page several times since last Thursday. It's all there after I click refresh ... sheesh!

Thanks!
