Advice on massive file copies...

John Von Essen john at
Tue Dec 31 09:38:07 MST 2019

Well, that would explain why the initial copy mostly worked: the destination was empty, so there were no hashes to compute.

One other option I was looking at was the unison utility. I’m trying it now for the initial copy; it’s very slow, though, and we’ll see if subsequent re-syncs work.

I didn’t realize you could run rsync on the QNAP; I’ll look into that.

Back to the checksum thing, though: in my scenario, the filenames alone are the source of truth. The QNAP never gets deleted from, only added to. Is there a way to tell rsync to NOT check the checksums, just look at the filename, and copy brand-new files if they don’t exist in the destination?
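For what it’s worth, rsync’s --ignore-existing flag seems aimed at exactly this: it skips any file already present on the receiver, so existing files are never compared or re-copied and only brand-new files are transferred. A minimal sketch, with made-up placeholder paths:

```shell
# Sketch only: copy files that do not yet exist on the destination.
# --ignore-existing makes rsync skip every file already present on the
# receiver, so nothing is hashed or compared for existing files.
# Both paths below are hypothetical placeholders.
CMD="rsync -av --ignore-existing /share/source/ /mnt/qnap/backup/"

# Printed rather than executed, since the paths are placeholders.
echo "$CMD"
```

Note that --ignore-existing never updates a file that already exists, even if the source copy changed; that matches an add-only workflow but would hide edits to existing files.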


> On Dec 31, 2019, at 11:25 AM, Michael Torrie <torriem at> wrote:
> On 12/31/19 8:22 AM, John Von Essen wrote:
>> Any ideas what could be going on? Is this the best way to do this?
>> Maybe just doing cp would be easier/cleaner, or is there something
>> better than rsync to use? I just don’t want to have to copy 800GB
>> every time I sync. Maybe I use rsync in combination with find to walk
>> the file tree and rsync each file one by one?
> Rsync over cifs could be quite slow, especially if it has to compute
> hashes for all the files to see if they've changed. I think what might
> be happening is cifs doesn't report file times in a resolution high
> enough to satisfy rsync's algorithm.  Try the flags that are typically
> recommended for use on Windows file systems, like
> --modify-window 1
> And maybe also
> --size-only
> But try to avoid that last one if you can.  You might look into
> --ignore-errors also, but that really only applies to the --delete flags.
> The best bet in my mind is to rsync over ssh.  That way you're running
> on the real file system.  Pretty sure QNAP can do that out of the box.
> /*
> PLUG:, #utah on
> Unsubscribe:
> Don't fear the penguin.
> */
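Putting Michael’s suggestions together, the cifs-mount and over-ssh variants would look roughly like this (hostnames and paths below are hypothetical placeholders, not anything from the thread):

```shell
# Over a cifs mount: widen the timestamp comparison window so the
# coarse mtime resolution reported by cifs does not force re-copies.
# (Hypothetical paths.)
CIFS_CMD="rsync -av --modify-window=1 /share/source/ /mnt/qnap/backup/"

# Over ssh to the QNAP itself, so the comparison runs against the real
# file system instead of through cifs. (Hypothetical host and path.)
SSH_CMD="rsync -av /share/source/ admin@qnap:/share/backup/"

# Printed rather than executed, since the endpoints are placeholders.
echo "$CIFS_CMD"
echo "$SSH_CMD"
```

The --size-only flag Michael mentions would replace --modify-window=1 in the first command, at the cost of missing changed files whose size happens to stay the same.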

More information about the PLUG mailing list