Advice on massive file copies...
gabe at gundy.org
Fri Jan 3 14:21:19 MST 2020
Looks like some QNAP products are ZFS-based. If yours happens to be
ZFS under the hood, perhaps you might be able to do a ZFS send.
Just throwing it out there.
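If it is, a snapshot-plus-send workflow would look roughly like this (a sketch only; the pool/dataset names and the "backup" host are made up, substitute your own):

```shell
# Hypothetical dataset names -- adjust for your pool layout.
# Take a read-only snapshot of the source dataset:
zfs snapshot tank/photos@weekly-2020-01-03

# First run: full send to a pool on the backup box:
zfs send tank/photos@weekly-2020-01-03 | ssh backup zfs receive backup/photos

# Later runs only ship the delta between two snapshots:
zfs send -i @weekly-2019-12-27 tank/photos@weekly-2020-01-03 \
  | ssh backup zfs receive backup/photos
```

Incremental sends are block-level, so they sidestep the whole walk-the-filesystem problem rsync has.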
On Tue, Dec 31, 2019 at 8:22 AM John Von Essen <john at essenz.com> wrote:
> So I have been tinkering with this for a few days and keep hitting issues.
> Because my wife takes an absurd number of photos, over the years we have built up an insanely large amount of data, about 800GB. It's mostly photos and movies, but also docs and old backups of old computers. New data gets onto the QNAP NAS by way of my iMac, i.e. I attach my wife's phone or Canon camera or USB stick and copy the new files into the QNAP mount on the iMac. That copy is manual, by click and drag, or if it's a lot of stuff I just use cp in a terminal window.
> The QNAP is 2-Bay NAS with 2x2TB SATA drives in Raid 1.
> I don't feel super confident with the QNAP as the sole source, so I want a backup of the backup.
> To do this, I have a home linux server with a single 2TB drive. On that linux server I have mounted the QNAP (SMB/CIFS) at /nas and /data is my local 2TB drive on that server.
> I want to keep things in sync without re-copying everything every time, so the plan was to use rsync to do the local copy. The first run is a big copy, then weekly cron jobs sync the small changes.
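> For reference, the kind of cron entry I had in mind (the log path is just illustrative; --delete is optional, it mirrors removals from the NAS):

```shell
# crontab entry: run every Sunday at 03:00.
# -a preserves permissions/timestamps; --delete removes files
# from /data that no longer exist on /nas.
0 3 * * 0  rsync -a --delete /nas/ /data/ >> /var/log/nas-sync.log 2>&1
```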
> I was just using the command:
> # rsync -av /nas/ /data/
> This initially worked. I let it run overnight; when I returned, df -h showed that about 95% of the data had been copied. When I reattached to tmux, these were the last lines:
> rv trip/IMG_0631.HEIC
> rv trip/IMG_0632.HEIC
> rv trip/IMG_0637.HEIC
> sent 766,440,756,316 bytes received 2,979,804 bytes 49,916,554.50 bytes/sec
> total size is 766,321,547,494 speedup is 1.00
> rsync warning: some files vanished before they could be transferred (code 24) at main.c(1211) [sender=3.1.3]
> There is about 19G missing, based on df -h, and nothing else was reading/writing to the QNAP.
> So I figured, no big deal, I’ll just run rsync again and let it sync up and grab the missing data, but now rsync bombs with an error after 30 seconds or so:
> # rsync -av /nas/ /data/
> sending incremental file list
> rsync: fstat failed: No such file or directory (2)
> rsync error: error in file IO (code 11) at sender.c(365) [sender=3.1.3]
> rsync: [sender] write error: Broken pipe (32)
> Any ideas what could be going on? Is this the best way to do this? Maybe just doing cp would be easier/cleaner, or is there something better than rsync to use? I just don't want to have to copy 800GB every time I sync. Maybe I use rsync in combination with find to walk the file tree and rsync each file one by one?
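> Something like this is what I was picturing (an untested sketch; paths as above, and the "skipped" message is just my own logging):

```shell
# Walk the source tree and rsync one file at a time, so a single
# vanished file doesn't abort the whole run (slower, but each
# failure is isolated and logged instead of killing the sync).
cd /nas && find . -type f -print0 \
  | while IFS= read -r -d '' f; do
      # --relative recreates the ./dir/... path under /data/
      rsync -a --relative "$f" /data/ || echo "skipped: $f" >&2
    done
```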
> PLUG: http://plug.org, #utah on irc.freenode.net
> Unsubscribe: http://plug.org/mailman/options/plug
> Don't fear the penguin.