I hate the way doing large #rclone syncs from slow cloud providers slows my entire #debian machine to a crawl, but because of the way iowait works and how rclone works, I don't think there's a good way around it? Has anyone found anything I missed?
@ireneista Isn't the issue how slow the remote end is, though? Rclone is pretending to be a filesystem, so the kernel slows I/O down because it's constantly waiting for data from rclone. So writing to disk isn't the actual bottleneck.
I can't put my finger on why, but this doesn't sound quite right. If you're syncing data down from a remote with the sync command, that's just a bulk download of the kind many apps carry out. Why would the rest of your system see delayed I/O because of an expectation that some files will be written? And I didn't follow the "pretending to be a filesystem" bit.
@jscholes @ireneista So my understanding is that rclone mounts the cloud using FUSE, then treats it as a virtual filesystem. So when sync touches files to get time/date/size etc., you get high iowait, slowing the entire system.
That's what `rclone mount` does, not `rclone sync`. If you're using the latter, you can tweak it quite a bit, including telling it to sequence checks, transfers, then deletions instead of mixing them all together. If you're using the former to sync your files, I wouldn't. @ireneista
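For reference, a minimal sketch of that kind of `rclone sync` tuning; `remote:bucket`, `/srv/backup`, and the specific numbers are placeholders, not values from this thread:

```
# Make the phase ordering explicit instead of interleaving the work:
#   --check-first   finish comparing every file before any transfers start
#   --delete-after  delete at the destination only once transfers succeed
# And cap concurrency/bandwidth so the sync can't monopolize the machine:
rclone sync remote:bucket /srv/backup \
  --check-first \
  --delete-after \
  --checkers 4 \
  --transfers 2 \
  --bwlimit 50M
```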
@jscholes @ireneista I'm using rclone sync to move a little over 900 GB of files between 20 KB and 5 MB in size. It's nowhere near maxing out CPU, bandwidth, or memory, but it brings the server to its knees anyway. Anything other than rclone that wants to do any I/O takes multiple seconds.
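As an aside, nobody in the thread posted measurements, but standard tools can confirm whether the local disk or the remote side is the actual stall. A sketch:

```
# Watch overall iowait: the "wa" column, refreshed every 5 seconds
vmstat 5

# Per-device stats (from the sysstat package): %util near 100 means the
# local disk really is saturated; high iowait with a mostly idle disk
# points at waits on the remote/FUSE side instead
iostat -x 5
```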
@jscholes Nope! Debian 12 installed directly on a server right here beside me. It's on 3-gig symmetrical fiber, with 64 GB of RAM and a quad-core AMD CPU.
@jscholes However, I do know that stuff that doesn't need to read or write to the filesystem is just fine. Like, there's no lag typing into SSH. But if I try to write to a file, then there is.
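One generic mitigation, offered as an assumption rather than something suggested in the thread: run rclone at idle I/O and CPU priority so everything else wins contention for the disk. Note that `ionice` classes only take effect under an I/O scheduler that honors them, such as BFQ:

```
# I/O class 3 (idle) plus lowest CPU priority: rclone only gets disk
# time when no other process is asking for it
ionice -c3 nice -n 19 rclone sync remote:bucket /srv/backup
```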
Is this an initial sync, or an update to a previous one? And if the latter, do you happen to remember if the slowdowns happened when initially downloading the files to an empty local destination?
@Eggfreckles Yeah, I think I'll do something like that next time. This copy only has 17 hours left, so not a huge deal; the other users on the machine can wait a day or so.
@jscholes From lscpu:
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1