this post was submitted on 28 Nov 2023
94 points (96.1% liked)
Linux
I'm curious, what file system do you use to mount your share? (SMB, SSHFS, WebDAV, NFS..?) I've never managed to get decent performance on a remote-mounted directory because of the latency, even on a local network, and this becomes an issue with large directories
Agreed on the latency issues. I tested SMB and NFS once and found them to be pretty much the same in that regard.
I'm interested in testing iSCSI, as I have a feeling it might be better designed for latency.
If you want the lowest latency, you could try NBD. It's a block protocol but with less overhead compared to iSCSI. https://github.com/NetworkBlockDevice/nbd/tree/master
Like iSCSI, it exposes a disk image file, or a raw partition if you'd like (by using something like /dev/sda3 or /dev/mapper/foo as the file name). Unlike iSCSI, it's a fairly basic protocol (the API is literally only 9 commands). iSCSI is essentially just regular SCSI over the network.

NFS and SMB have to deal with file locks, multiple readers and writers concurrently accessing the same file, permissions, etc. That can add a little bit of overhead. iSCSI and NBD assume only one client is using the export (two clients writing to the same disk image at once would corrupt it), so they're just reading and writing raw data.
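For anyone curious what that looks like in practice, here's a minimal sketch of an NBD setup (the export name, image path, and hostname are made-up examples; nbd-server normally reads its exports from /etc/nbd-server/config):

```shell
# --- on the server: export a raw disk image over NBD ---
# create a sparse 10 GiB image to serve (example path)
truncate -s 10G /srv/nbd/disk0.img

# /etc/nbd-server/config -- named export, new-style protocol
# [generic]
# [disk0]
#     exportname = /srv/nbd/disk0.img
systemctl restart nbd-server

# --- on the client: attach the export as a local block device ---
modprobe nbd
nbd-client server.lan -N disk0 /dev/nbd0   # connect to the named export
mkfs.ext4 /dev/nbd0                        # first use only
mount /dev/nbd0 /mnt/nbd
```

From there the client treats /dev/nbd0 like any local disk, which is exactly why only one client can safely use it at a time.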
Main thing to note is that NFS is file-based storage (acts like a share) whereas iSCSI is block-based (acts like a disk). You'd really only use iSCSI for things like VM disks, 1:1 storage, etc. For home use cases, unless you're self-hosting (and probably even then), you're likely gonna be better off with NFS.
If you were to do iSCSI, I would recommend giving it its own VLAN. NFS technically should be isolated too, but I currently run NFS over my main VLAN, so do what ya gotta do.
Yeah, there are a few limitations to each. NFS, for example, doesn't play nicely with certain options if you're using a filesystem overlay (e.g. OverlayFS), which can be annoying when using it for PXE environments. It does, however, allow several remote machines to mount it simultaneously, which I don't think iSCSI would play nicely with.
SMB, though, has user-based authentication built in, which can be quite handy, especially if you're not into setting up a whole Kerberos stack in order to use that functionality with NFS.
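As a sketch of the difference (server, share, and user names here are made up), an SMB mount carries its own credentials, while a plain NFS mount just trusts the client's numeric IDs:

```shell
# SMB: authenticate as a specific user at mount time (cifs-utils);
# uid=/gid= map the remote files onto a local account
mount -t cifs //nas.lan/media /mnt/media \
    -o username=alice,uid=1000,gid=1000,vers=3.0

# NFS with the common sec=sys default: no per-user auth at all --
# the server believes whatever UID the client presents
# (real per-user auth would need sec=krb5, i.e. Kerberos)
mount -t nfs nas.lan:/export/media /mnt/media
```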
I've found that NFS gives me the best performance and the least issues. For my use cases, single user where throughput is more important than latency, it's indistinguishable from a local disk. It basically goes as fast as my gigabit NIC allows, which is more or less the maximum throughput of the hard disks as well.
A benefit of NFS over SMB is that you can just use Unix ownerships and permissions. I do make sure to synchronize UIDs and GIDs across my devices because I could never get idmapping to work with my NAS.
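For anyone setting this up, a rough sketch of the "synchronized UIDs" approach (paths, subnet, and IDs are examples):

```shell
# /etc/exports on the NAS -- plain sys auth, numeric IDs pass through as-is
# /export/media  192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -ra        # re-export after editing

# on each client: confirm the user has the same numeric IDs as on the NAS
id alice            # e.g. uid=1000(alice) gid=1000(alice)

mount -t nfs nas.lan:/export/media /mnt/media
ls -ln /mnt/media   # numeric owners shown here must match the local IDs
```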
idmap only works with Kerberos auth, but IIRC I didn't have to set anything up specifically for it. Though I've also never really had to rely on it since my UIDs happen to match; I just tested with the nfsidmap command.
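For reference, the moving parts are roughly these (the domain name is an example; note that with plain AUTH_SYS mounts the server keeps using raw numeric IDs regardless of what idmapd says):

```shell
# /etc/idmapd.conf -- the Domain must match on client and server,
# otherwise names map to the Nobody-User/Nobody-Group fallbacks
# [General]
# Domain = home.lan
#
# [Mapping]
# Nobody-User = nobody
# Nobody-Group = nogroup

# quick sanity checks on the client
nfsidmap -d    # print the effective idmap domain
nfsidmap -c    # clear the kernel's idmap keyring cache
```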