Hi all. I’m hoping to get some help from folks with more Linux experience than me. I’m not a Linux noob, but I’m far from an expert, and I have some huge gaps in my knowledge.
I have a Synology NAS that I am using for media storage, and I have a separate Linux server that uses that data. Currently the NAS is mounted with Samba; it automatically mounts at boot via an entry in /etc/fstab. This is working okay, but I don’t like how Samba handles file ownership. The whole volume mounts as the user who mounts it (specified in fstab in my case), and all the files in the volume are owned by that user. So if I wanted two users on my server to each have their own directory, I would need to mount each directory separately for each user. This is workable in simple scenarios, but if I wanted to move my Lemmy instance volumes to my NAS, the file ownership of the DB and pictrs volumes would get lost and the users inside the containers wouldn’t be able to access the data.
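For reference, my current mount is a one-liner like this (hostname, paths, credentials file, and IDs are placeholders, not my real setup):

```shell
# /etc/fstab — current CIFS/Samba mount (example values)
# Every file in the share shows up owned by uid=1000/gid=1000,
# no matter who actually created it on the NAS.
//nas.local/media  /mnt/media  cifs  credentials=/etc/samba/creds,uid=1000,gid=1000,iocharset=utf8,_netdev  0 0
```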
Is there a way to configure Samba to preserve ownership? Or is there an alternative to Samba that I can use that supports this?
Edit:
Okay, so I set up NFS, and it appears to do what I want. All of the user IDs carry over when I cp -a
my files. My two users can write to the directories I set up for them, which they own. It all seems good on the surface. But when I copied my whole Lemmy folder over and tried to start up the containers, postgres still crashes. The logs alternate between “Permission denied” and “chmod: operation not permitted” forever. I logged into the container to see what’s going on. Inside the container, root can’t access a directory, which is bizarre; the container’s root user can access that same directory when I run the container from my local filesystem. As a test, I tried copying the whole Lemmy directory from my local filesystem to my local filesystem (instead of from local to NFS), and it worked fine.
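For reference, the NFS mount that replaced the CIFS one is just a different fstab line (host and paths are placeholders):

```shell
# /etc/fstab — NFS mount replacing the CIFS one (example values)
# No uid=/gid= options: ownership stored on the server carries through.
nas.local:/volume1/media  /mnt/media  nfs  defaults,vers=4.1,_netdev  0 0
```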
I think this exact thing might be out of the scope of my original question, and I might need to make a post on [email protected] instead, as what I wanted originally has been accomplished with NFS.
Did you specify the user and group ID in fstab? That might be what’s causing global permissions.
Also, consider using NFS instead of SMB. Synology supports both and I’ve generally found NFS easier to work with (but I just run a simple home server)
I am specifying user and group in fstab, and everything mounted is owned by the user and group specified. But if another user who isn’t in that group wants to write to it, they don’t have access. The main issue is users inside containers, as they can’t just be added to a group; or rather, it would be unnecessarily complicated to add them to one.
I will take a look at NFS and see if that fits my needs.
That’s pretty much how SMB works in general, but (assuming Synology supports it, I’m not sure) you can force ownership and permissions for the files at the server end. In your case that would pretty much mean rw privileges for everyone, so it’s not ideal security-wise, but if your environment is suitable and that’s a compromise you’re willing to make, it is possible. You could also check whether setfacl suits your needs.
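If the filesystem on the NAS supports POSIX ACLs, granting a second user access without touching the owning user/group would look roughly like this (the path and username are made-up examples):

```shell
# Grant user 'bob' rwx on an existing tree (example path and user)
setfacl -R -m u:bob:rwx /volume1/shared
# Add a default ACL so files created later inherit the same access
setfacl -R -d -m u:bob:rwx /volume1/shared
# Inspect the result
getfacl /volume1/shared
```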
And then of course there’s NFS, but that has a tradeoff: if you need to access the files with anything other than a Linux box it’s not ideal either, especially if you’re after fine-grained privileges across multiple systems.
macOS does NFS completely fine, and Windows apparently supports up to NFSv3, fwiw. But SMB is definitely more widely supported (no problem running both at the same time, though).
For what it’s worth, NFS in my experience is also faster. I had a very similar use case (but QNAP instead of Synology) and switched everything over to NFS and saw a performance gain. Little things like previewing IP camera security footage would feel slow on SMB, but snappier on NFS. I’d gotten over the user thing, but the speed is why I switched.
I did eventually wipe QNAP’s software in favor of stock Debian – but the prevailing wisdom seems to say Synology’s OS is pretty good.
I can also confirm this being my experience. I probably didn’t tune samba correctly or something, but when browsing my NAS via samba it regularly took ~1 second per folder navigation, whereas NFS was instant. I didn’t care enough to figure out why, so NFS is what I use.
Samba always uses exactly one user (the one whose permissions you logged in with). NFS does what you want.
Thank you! I will take a look at NFS later tonight when I have some time.
Keep in mind that NFS only does what you want if the user numbers and group numbers match on both systems.
That is not true. I don’t have matching IDs on my MacBook vs. my Linux server, and the only thing it affects is the wrong user/group being displayed in e.g. ls output (and I kind of feel like that’s idmap not working correctly on the Mac, though I’m also not too familiar with how it should work). If that’s what you mean by “does what you want”, sure, but permissions are handled correctly.
Yes, displaying the wrong user is a symptom of it not enforcing security.
I’m not sure what idmap is. Does it allow the user numbers to be translated per folder?
Consider this setup: Two users on the server, Bob: 1001 and Jane 1002, and they have each been given ownership and exclusive access to separate folders.
Then you mount that to another machine where the user numbers are swapped. In that case, Bob gets Jane’s files and Jane gets Bob’s files.
Or worse, someone else on the network connects to the share with the 1001 user number. Then they get access to all of Bob’s files. This can be prevented by limiting access to the share to a single IP.
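On the server side, that per-IP restriction is just the client field in /etc/exports; a sketch with made-up addresses and paths:

```shell
# /etc/exports — only 192.168.1.10 may mount these; everyone else is refused
/volume1/bob   192.168.1.10(rw,sync,no_subtree_check)
/volume1/jane  192.168.1.10(rw,sync,no_subtree_check)
```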
Okay sure, if you’re talking about using it without authentication, then all bets are off anyway. IP-based access isn’t secure if you have a malicious/misconfigured device in the same network (and don’t lock your network down specifically to prevent this).
As far as I can tell (i.e. partially infer from behavior since I can’t find detailed documentation), idmap does two things:
- Map between local uids/gids and NFS user/group names (NFS v4 users/groups are strings, not integers), both for display purposes on the client (exactly for the ID mismatch problem) and access control on the server (since FS permissions use local IDs)
- Map between krb5 principal and NFS user names, for access control on the server
Also, idmap falls back to nobody/nogroup if it can’t map (which is configurable).
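The mapping domain and that nobody/nogroup fallback live in idmapd’s config (usually /etc/idmapd.conf, though the filename varies a bit by distro); a minimal example with a placeholder domain:

```shell
# /etc/idmapd.conf (minimal example; Domain is a placeholder)
[General]
Domain = example.com

[Mapping]
# Accounts used when a name can't be mapped to a local uid/gid
Nobody-User = nobody
Nobody-Group = nogroup
```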
For example, my network uses the krb5 realm HOME.DBLSAIKO.NET. My user saiko has three parts: the local user saiko (with uid 1000 on the NFS server and my desktop, but not the MacBook), the principal [email protected], and the NFS user string [email protected], which is automatically inferred from the other two names.
In a directory listing, the nfs server reads the directory, idmap converts the stored uid 1000 to [email protected] and sends that to the client, the client converts that back to uid 1000 to display in an ls listing or whatever.
When the client tries to access a file, the security ticket it sends with the request is for [email protected], which the server maps to uid 1000 and checks the permissions on the file system. So for security, the only thing that matters is that idmap works correctly on the server; it is independent of client uids.
As a result, the displayed permissions and the actually enforced permissions are independent from one another since they map to two different things. That’s why on my MacBook, even though my user has id 501 and for some reason idmap doesn’t work so it shows my directory on the NFS share being owned by “1000 _lpoperator” instead of “saiko users”, I can still access it because I have the correct security ticket. (And conversely, if I get a security ticket for a different principal while logged in as saiko with working clientside idmap, the nfs share looks like I could access it according to displayed permissions but I get a permission denied error.)
Note that idmap can also work without authentication, but has to be explicitly enabled on the nfs/nfsd kernel module or in /sys. I assume then, instead of the security ticket, the client sends the nfs username with each request and that’s what it checks against.
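If I remember right, the toggle in question is the `nfs4_disable_idmapping` module parameter, which defaults to on (idmapping off for sec=sys); flipping it on both ends would look roughly like this (needs root, and a remount to take effect):

```shell
# Persist across reboots: server needs the nfsd module, client needs nfs
echo "options nfsd nfs4_disable_idmapping=0" > /etc/modprobe.d/nfsd-idmap.conf
echo "options nfs nfs4_disable_idmapping=0"  > /etc/modprobe.d/nfs-idmap.conf

# Or flip at runtime if the modules are already loaded
echo 0 > /sys/module/nfsd/parameters/nfs4_disable_idmapping
echo 0 > /sys/module/nfs/parameters/nfs4_disable_idmapping
```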
Thanks for the detailed reply. I’ve seen mentions of authentication over the years, but the conclusion from every thread like this was that it was nearly impossible to set up.
This doesn’t sound too bad.
Yeah, from a complexity perspective it really isn’t a big deal if you just want basic user/pass authentication without any other access controls, which is completely fine for a home network. You can run a single KDC on the same server as NFS; it doesn’t use a lot of resources, and there are plenty of basic setup guides. And once you have it, it can also be used to authenticate a bunch of other stuff like SSH, or ironically also Samba, in case you do need it for something that can’t do NFS (e.g. a phone). I’ve yet to try those though. EDIT: No, you can’t use it for Samba; apparently you need an AD domain. Thanks Microsoft.
I’m not sure why everyone says it’s such a complex thing to set up. Maybe the real problem is the in-depth documentation, which is lacking; you often find conflicting and sometimes just plain wrong information.
For example, I’m still not sure why my MacBook can mount the NFS share without a host key despite everything I’ve read suggesting that one is necessary. Maybe to actually limit what computers can log in to krb5 I need to set up pkinit (which requires a PKI)? I can’t find answers and I’ve searched for a while now. Might be time to ask on the mailing list…
Note that having any kind of real authentication with NFS (other than “limiting client machines by IP and then trusting them to report the correct user”, which might be fine for your local network) and also encryption requires Kerberos. It’s not the end of the world to set up (I have it in my local network) but it is more involved than setting up Samba accounts.
The requirement of managing an LDAP or AD directory service just to get some auth for NFS is a dealbreaker for like 99% of people. It’s such a dumb protocol for the average user and was designed with only huge corporate clients in mind.
Just give people a simple password auth or let them exchange private/public keys between the devices that need to connect!
You don’t need LDAP or AD. Kerberos is a separate thing and nowhere near as insane as LDAP. It’s true that they are often combined (in AD, for example). However, it’s also purely an authentication system, so no permission controls or anything except for kadmin, from what I can tell.
If I’m not forgetting anything, you need to do pretty much 3 things:
- either set up some DNS entries for autodiscovery of your kdc, or install a config file on each host (you probably want the config file either way to set the default realm so you don’t have to type it when logging in, but DNS makes it optional)
- set up user principals (you need this for samba too)
- create a principal for the NFS service
(Apparently you also need host principals for each machine that wants to connect to NFS, but my macbook can log in and mount the NFS share without a host principal, so maybe not. Still looking into that because I do actually want that for non-home-network purposes.)
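Assuming an MIT Kerberos KDC, the principal setup from the steps above would look roughly like this (realm, usernames, and hostnames are examples):

```shell
# On the KDC (MIT Kerberos; example names throughout)
kadmin.local -q "addprinc alice"                           # user principal; prompts for a password
kadmin.local -q "addprinc -randkey nfs/nas.example.com"    # service principal for the NFS server
kadmin.local -q "ktadd -k /tmp/nfs.keytab nfs/nas.example.com"

# Then copy /tmp/nfs.keytab into the NFS server's /etc/krb5.keytab
# and export the share with sec=krb5 (or krb5i/krb5p) in /etc/exports.
```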
Kerberos is the simple password authentication if you use it by itself. Sure, it does stuff that isn’t needed in a small home network, such as multi-realm support, and they could probably have either built another authentication system for NFS like Samba’s, or made something that could authenticate users via SSH, but there’s probably a reason that hasn’t been added until now. I assume it at least partially has to do with system-wide mounts.
And Kerberos really isn’t that bad. I set it up in under a day and most of that was spent debugging mounting NFS not working (which was finally solved by a reboot of the NFS server, still not sure what that was about >_>).
The default is that root does not have root access over NFS. In some situations, that means root appears to have LESS privileges than other accounts.
But there are options to change that. On my home lan, root is root over nfs.
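That default is the `root_squash` export option; turning it off per export looks like this (network and path are examples):

```shell
# /etc/exports — no_root_squash lets a client's root act as root on the share
/volume1/media  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```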
If your Synology NAS supports SSH, you might want to check whether you can use sshfs. I used to use Samba and NFS on my Debian home server, but switched to sshfs a few months ago. File transfers seem a little quicker than with Samba.
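A minimal sketch of what that looks like, with made-up host and paths (needs the sshfs package installed):

```shell
# Mount a remote directory over SSH; idmap=user maps the remote
# user's uid to the local user doing the mounting
sshfs admin@nas.local:/volume1/media /mnt/media -o reconnect,idmap=user

# Unmount when done
fusermount -u /mnt/media
```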
SSHFS has a lot of overhead from FUSE as well as the encryption. It’s much better to use NFS on the LAN if you care about speed.
Fstab is the way