Finally, I have flock locking (oocalc & perl, et al.) working with NFS. I
had to go to NFSv4. NFSv3 does not appear to support flock locking, even
though the docs make it sound like it will work (they must be assuming
you're using v4).
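A quick way to sanity-check flock behavior on the mount is util-linux's
flock(1) wrapper. This is just a sketch; the /data/locktest path is an
example, not something from my setup:

```shell
# Create a test file on the NFS mount (path is illustrative).
touch /data/locktest

# Hold an exclusive flock for 10 seconds in a background process...
flock /data/locktest -c 'sleep 10' &

# ...then try to grab the same lock non-blocking from a second process.
# On a working NFSv4 mount this fails immediately because the lock is
# held; if locking were silently broken, it would wrongly succeed.
sleep 1
flock --nonblock /data/locktest -c 'echo got lock' \
  || echo 'lock is held, as expected'
```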
NFSv4 is quite the strange beast. You have to make a pseudo-fs and do
wacky server-side binding and non-intuitive client-side mounting.
For anyone who cares, here's the gist of it:
SERVER
/etc/fstab
/data /nfs/data none bind 0 0
/etc/exports
/nfs 192.168.100.1(ro,async,no_subtree_check,no_root_squash,insecure,fsid=0)
/nfs/data 192.168.100.1(rw,async,no_subtree_check,no_root_squash,insecure,nohide)
fsid=0 is critical for the root export, and nohide is critical for /data.
You must run these services (on Fedora):
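To apply the above on the server without rebooting, something like this
should work (standard mount/nfs-utils commands, using the paths from the
config above):

```shell
# Activate the bind mount declared in /etc/fstab
# (or just run `mount -a` to pick up all fstab entries)
mount --bind /data /nfs/data

# (Re)export everything in /etc/exports, then verify
# what is actually being exported and with which options
exportfs -ra
exportfs -v
```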
nfs
rpcidmapd
CLIENT
192.168.100.2:/data /data nfs4 rw,bg,hard,intr,nosuid,proto=tcp,timeo=15,retrans=5,rsize=8192,wsize=8192 0 0
Note that you mount /data, not /nfs/data as you would have in v3.
You must run these services (on Fedora):
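Before committing the line to /etc/fstab, you can test the mount by hand;
a sketch using the same server address and the more important options from
the fstab entry above:

```shell
# One-off test mount (options mirror the fstab line)
mount -t nfs4 -o rw,hard,intr,proto=tcp,timeo=15,retrans=5 \
    192.168.100.2:/data /data

# Confirm it really came up as nfs4
mount | grep /data
```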
rpcbind
rpcidmapd
It looks like you don't need to run the nfslock service (Fedora) on either
side, as it's part of the NFS core now. I'm open to correction on this.
rpcgssd appears to be required only if you want encryption/authentication/
etc. I could be wrong on this too.
portmap doesn't seem to be required; everything appears to work without it.
I'm still using my custom assigned ports in /etc/sysconfig/nfs and
iptables rules and it seems to work.
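If you want to double-check which daemons actually registered and on which
ports (e.g. to confirm the custom ports from /etc/sysconfig/nfs took
effect), rpcinfo from the rpcbind package will list them. A sketch:

```shell
# List RPC services registered with the local rpcbind and their ports;
# nfs should show up on 2049, plus any custom ports you configured
# in /etc/sysconfig/nfs.
rpcinfo -p localhost

# NFSv4 itself only needs TCP 2049, which you can also verify with:
ss -tlnp | grep 2049
```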
I'm still going to use the async option. Most examples show explicit
sync, but in my simple setup I feel the performance gain is worth the
risk, since my server is RAID6 on good-quality hardware with a beefy UPS.
Its uptime is currently: ;-)
18:22:08 up 445 days, 18 min, 1 user, load average: 0.00, 0.00, 0.00
My next task is to learn more about "crash recovery", for which many pages
say statd is required, but I'm not sure whether that applies to v3 or v4.
It sounds useful but not really critical for my simple setup; I don't care
if my locks get a bit wonky if my server crashes. If anyone has a comment,
let me know.