(fwd) Re: NFS Problem in Kernel 2.0.27: inode status not updated

Author: Christoph Lameter
Date:  
To: exim-users
Subject: (fwd) Re: NFS Problem in Kernel 2.0.27: inode status not updated
Here is another article on the kernel mailing list regarding the NFS issue.

Newsgroups: linux.dev.kernel
Path: miriam.fuller.edu!psinntp!psinntp!howland.erols.net!newspump.sol.net!ddsw1!news.mcs.net!van-bc!ratatosk.yggdrasil.com!vger.gate.yggdrasil.com!not-for-mail
Approved: linux-kernel@???
References: <m0vetGd-000HRmC@???>
X-Env-Sender: owner-linux-kernel-outgoing@???
Message-ID: <199612310100.CAA06844@???>
X-Hdr-Sender: srb@???
From: srb@??? (Stephen R. van den Berg)
Date: Tue, 31 Dec 1996 02:00:58 +0100
Subject: Re: NFS Problem in Kernel 2.0.27: inode status not updated
Sender: owner-linux-kernel@???
Lines: 45
Xref: miriam.fuller.edu linux.dev.kernel:23300

Olaf Kirch <okir@???> wrote:
>On Sat, 28 Dec 1996 10:25:29 PST, Christoph Lameter wrote:
>> It first opens a lockfile with a unique name and then links that one
>> to the classic /var/spool/mail/username.lock lockfile. Then it checks the
>> number of links on that file. If there are two, the lock was
>> successful. The problem is that the Linux fstat() call always returns 1
>> for the number of links, even though a link() was done immediately prior.
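
For concreteness, that scheme amounts to roughly the following C sketch
(the function name lock_mailbox and the error handling are illustrative,
not exim's actual code):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    /* Take the classic mailbox lock; returns 0 on success, -1 on failure. */
    static int lock_mailbox(const char *lockfile)
    {
        char tmpname[1024];
        struct stat st;
        int fd, ret = -1;

        /* 1. Create a uniquely named file on the same filesystem. */
        snprintf(tmpname, sizeof(tmpname), "%s.%ld.%ld",
                 lockfile, (long)getpid(), (long)time(NULL));
        fd = open(tmpname, O_WRONLY | O_CREAT | O_EXCL, 0444);
        if (fd < 0)
            return -1;
        close(fd);

        /* 2. Hard-link it to the agreed lock name; the return value is
         *    ignored here because the nlink check below decides. */
        (void)link(tmpname, lockfile);

        /* 3. The lock is ours iff the unique file now has two links. */
        if (stat(tmpname, &st) == 0 && st.st_nlink == 2)
            ret = 0;

        unlink(tmpname);   /* the unique name is no longer needed either way */
        return ret;
    }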


> So I would assume the author's claim means that this
>locking technique `works better over NFS on the systems I tested it on.'


Well, since (as far as I know) I'm the one who originally invented that
scheme, I can indeed vouch for that. The difference between the actual
scheme used by exim (as described above) and the scheme I use is that the
return code of the link() call is used to avoid relying on the
stat() call in the most common cases.
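
In terms of the earlier sketch, that variant replaces steps 2 and 3 with
something like this (again an illustrative sketch, not the actual code of
either implementation):

    #include <sys/stat.h>
    #include <unistd.h>

    /* Variant of steps 2-3 above: trust link()'s return code first and
     * fall back to the nlink check only on failure (over NFS a lost reply
     * to a retransmitted LINK can make a successful link() look like
     * EEXIST). */
    static int try_link_lock(const char *tmpname, const char *lockfile)
    {
        struct stat st;

        if (link(tmpname, lockfile) == 0)
            return 0;                /* common case: no stat() needed */
        if (stat(tmpname, &st) == 0 && st.st_nlink == 2)
            return 0;                /* the link was made despite the error */
        return -1;                   /* someone else holds the lock */
    }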

However...

>There is nothing in the NFS spec that requires the client to update its
>cached link count[*].


I think you're wrong here. NFS, if used from a UNIX host, is supposed to
map UNIX filesystem semantics onto the NFS filesystem and back.
If, after a link() call, the cached attributes still showed only one hard
link due to caching, then your caching mechanism would clearly be broken.
The very least one could expect is that the hard-link count is increased
in the cached copy. This should, however, only be done *if* the link()
call returned success. If link() returns failure, the attribute cache
*must* be flushed (a consistent view of the filesystem cannot be
guaranteed otherwise).
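
A minimal sketch of that rule (all names here are invented for
illustration; this is not actual NFS client code):

    /* Hypothetical client-side attribute cache for one inode. */
    struct attr_cache {
        unsigned int nlink;   /* cached hard-link count */
        int          valid;   /* nonzero while the cached attributes hold */
    };

    static int client_link(struct attr_cache *cache,
                           int (*send_link_rpc)(void))
    {
        int err = send_link_rpc();    /* perform the LINK call on the server */

        if (err == 0)
            cache->nlink++;           /* success: bump the cached link count */
        else
            cache->valid = 0;         /* failure: flush; the cached view can
                                       * no longer be trusted */
        return err;
    }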

>[*] The NFS spec is deliberately vague about how and to what extent
>the client caches information. Vague being an overstatement here.
>For those interested, the bare-bones protocol spec is available as
>RFC 1094. There's also a (fairly expensive) specification available


The actual NFS specs aren't even that important at this point. The mere
fact that you're trying to present UNIX filesystem semantics already
dictates when the cache needs to be updated or flushed.
-- 
Sincerely,                                                          srb@???
           Stephen R. van den Berg (AKA BuGless).


Time is nature's way of making sure everything doesn't happen at once.