Devon & Cornwall Linux Users' Group


Re: [LUG] huge file recurring, hiding



"Brough, Tom" wrote:

        At the risk of reopening a closed debate, this is rather reminiscent of
a problem we had with one of our AIX SP boxes that runs an Oracle database.
It would fill up the /tmp filesystem, but the files / file sizes reported
didn't add up. In the end it turned out that Oracle was creating temporary
files that had inode(s) but no (visible?) file name, but then subsequently
neglected to remove them when the process was dead and gone.

        We now have a routine that runs pstat (I can't find this command in
Linux, but it may just be a plagiarised version of stat, who knows!) to
locate these inodes (something to do with a link count of 0) and kill them
off (when it's safe). But then again this is pushing the limits of my
understanding of filesystems and could be a load of tosh!

I think it refers to what Nick and I mentioned.

I've never seen what you describe. Usually, when you delete a
file, the relevant inode's link count is decreased by 1; when
you open a file, it is (or is effectively) increased by 1.

So if a file exists, is opened for writing, and is then deleted,
it continues to exist until the writing process terminates, even
though there is no longer any entry for it in the directory
structure.
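A minimal Python sketch of this behaviour (assuming a POSIX filesystem; the name "demo.log" is just for illustration):

```python
import os

path = "demo.log"
f = open(path, "w")
f.write("first line\n")
f.flush()

os.unlink(path)              # removes the directory entry; link count drops to 0
st = os.fstat(f.fileno())    # the inode itself still exists while f is open
print(st.st_nlink)           # 0: no name left, but the data is still on disk

f.write("still writable\n")  # writes continue to succeed
f.close()                    # only now is the disk space actually reclaimed
```

Tools like lsof report such files as "(deleted)", which is the usual way to find the process holding the space.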

The common problem is log files: if a process keeps its log
file open and you remove the file, the logged data, and any new
log entries, stay on disk until the process finishes. That is
why you truncate a log file with ">logfile" rather than
"rm logfile ; touch logfile".
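A sketch of why ">logfile" is safe: in-place truncation keeps the same inode, so the writing process's open descriptor still points at the visible file. (Python stands in for the shell redirection here; "app.log" and the "daemon" are illustrative.)

```python
import os

path = "app.log"
logger = open(path, "w")           # stands in for a daemon holding its log open
logger.write("old entries\n")
logger.flush()

ino_before = os.stat(path).st_ino
open(path, "w").close()            # in-place truncation, the effect of ">app.log"
ino_after = os.stat(path).st_ino

print(ino_before == ino_after)     # True: same inode, so the daemon's open
print(os.path.getsize(path))       # descriptor still reaches this file; size is 0
logger.close()
```

One caveat: the writer's file offset is not reset by the truncation, so its next write can leave a sparse hole at the start of the file.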

This also explains why, if you do "rm logfile ; touch logfile",
you don't get any log entries. Several times on *nix I've been
told: we have a problem, but we haven't got anything in our
logfiles, not even the stuff we used to get when it was working
normally <doh>.
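The failure mode above can be demonstrated directly: after "rm ; touch", the writer keeps appending to the orphaned old inode, and the new, visible file stays empty. (Again a Python sketch with illustrative names.)

```python
import os

path = "app.log"
logger = open(path, "w")           # the "daemon" keeps this descriptor open
logger.write("entry 1\n")
logger.flush()
old_ino = os.fstat(logger.fileno()).st_ino

os.unlink(path)                    # rm logfile
open(path, "w").close()            # touch logfile: new file, new inode

logger.write("entry 2\n")          # still goes to the orphaned old inode
logger.flush()

print(os.stat(path).st_ino != old_ino)  # True: a different inode
print(os.path.getsize(path))            # 0: the new file never sees the entries
logger.close()
```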

I always assumed it is the OS's job to clean up stray open file
handles once a process terminates without closing them. Since
that cleanup updates the inode state, I don't think you can
accidentally leave files with a link count of zero lying around
in the filesystem.

You can "leak disk space" into directories on some old
filesystems, by creating directories with huge numbers of files
(with huge names), but I think these days most filesystems have
been hardened against such bizarre resource exhaustion attacks.

--
The Mailing List for the Devon & Cornwall LUG
Mail majordomo@xxxxxxxxxxxx with "unsubscribe list" in the
message body to unsubscribe.
