
Re: [LUG] inodes

 

Simon Waters wrote:
> Anton Channing wrote:
>> My webhost is complaining that I have
>> exceeded my inode limit.  I never even
>> realised there was such a limit, and
>> certainly didn't know what an inode was.
> 
> I think you need to be clear exactly which inode limit.
> 
> Older Unix and Linux file systems had a per file system limit, but many
> file systems now have as many inodes as needed (e.g. reiserfs).

No, I'm on a shared hosting account and the host
says that by exceeding 50,000 inodes (I have
between 100,000 and 150,000) I am breaking the
terms and conditions, and that they have imposed
this arbitrary limit to 'preserve disk integrity
for the other users'.

So I can have unlimited domains, but only 50,000
inodes.  Seems like a lot, but many of the CMS
systems I use have thousands of files each.

I just wanted to see if there was an easy way
to identify which of the main directories in
my home folder contained the most files (inodes),
so I could see if there was anything unimportant
I could delete that would bring me back within
my limit.

It may be that there is not.
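
If there is, I'd guess it looks something like
this (untested on my host, and assuming a normal
GNU userland, so treat it as a sketch):

for d in ~/*/ ; do
  printf '%8d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn

That should print, largest first, how many entries
(files, directories, links - each one an inode)
sit under each top-level directory in my home.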

My host was trying to get me on to a dedicated
server, but I really can't afford that right
now...

Anton

>> I can't seem to find a command that
>> will tell me how many inodes are
>> located in each directory (including
>> those recursively in that directory's
>> sub-directories).
> 
> If it is lack of inodes in a filesystem (rather than a kernel limit):
> 
> df -i
> 
> Will show which file system.
> 
> Last time it happened to me, which was on an old version of HP-UX, I
> just used "find" to discover where a huge number of files existed.
> 
> If you really need to count inodes, "ls -Ri" and judicious use of sort
> and uniq and wc would give an answer. But I suspect with just find
> you'll see the millions of files (or directories) you didn't know about
> somewhere.
> 
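That sounds like the sort of thing I was after. I'm
guessing at the details, and using find rather than
"ls -Ri" (GNU find's -printf is an assumption about
what the host provides), but presumably the
sort/uniq counting would look something like:

find ~ -printf '%h\n' | sort | uniq -c | sort -rn | head

which should list the ten directories that directly
hold the most entries.
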
> Usually the cause is something insane in /tmp or /var/tmp, or similar.
> 
> Some database locking schemes in Unix used to use an inode table entry
> per lock, which, in Unix kernels that had a fixed-size inode table,
> meant you could run out of space in the kernel's inode table with just
> one big and busy database file. Hence the query about which inode limit.
> Although I think the Linux kernel's inode table is just a list, so it
> just runs out of memory when such databases go mad.
> 
> I haven't seen an out-of-inodes error in ages, but then when given a
> choice I pick reiserfs over ext3. But any modern file system (i.e. not
> ext3) should be fine. I know dissing ext3 is unpopular, but hey, I never
> meant to be popular, I just mean to keep my systems working.
> 
> I don't suppose the host is imposing this limit deliberately. And unless
> they did odd things when creating the file systems, there is probably
> something very silly going on, like a script creating very small files
> and directories very quickly.
> 


-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/linux_adm/list-faq.html