
Re: [LUG] Unix file system folder limits



On Mon, Mar 17, 2003 at 12:41:33PM -0000, Robert Nowik wrote:

> I am working on a large website that potentially will have a lot of
> static html files. I need to know if folders in Unix have a limit on
> the number of files / subfolders that they contain. (I know Windows
> FAT32 has a limit of around 32,000 files per folder). I am
> anticipating having folders with up to 1 million files in them. (I am
> probably going to run the site on a dedicated Red Hat server).

Put too many files in a directory and you'll find that opening any of
them takes ages. On filesystems without directory indexing (classic
ext2, for example), looking up a name means a linear scan of the whole
directory.
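
If you want to see it for yourself, here's a rough Python sketch
(entirely my own illustration): it fills one flat directory with
100,000 files and then times a lookup of a name the kernel can't have
cached. On an unindexed filesystem the miss has to scan every entry;
on ext3/ext4 with dir_index, or on XFS, the effect mostly disappears,
and caching can hide it entirely.

import os, shutil, tempfile, time

root = tempfile.mkdtemp()
for i in range(100000):                          # 100k empty files, one flat directory
    open(os.path.join(root, "%07d.xyz" % i), "w").close()

t0 = time.perf_counter()
os.path.exists(os.path.join(root, "no-such-file"))  # a miss forces a full scan if unindexed
print("negative lookup: %.6f s" % (time.perf_counter() - t0))

shutil.rmtree(root)                              # clean up the test files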

The solution I always use is, instead of having:

data/1001343.xyz

have:

data/1/0/0/1/3/4/3.xyz

This means there are never more than ten files or subdirectories in
any one directory. Each open costs an extra ~log_10(n) directory
lookups, but every one of them is against a tiny directory, so they
take no time at all.
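
A minimal Python sketch of that mapping (the function name and the
zero-padded seven-digit ids are my own assumptions, not anything your
site needs):

import os

def sharded_path(file_id, ext="xyz", width=7):
    # 1001343 -> data/1/0/0/1/3/4/3.xyz; padding to a fixed width
    # keeps every file at the same depth.
    digits = "%0*d" % (width, file_id)
    *dirs, last = digits                 # leading digits become directories
    return os.path.join("data", *dirs, last + "." + ext)

p = sharded_path(1001343)                # data/1/0/0/1/3/4/3.xyz
os.makedirs(os.path.dirname(p), exist_ok=True)   # create the parents before writing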

Adding another order of magnitude of capacity is just one more level
of nesting.

It also means your files are in order.

Another system, which scales naturally, is:

data/1.xyz	(1)
data/2.xyz	(2)
data/3.xyz	(3)
data/...	(to 9)
data/1/0.xyz	(10)
data/1/1.xyz	(11)
data/1/2.xyz	(12)
...
data/2/0.xyz	(20)
data/2/1.xyz	(21)
...
data/1/0/0.xyz	(100)
data/1/1/0.xyz	(110)
...
data/1/0/0/1/3/4/3.xyz	(1001343)

No need to ever worry about scaling.

It's just harder to read, unless you know what's going on.
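
For what it's worth, a sketch of the naturally scaling layout in
Python (again, the function name is my own invention):

import os

def natural_path(n, ext="xyz"):
    # 7 -> data/7.xyz, 21 -> data/2/1.xyz, 1001343 -> data/1/0/0/1/3/4/3.xyz
    *dirs, last = str(n)                 # all digits but the last become directories
    return os.path.join("data", *dirs, last + "." + ext)

assert natural_path(7) == "data/7.xyz"
assert natural_path(21) == "data/2/1.xyz"
assert natural_path(1001343) == "data/1/0/0/1/3/4/3.xyz"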

Steve
