Re: [LUG] File system for heavy I/O


 

On Mon, Mar 24, 2014 at 08:35:55PM +0000, bad apple wrote:
> Are you free to rebuild the kernel on the current host?

Free as in 'no one else minds what I do to the kernel'? Yes. Free as in
'I've got plenty of time to play around with things'? No. :-)

Thanks for your suggestions though! I'll try to play around with them,
although I've decided to wait until next week, when I'm near the machine
and can press buttons if things go wrong. :-)

> Speaking of RAM, I don't know how big the dataset is that you're hitting
> so hard on the server, but is it small enough to be mounted on a RAM
> disk? All my compiles take place within a TMPFS that is whipped up on
> demand (a trick I originally learnt from temerge vs emerge on Gentoo) by
> a script. The performance gains are spectacular, unsurprisingly.

The dataset is many GB in size, but most of the IO takes place within
a small, but changing, subset.* A few months ago I rewrote everything
so that most of the IO now takes place on a TMPFS. A cron job moves
files to the hard drive once it's unlikely they'll change.
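
Roughly speaking, the cron job does something like the sketch below.
The paths and the one-hour "unlikely to change" threshold are only
illustrative placeholders, not the exact setup:

#!/usr/bin/env python3
# Rough sketch: move files out of a tmpfs once they have been idle
# for a while. Paths and the idle threshold are assumptions.
import shutil
import time
from pathlib import Path

TMPFS_ROOT = Path("/mnt/mail-tmpfs")   # tmpfs mount (illustrative)
DISK_ROOT = Path("/srv/mail")          # on-disk archive (illustrative)
IDLE_SECONDS = 3600                    # "unlikely to change" threshold

def migrate_idle_files() -> None:
    now = time.time()
    for src in TMPFS_ROOT.rglob("*"):
        if not src.is_file():
            continue
        # Leave recently modified files in RAM.
        if now - src.stat().st_mtime < IDLE_SECONDS:
            continue
        dst = DISK_ROOT / src.relative_to(TMPFS_ROOT)
        dst.parent.mkdir(parents=True, exist_ok=True)
        # Copy then unlink: a plain rename can't cross the
        # tmpfs/disk filesystem boundary.
        shutil.copy2(src, dst)
        src.unlink()

if __name__ == "__main__":
    migrate_idle_files()

Run from cron every few minutes, something along these lines keeps the
tmpfs from filling up while the recent, frequently touched files stay
in RAM.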

I have to say, the performance improvements were a little
disappointing, which probably shows I'm still doing something
seriously wrong somewhere. Or perhaps I'm just asking way too much of
the machine.

* They're all emails - a few dozen files for each email. These are all
stored in 'hourly' subdirectories and most IO happens in directories
corresponding to recent emails.

Martijn.



-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq