Re: [LUG] File system for heavy I/O

 

On 21/03/14 20:35, Martijn Grooten wrote:
> I have some servers that run software (scripts, daemons) that do a lot
> of input/output (typically writing to and reading from small files). So
> much that occasionally, the load becomes too heavy and a server
> becomes unresponsive and has to be rebooted.
> 
> Now the person who wrote those scripts is neither a fantastic
> programmer, nor a brilliant sysadmin (actually, it is me), so while a
> lot of improvements have been made that reduce the amount of I/O, I'm
> sure a lot more can be done. But that's not feasible at this moment.
> 
> What is feasible is to migrate to a new server, where I'm able to choose
> the file system. Is there a Linux file system that is particularly good
> at dealing with this kind of stuff?
> 
> Thanks!
> 
> Martijn.
> 
> 


What are the specs of the current overwhelmed box, so we know what we're
aiming for? In particular, what are the current storage hardware and
filesystem? Any RAID? There are probably still tricks to speed it up,
such as kernel tuning for small-file I/O. If it's XFS you could enable
delayed logging; on ext3/ext4 you could add noatime as a mount option,
and so on. It would also help to know what the server load looks like -
caching may help.
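As a rough sketch of the mount-option side (device and mount-point names
are placeholders, adjust for your setup; delaylog is the default on XFS
from kernel 2.6.39 onwards, so it only needs stating on older kernels):

```shell
# Example /etc/fstab entries:
#
# ext4: noatime skips access-time updates, which otherwise turn
# every read of a small file into a metadata write as well:
#   /dev/sdb1  /data  ext4  noatime,nodiratime  0  2
#
# XFS with delayed logging (explicit on pre-2.6.39 kernels):
#   /dev/sdb1  /data  xfs   noatime,delaylog    0  2

# noatime can also be applied to a mounted filesystem without a reboot:
mount -o remount,noatime /data
```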

Banging your new system together and doing some quick benchmarking
(sysbench, bonnie++, dd - there are many tools to choose from) with the
obvious candidate filesystems is the only true answer, of course.
Throwing money at hardware for parallel data streams is even more
obvious: ZFS with tons of RAM and SSDs for the ZIL/L2ARC caches, with
the spinning disks spread across as many controllers as possible,
ideally.
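Something like the following would be a starting point for the
benchmarking (file counts and sizes are illustrative; sysbench and
bonnie++ need installing first, and the sysbench syntax shown is the
older --test= form):

```shell
# Crude small-file write test with dd: 4 KiB synchronous writes,
# oflag=dsync commits each block, mimicking fsync-heavy scripts.
dd if=/dev/zero of=/tmp/ddtest.img bs=4k count=1000 oflag=dsync
rm -f /tmp/ddtest.img

# sysbench fileio exercises random reads/writes over many small files:
#   sysbench --test=fileio --file-total-size=2G --file-num=128 \
#       --file-test-mode=rndrw prepare
#   sysbench --test=fileio --file-total-size=2G --file-num=128 \
#       --file-test-mode=rndrw run
#   sysbench --test=fileio cleanup

# bonnie++ adds file creation/deletion rates to the mix:
#   bonnie++ -d /mnt/test -n 128

# ZFS sketch: mirrored spinning disks, SSD partitions for the ZIL (log)
# and L2ARC (cache) - all device names are placeholders:
#   zpool create tank mirror sda sdc mirror sdb sdd \
#       log ssd1 cache ssd2
```

Run the same benchmark on each candidate filesystem on the same
hardware, otherwise the numbers aren't comparable.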

More info would really help.

Regards

-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq