
Re: [LUG] File system for heavy I/O

 

Martijn,

Interesting. I have been in a similar situation, although the load sounds lower than yours - user wait times (it was a web app) were noticeable and I wanted to bring them down.

Also not being an amazing and talented programmer, but a pragmatist, I was firmly of the opinion that better hardware was cheaper than the time I'd spend optimising a fairly extensive system and learning how to do it - all the while creating a real risk of service failure in a working environment.

So in my case, I threw hardware at the problem until it went away. Faster processor, more RAM - but of course the biggest difference was disks. Specifically SSDs, and if you have a real need, RAID-striped SSDs. If you're running on spinny disks now, just that may be enough of a change. (Also, if there's a database, I've made improvements by moving it off the existing heavily I/O-bound server to another one nearby - the database traffic was light on the LAN. That dealt with database lookups and responses, which freed CPU and I/O on the local server, and actually negated a hardware change on that occasion.)

But that's not the question you asked...

I have played around a little bit by creating small partitions and formatting them with different filesystems. I wrote a script to test various reads, writes, deletes and updates on files, and timed each filesystem on the same workload. It took an evening, was moderately interesting, and some filesystems were unusually slow (journalling ones, typically). I forget my exact results, but a quick google suggests others have done the same. Here's one a couple of years old that has a script:

http://unix.stackexchange.com/questions/28756/what-is-the-most-high-performance-linux-filesystem-for-storing-a-lot-of-small-fi
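A rough sketch of the kind of timing script I mean, in case it's useful - the directory, file count and payload are all placeholders, so point TESTDIR at a mount of whichever filesystem you want to measure:

```shell
#!/bin/bash
# Rough small-file benchmark: create, read back, then delete many tiny
# files in a target directory, timing each phase. TESTDIR and COUNT are
# placeholders; run it once per candidate filesystem and compare.
TESTDIR="${TESTDIR:-/tmp/fs-bench}"
COUNT="${COUNT:-1000}"

mkdir -p "$TESTDIR"

echo "Writing $COUNT small files..."
time ( i=0
  while [ "$i" -lt "$COUNT" ]; do
    echo "some small payload $i" > "$TESTDIR/file$i"
    i=$((i + 1))
  done )

echo "Reading them back..."
time ( cat "$TESTDIR"/file* > /dev/null )

echo "Deleting them..."
time ( rm -f "$TESTDIR"/file* )

rmdir "$TESTDIR"
```

Bear in mind that page cache effects will flatter the read phase unless you drop caches (or unmount and remount) between runs.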

There are probably various sites that compare filesystems with pretty graphs too, but unless you test it yourself with the exact use-case you have, there's a risk that other people's results won't be realistic enough for you.




On 21 March 2014 20:35, Martijn Grooten <martijn@xxxxxxxxxxxxxxxxxx> wrote:
I have some servers that run software (scripts, daemons) that do a lot
of input/output (typically writing to and reading from small files). So
much that occasionally, the load becomes too heavy and a server
becomes unresponsive and has to be rebooted.

Now the person who wrote those scripts is neither a fantastic
programmer, nor a brilliant sysadmin (actually, it is me), so while a
lot of improvements have been made that reduce the amount of I/O, I'm
sure a lot more can be done. But that's not feasible at this moment.

What is feasible is to migrate to a new server, where I'm able to choose
the file system. Is there a Linux file system that is particularly good
at dealing with this kind of stuff?

Thanks!

Martijn.


--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq