
Re: [LUG] File system for heavy I/O

 

On 26/03/14 22:58, Martijn Grooten wrote:
> On Wed, Mar 26, 2014 at 10:00:17PM +0000, bad apple wrote:
>> * Sorry if I sound a little peeved, strictly no offence intended to
>> anyone but I seriously can't believe that two of you so far regard
>> custom kernels as some kind of pointless voodoo that's only done by
>> neckbeards for fun on their home slackware boxes.
> 
> When I said 'playing around', I meant I'd try some different kernel
> settings - making informed decisions about which settings are likely
> going to solve my issues - and see if they improve the situation.
> 
> Apologies for the confusion. Next time, I'll use the appropriate
> language and say I'll be going to spend a few hours in the realm of the
> divine sysadmins, hoping that the skills of a mortal like me will
> help me squeeze a few percentage of performance out of my inferior code.
> 
>> After all, it wasn't me who posted with critical performance problems on
>> my Linux server... probably because I don't have any. I wonder why!
> 
> Yes, I know it's my code that's sub-optimal in many ways. If I had three
> months of 40-hour weeks to spend on the code, I'd rewrite everything from
> scratch and end up with something significantly better. Unfortunately, I
> will have a hard time finding an hour or two to "play around". It's sad
> and it's frustrating, but such is life.
> 
> Martijn.


Thanks for yet another reply more balanced and equanimous than my
slightly tetchy post warranted... you just never get your rage face on
in public forums, do you? Whilst I normally manage not to fly off the
handle, I can't help being a bit barbed sometimes. The stupid internet
is slowly eroding my manners, it seems.

Well, it seems at this point that everyone - especially you, which
helps - is firmly pointing the finger at your code as the problem, which
is fair enough. Unless it's sensitive in some way, why not do the usual
thing these days and post it on GitHub? I'm not one of them, but there
are plenty of sharp coders on this list who could probably clean it up.

The only other sensible suggestion I have from atop my lofty perch in
the realm of divine systems administration is actually the simplest and
most ancient trick in our holy texts: throw more hardware at it, of
course, what else! However, if the bulk of your I/O is already
tmpfs-based, the SSD(s) I was going to suggest are hardly going to be
any faster, so scratch that.

Have you had a *really* good bash at the problem with iostat and vmstat?
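As a starting point, a diagnostic session might look something like the
sketch below (the device name sda is an assumption, substitute your own
disk):

```shell
# Extended per-device statistics at 5-second intervals. A saturated
# disk shows up as %util near 100, rising await (ms per request) and
# a growing avgqu-sz (request queue length).
iostat -x 5

# System-wide view at 5-second intervals. A persistently high 'wa'
# (CPU time spent waiting on I/O) and a non-zero 'b' column (processes
# blocked) point at storage rather than CPU as the bottleneck.
vmstat 5
```

Both tools ship in the sysstat/procps packages on pretty much every
distro, so there is nothing to build.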

The cfq scheduler is really pretty poor for anything but home PCs - at
least switch to the noop (preferred) or deadline elevator on any server.
Don't worry, these are already built into your kernel, so no monkeying
about with recompiles is needed to access them. You can switch to one at
boot by passing "elevator=noop" to the kernel as usual. Also set vm
swappiness to 0. These are things you should do anyway, and they will
take but a moment.
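For the record, both tweaks can also be made at runtime without a reboot
(they won't persist unless you make them so); again, sda is an
assumption:

```shell
# Show the elevators this kernel offers for the disk; the active one
# is shown in square brackets.
cat /sys/block/sda/queue/scheduler

# Switch to noop (or deadline) immediately; reverts on reboot.
echo noop > /sys/block/sda/queue/scheduler

# Set swappiness to 0 right now...
sysctl vm.swappiness=0

# ...and persist it across reboots.
echo 'vm.swappiness = 0' >> /etc/sysctl.conf
```

Needs root, obviously, but it means you can A/B test the elevators under
real load before committing anything to the boot line.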

Anyway, good luck in your limited time.

Cheers

-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq