

Re: [LUG] make -jn

 

On 22/02/15 21:56, Jay Bennie wrote:
You're not going to like this:

Most programs won't run faster; ironically, simple programs that are multi-threaded
can run more slowly because of the overheads of parallel programming.

What you will find is that separate apps running in parallel run more efficiently, and
daemon processes that spawn threads return results faster.


In low-end multi-processor systems it's all about the speed of the UI experience: when
you can have one PROCESS working the IO, one working the display, one doing the work
and another doing background tasks, things go a lot smoother than having it all on one
core with each task blocking the others.
And this is not the same as making a SINGLE-threaded computation doing real work
(i.e. crunching some numbers) MULTI-threaded.

A single computation that can be converted to a parallel calculation can be made to
scale well up to 8-way; beyond that you get diminishing returns (excluding specific
use cases).
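The diminishing returns here are the usual Amdahl's-law effect (my framing, not stated in the thread): if a fraction p of the job parallelises perfectly over n cores, the best-case speedup is

```latex
S(n) = \frac{1}{(1 - p) + p/n}
```

With p = 0.9, S(8) is about 4.7, and even infinitely many cores only give S = 10, so around 8-way is where the curve flattens for typical workloads.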


Most systems therefore use a mix of techniques to improve overall throughput, not
single-task throughput: 4 cores with 4 threads = very fast; 1 core with 1 thread =
very slow; 4 cores with 1 thread is better than 1 core with 4 threads; 2 cores with
2 threads is a good balance.


So I feel you might get some benefit from doing make -j2, but don't do make -j4:
if you take up all the capacity, your overall system response will be slowed, which
might cause unpredictable IO blocking and task swapping.
By using 2, you don't hog the available threads, but still allow the parallel option
where it exists.
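A quick way to follow that "half the machine" advice without hard-coding a number (a sketch using `nproc` from coreutils; the halving rule is the suggestion above, not a universal constant):

```shell
#!/bin/sh
# Pick a make job count that leaves headroom for the rest of the
# system: half the available cores, with a minimum of 1.
cores=$(nproc)
jobs=$(( cores / 2 ))
[ "$jobs" -ge 1 ] || jobs=1
echo "make -j${jobs}"
```

On a 4-core box this prints `make -j2`; on a single-core box it falls back to `make -j1`.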

Also, make -j2 won't change the binaries output, only the way it compiles them.
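You can convince yourself of that directly: build the same tree serially and in parallel and compare checksums. A self-contained sketch with a throwaway Makefile (substitute your real project; the file names here are made up for the demo):

```shell
#!/bin/sh
# Build the same toy Makefile with -j1 and -j2 and check that the
# output file is byte-identical either way.
build() {  # build <jobs>: run the toy build, print a checksum of the result
    dir=$(mktemp -d)
    printf 'out: a b\n\tcat a b > out\na:\n\techo A > a\nb:\n\techo B > b\n' \
        > "$dir/Makefile"
    make -C "$dir" -j"$1" >/dev/null 2>&1
    cksum < "$dir/out"
    rm -rf "$dir"
}
sum1=$(build 1)
sum2=$(build 2)
if [ "$sum1" = "$sum2" ]; then echo "identical output"; else echo "outputs differ"; fi
```

(The caveat Tom raises below still applies: if the build system itself inspects the job count and changes compiler flags, the outputs can legitimately differ.)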


Do a kernel + modules compile and time make -j1 vs make -j2; at a guess it might
be 30%+ faster.
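A rough harness for that timing comparison. This sketch uses a throwaway Makefile whose targets just sleep, so it runs anywhere; point `make -C` at a configured kernel tree for the real measurement:

```shell
#!/bin/sh
# Compare wall-clock time of a serial vs parallel make run.
# The toy Makefile has four independent 1-second targets, so -j4
# should finish roughly 4x faster than -j1.
tmp=$(mktemp -d)
printf 'all: a b c d\na b c d:\n\tsleep 1\n' > "$tmp/Makefile"

elapsed() {  # elapsed <jobs>: print seconds taken by make -j<jobs>
    start=$(date +%s)
    make -C "$tmp" -j"$1" >/dev/null 2>&1
    echo $(( $(date +%s) - start ))
}

t1=$(elapsed 1)
t4=$(elapsed 4)
echo "make -j1: ${t1}s  make -j4: ${t4}s"
rm -rf "$tmp"
```

For a real kernel build, run `make clean` between timings so both runs start from the same state.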


On 22 Feb 2015, at 21:16, Tom <madtom1999@xxxxxxxxxxxxxx> wrote:

Wot with all these multiprocessor jobbies about these days, I've been trying to find a
valid reason for choosing make -j(n+1 or 2), where n is the number of cores. I've not
been able to find anything 'scientific', just suggestions that adding one or two to
the core count may allow some of the tasks waiting for IO to be replaced with
other tasks, but that's wishful thinking rather than 'good software engineering'.
Does anyone know different, or how to test a running program so the value can be optimised?
Tom te tom te tom

--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq


Most of the things I'm interested in will run faster - I tend to do things that need lots of CPU - FLASH from the University of Chicago does stellar explosions amongst other things, and the number of CPUs available is used in building the models. Most of the music stuff I want the Pi for benefits from multicore compilation.

I'm trying to find some way of analysing what's good and what's bad. I now see that the -j is not passed on to the finished job in all cases - some build code uses it to optimise the code (or at least I think that's what's going on - some of the build processes are larger and more complicated than the programs themselves these days). I often wonder if we are reaching some kind of plateau where things are just so complicated you can't get round to understanding them fully...
Tom te tom te tom
