Re: [LUG] double your money for a kernel upgrade

 

Gordon Henderson wrote:
On Mon, 7 Sep 2009, Simon Waters wrote:

That sounds wrong.

Even the benchmark folk don't claim that lead for Chrome's JavaScript.

Sure there isn't a plugin fouling things up? The JIT definitely works on this
processor?

Dunno. It's an Intel Atom or 4. (Dual core HT - 2GB of RAM)

I have a real processor and 2GB on the test box to hand.

Just tried again and Chrome is definitely a lot faster than FF3.5.2. Maybe I'm missing some other tweak in FF - I haven't really looked that much into making it faster.

This:

  http://games.slashdot.org/article.pl?sid=09/09/07/1417255

took about 20 seconds before it printed 'Done' in Firefox, and about 14 in Chrome. A reload in Firefox takes 17 seconds now. Firefox just has User Agent Switcher (disabled), Adblock Plus and a Flash blocker installed, nothing else.

Hmm, I got about 3 seconds in Iceweasel 3.0; then I disabled NoScript and it took 6 seconds (fsdn.org IIRC - did it this morning).

The "script" on Slashdot is stylistically unusual.

Might be better to try one of the well-known JavaScript benchmarks and see whether your results are consistent with other people's before digging deeper, as I was wondering if it's a Slashdot-specific issue.
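
For a quick sanity check, a crude timing page does the job - a minimal sketch only (the string-building workload is my arbitrary choice; the proper suites like SunSpider or the V8 benchmark exercise far more of the engine):

  <script>
  // Crude timing sketch - save as an HTML file and load it in each
  // browser. The workload is arbitrary; real benchmark suites are
  // far more representative.
  var start = new Date().getTime();
  var s = "";
  for (var i = 0; i < 200000; i++) {
    s += i.toString(36);
  }
  document.write("200k string appends: " +
                 (new Date().getTime() - start) + "ms");
  </script>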

Now at work I have a fun one to try and disentangle. Anyone know how Squid decides to manage outbound connections?
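
For reference, the only squid.conf directives I know of that touch the server side of this are the persistent-connection ones - a sketch, with Squid 2.x names and what I believe are the shipped defaults, worth checking against your version:

  # squid.conf fragment (sketch). Directive names are real Squid 2.x
  # ones; values are the defaults as I remember them.
  server_persistent_connections on   # reuse connections to origin servers
  pconn_timeout 120 seconds          # how long an idle server connection is kept
  client_persistent_connections on   # keep-alive towards the browser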

A page with 200-odd elements from the same "server". Does max-connections-per-server in Firefox apply per URL domain or per IP address (I can add IP addresses more easily!)? With the default max-connections-per-server it wastes 10 seconds queuing on a 20-second page load (ouch). I'd prefer not to invent extra server names just to get sensible browser behaviour, and as always there is more to the problem than a lot of requests to the same server (hence the interest in Squid behaviour). Some prefs to experiment with are sketched below.
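
The related prefs can go in about:config or a user.js - the pref names are genuine Firefox 3.x ones, but the values below are just starting points for testing, not recommendations (and since this goes via Squid, the per-proxy limit may be the one that actually bites):

  // user.js sketch for playing with Firefox 3.x connection limits.
  // Pref names are real; values are guesses to experiment with.
  user_pref("network.http.max-connections", 30);
  user_pref("network.http.max-connections-per-server", 15);
  user_pref("network.http.max-persistent-connections-per-server", 6);
  // Behind a proxy this may matter more than the per-server limit:
  user_pref("network.http.max-persistent-connections-per-proxy", 8);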

Some of the Firefox bug reports on tuning the max-connections-related parameters are interesting.

Also, this page triggers an error I've not seen in Google Page Speed before:

Due to a URL conflict, the Firefox browser cache can store only one of these resources at a time. Changing the URLs of some resources can fix this problem. Consult the Page Speed documentation for information on how to disambiguate these URLs.

Looks like hash collisions, or some such.
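
If it is the cache keying entries on a hash of the URL, the failure mode would look like this toy model - purely illustrative, with a deliberately tiny hash range, and nothing like Firefox's real cache code:

  // Toy model: a cache keyed by a too-small hash of the URL, so two
  // URLs that hash alike can only be cached one at a time.
  function toyHash(url) {
    var h = 0;
    for (var i = 0; i < url.length; i++) {
      h = (h * 31 + url.charCodeAt(i)) % 256;  // tiny range on purpose
    }
    return h;
  }
  var cache = {};
  for (var i = 0; i < 300; i++) {
    // A colliding store silently evicts the earlier entry.
    cache[toyHash("http://example.com/res" + i + ".js")] = i;
  }
  var kept = 0;
  for (var k in cache) kept++;
  alert("stored 300 resources, cache holds " + kept);  // always < 300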

Either way, if I'm seeing a 50% change from blocking one bit of JavaScript, and on another page a 50% change from tweaking connections to the server, one does wonder how relevant shaving another 5% off JavaScript performance is in the grand scheme of things. Of course, shaving 5% off JS performance is easier to conceptualise, implement and test than, say, adaptive tuning of connection limits, which is mired in real-world experience and imponderables.

--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/linux_adm/list-faq.html