
Re: [LUG] OT: Sage Line 50 (was Hard Drives)

 

rich@xxxxxxxxxxx wrote:
Hi Guys

Just to fill you in a bit. We have a Mac Pro with 2 quad-core processors and 8 GB of RAM on gigabit Ethernet. All the Mac Pro currently does is serve Sage Line 50 files. It is fast, very fast, but the boss wants to squeeze out more speed, e.g. a Sage search can take up to 12 seconds to process! OK, the real problem is that while one person waits for that search, other folks are getting locked out, and Sage is crashing maybe 2-3 times a day.

The real answer is to buy the proper tool. I've been trying to get the client to look at MySQL with some front-end support and build their own ERP/CRM tool; we could then switch to a full Linux system. However, in the meantime we have crashes to deal with.

So the data:
Gets written to a lot
Gets read a lot
Searches crash the computers

Any thoughts on whether a faster hard drive would make that much difference, please?

Thanks

Rich


Hi Rich,

I recently graduated from university as a mature student with a BSc(H) in Software Development[1]. I mention that so you know the following is based mainly on the theory we were taught rather than practical experience, although having been involved in IT for 20 years there is also a smattering of logic in there :) Also, I have never used Sage, so I cannot comment on the specific points that the more knowledgeable members of the LUG have raised.

Your original email mentioned that three drives are currently in use, and you asked whether a 15000rpm drive would solve the speed issues. I'm assuming from this new information that all the data from the three drives would be amalgamated onto the new one if you chose to do that? If that's the case, logically I would think the single drive would be handling 3x the R/W load at only about 2x the speed, so overall throughput would drop to roughly 2/3 of what you have now, i.e. around 1/3 slower.
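
To put rough numbers on that, here is the back-of-envelope sum I have in mind (the speed figures below are assumptions for illustration only, not measurements of your actual drives):

    # Back-of-envelope comparison of aggregate throughput (illustrative
    # figures only; real drives and workloads will differ).
    drive_speed = 1.0        # relative speed of one existing drive
    num_drives = 3           # current setup: three drives sharing the load
    current = num_drives * drive_speed      # best case, fully parallel = 3.0

    fast_drive_speed = 2.0   # assume a 15000rpm drive is roughly twice as fast
    proposed = fast_drive_speed             # one drive now does all the work = 2.0

    print(proposed / current)               # ~0.67, i.e. about 1/3 slower overall

Of course that assumes the three drives really do share the load fairly evenly at the moment, which is worth checking before buying anything.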

Given that you've mentioned three drives, can I assume you have some form of data warehousing going on, or do they simply house three different live databases? If it's the latter, could I suggest installing all three databases on the new drive as a master data warehouse, and using the three current drives as datamarts? Splitting the databases into more specific chunks should reduce the frequency of lockouts, as people would not be hitting the same data as often. If the marts sit on separate drives you also speed up access, because no single drive has to seek across so many places at once. The ideal solution might be for the databases to be served over separate network connections, putting less traffic on each.
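
As a very rough sketch of what I mean by routing different teams to different marts (the department names and paths are purely hypothetical, and I'm using SQLite only because it's self-contained; Sage obviously has its own file format):

    import sqlite3

    # Hypothetical layout: each datamart lives on its own drive/mount point,
    # so e.g. Sales and Payroll never contend for the same spindle.
    DATAMARTS = {
        "sales":      "/mnt/drive1/marts/sales.db",
        "purchasing": "/mnt/drive2/marts/purchasing.db",
        "payroll":    "/mnt/drive3/marts/payroll.db",
    }

    def query_mart(department, sql, params=()):
        """Run a query against the mart belonging to one department."""
        with sqlite3.connect(DATAMARTS[department]) as conn:
            return conn.execute(sql, params).fetchall()

    # A sales lookup then only ever touches the drive holding the sales mart:
    # rows = query_mart("sales", "SELECT * FROM invoices WHERE customer = ?", ("ACME",))

The master warehouse on the new drive would then just be the place the marts get refreshed from overnight, rather than something everyone queries during the day.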

Here's a theoretical suggestion, which may be shot down in flames: is it possible to add a second NIC to the Mac? If so, you could map different paths to different databases, e.g.:

database 1:  192.168.2.1/database1
database 2:  192.168.2.2/database2

That way, even though they might physically be on the same machine, they would be accessed via different network paths and live on different hard drives. The net result, I would think, should be faster access and fewer lockouts.
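
Purely as a sketch of the mapping I have in mind (the IP addresses come from the example above; the share names and paths are hypothetical, and actually binding the second address to the new NIC is a separate OS-level job):

    # Hypothetical client-side lookup: each data set is reached through a
    # different server address, so its traffic arrives on a different NIC.
    DATA_PATHS = {
        "database1": r"\\192.168.2.1\sage\database1",   # first NIC, first drive
        "database2": r"\\192.168.2.2\sage\database2",   # second NIC, second drive
    }

    def path_for(database):
        """Return the network path a workstation should open for a data set."""
        return DATA_PATHS[database]

    print(path_for("database2"))    # -> \\192.168.2.2\sage\database2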

Hope this helps somehow :)

Kind regards,

Julian

[1] Somehow managed to get a 1st :) Totally irrelevant to your question I'm afraid but I had to tell someone :)

--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/linux_adm/list-faq.html