On Sun, 15 Mar 2009, Matthew Macdonald-Wallace wrote:
> Hi Gordon,
>
> Quoting Gordon Henderson <gordon+dcglug@xxxxxxxxxx>:
>
>> So I'm putting together a small "High Availability" setup for a client
>> for a project they're running. It's a LAMPy type of server, so nothing
>> special there, and it'll be 2 servers running MySQL replication and
>> Linux HA between them - again, nothing special there.
>
> A few hints on this - store the PHP sessions in the MySQL database; this
> allows you to install Joomla and the like on a cluster without worrying
> about sessions. If you're using multiple vhosts, storing the Apache
> access logs in the database gives you hugely powerful data-mining
> possibilities. Email me off-list if you want more info on these.
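The database-backed session idea above can be sketched roughly like this. It uses sqlite3 so the example is self-contained; the post assumes MySQL, and in a real PHP setup you'd wire this up via session_set_save_handler(). The table and column names here are illustrative, not from the original:

```python
import sqlite3
import time

# Illustrative schema - in the LAMP setup described above this table
# would live in MySQL, shared by every web node in the cluster.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sessions (
        id      TEXT PRIMARY KEY,   -- PHP session id (from the cookie)
        data    TEXT NOT NULL,      -- serialised session payload
        updated INTEGER NOT NULL    -- unix timestamp, for garbage collection
    )
""")

def save_session(sid, data):
    # Upsert: whichever node handles the request can write the session.
    conn.execute(
        "INSERT INTO sessions (id, data, updated) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET data = excluded.data, "
        "updated = excluded.updated",
        (sid, data, int(time.time())),
    )

def load_session(sid):
    # Any other node can read it back - no sticky sessions needed.
    row = conn.execute(
        "SELECT data FROM sessions WHERE id = ?", (sid,)
    ).fetchone()
    return row[0] if row else None

save_session("abc123", "cart=3 items")
print(load_session("abc123"))  # -> cart=3 items
```

The point is that session state lives in the replicated database rather than on one box's disk, so the load balancer can send each request to either server.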
Thanks, but that's not really my issue here. It won't be running an off-the-shelf package, and I won't be responsible for that side of things. It's a dedicated application for my client, which they are writing for themselves.
Good blog entry about it, though. It's a sort of CRM-type thing to replace something they're already using, which is also hosted (but not by me, and they're paying for it, and it's not reliable!)
They're already an existing client of some years now, so I know how they work and what they get up to...
I'll speak to them about the session stuff though, thanks.
>> But one thing I'm pondering over - 2 disks in each server, or just one?
>> Now, this client is on a bit of a budget here, and although disks
>> aren't that expensive, we could spend the £130 or so elsewhere (1TB in
>> each box, and WDC drives are in the £60 range now). The server is going
>> to the co-lo, which is 4-5 hours away, and normally I'd not think twice
>> about using 2 drives per server running in RAID-1 (mirror) mode, but 2
>> servers gives us 2 drives...
>
> 4 disks minimum IMHO, two in each server configured as RAID-1. If you
> can afford it (although seeing the budget I suspect not!) 6 disks in
> each server, RAID-5 mirrored as RAID-1 (is this RAID-6 or RAID-15? I
> can never remember, and I don't trust Wikipedia any more!).
Power is also a consideration here (as in electrical power). CPU power isn't really an issue, and RAID-6 is overkill. I don't currently pay for juice in my co-lo, but they make mutterings about it from time to time. I suspect if I were to turn up with some monster servers they might start!!! (One reason I'm keen to move to lower-power systems when I think they'll work OK.)
I'd rather 2 disks per server myself, but I know how they operate. I'm also confident that if one drive/server fails, the other will be just fine for the duration of resurrecting the first server, and I also know the impact of downtime to them. Part of my plan (discussed with them) is to use one of their other servers (I host a few more for them already) as a "cold standby" too. Of course, they might end up paying me more than simply buying more drives ;-)
After spending many years building and running biggish servers for companies up in Silicon Gorge (aka Bristol ;-), I'm more of the opinion that the more spindles you have, the more trouble you get - and for the capacity, RAID-1 (mirroring) is fine for capacities up to 1TB now, and 2TB in a few weeks' time.
(And one company just sold off a fully populated Dell MD1000 unit, plus a server to go with it, which I built for them ... 15x500GB drives in 2 SAS chains. Almost wanted to bid for it myself, but I couldn't afford the electricity to run it!)
I will not *ever* build a server with RAID-5 any more, either. It's mirroring or RAID-6. After losing an array of about 1TB some years back, when a second disk failed in a RAID-5 array during the rebuild following the first drive's failure, it's not somewhere I want to go again - so now it's RAID-6 for anything with more than 2 disks, with regular disk surface scans.
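The rebuild failure mode described above can be put in rough numbers. This is a back-of-envelope sketch, not from the post: the 10^-14 unrecoverable-read-error (URE) rate is an assumed datasheet figure typical of consumer drives of that era, and the model treats errors as independent per bit:

```python
import math

# Assumed figures (not from the post): consumer drives were commonly
# specced at ~1 unrecoverable read error (URE) per 1e14 bits read.
URE_PER_BIT = 1e-14
REBUILD_READ_BYTES = 1e12  # a RAID-5 rebuild re-reads ~1 TB of surviving data

bits_read = REBUILD_READ_BYTES * 8
# Probability of at least one URE during the rebuild:
# 1 - (1 - p)^n, computed stably as 1 - e^(-n*p) via expm1.
p_fail = -math.expm1(-bits_read * URE_PER_BIT)
print(f"chance of hitting a URE during a ~1 TB RAID-5 rebuild: {p_fail:.1%}")
```

Under these assumptions that comes out to a few percent per rebuild - and with RAID-5 a single URE at that point means a failed rebuild, whereas RAID-6's second parity can still recover it. That is the argument being made above.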
Using RAID-1 to mirror 2 RAID-5 arrays is novel though :) Linux has true RAID-6 with 2-drive redundancy. I have one server of my own right now with 5 drives in a RAID-6 configuration. It seemed like a good idea at the time, but it was just before drive capacities skyrocketed and prices dropped - it's 5 x 360GB drives, and I'd replace them tomorrow with a pair of 1TB drives if I could justify spending the money on myself!
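The usable-space arithmetic behind the comparison above goes like this (a quick sketch using the standard formulas for equal-sized drives; the drive sizes are the ones mentioned in the post):

```python
def usable_gb(level, drives_gb):
    """Usable capacity for a set of equal-sized drives under Linux md RAID."""
    n, size = len(drives_gb), min(drives_gb)
    if level == "raid1":   # n-way mirror: capacity of one drive
        return size
    if level == "raid5":   # one drive's worth of parity, survives 1 failure
        return (n - 1) * size
    if level == "raid6":   # two drives' worth of parity, survives 2 failures
        return (n - 2) * size
    raise ValueError(f"unknown level: {level}")

# The 5 x 360GB RAID-6 box mentioned above:
print(usable_gb("raid6", [360] * 5))     # -> 1080
# versus the proposed replacement pair of 1TB drives, mirrored:
print(usable_gb("raid1", [1000, 1000]))  # -> 1000
```

So the five-spindle RAID-6 array and a mirrored pair of 1TB drives give almost the same usable space, which is why swapping down to 2 drives is attractive once big drives get cheap.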
> There's even a part of me nowadays that would design an HA cluster
> using SAN or NAS as the storage backend and remove all storage
> requirements from the servers. It also has the added benefit of
> allowing you to add extra nodes to the cluster without worrying about
> replicating the data - just mount the SAN/NAS share in the appropriate
> place.
> And if the SAN/NAS box is a Linux box ... ;-)

Where do you draw the line, though... If I was offering general-purpose hosting, etc., it's certainly something I'd consider - an HA filestore with front-end servers. Almost everything I do is custom though, so it doesn't always lend itself to that sort of approach.
I did use one set of SCSI RAID controllers many years back that had 2 host interfaces on them. We had dual-SCSI controllers in each of the servers - so 8 servers, 8 disk units, but each server could see 2 disk units. That was (um) 1995, and I've never seen anything like them since. It wasn't without its issues though, so maybe I'm not surprised!
And look out for Drobo if you're into that sort of thing....
Hope this is of some help,
-- The Mailing List for the Devon & Cornwall LUG http://mailman.dclug.org.uk/listinfo/list FAQ: http://www.dcglug.org.uk/linux_adm/list-faq.html