

Re: [LUG] diy cloud

 

On Wed, 27 Oct 2010, tom wrote:

On 27/10/10 12:27, tom wrote:
http://www.linuxjournal.com/article/10626

Might try that later...
tom te tom te tom

Rats!
Only one of my five machines has VT extensions....

I'm quite bemused by all this talk of cloud computing - I don't think any two people would give the same answer if you asked them what it is!

I think it's grossly overhyped.

If you think "cloud" is just a name for a bunch of physical or virtual servers then that's close enough for me... The key part is how you manage it - after all, we've been clustering processors, etc. for at least two decades now, and probably a lot longer before it became popular. (VAX clusters, anyone?) I built a clustered supercomputer in 1992 which had 256 compute nodes; each node had 128MB of RAM, 1GB of disk and dual 66MHz Hypersparc processors... Was that a "cloud"? Years before that we had huge transputer-based computers - were they clouds?

I use the word cloud because it's a good buzzword, and as far as I'm concerned it's just a fancy term for a virtual server - you sell punters a real server for X, or a virtual server for Y, where Y is a fraction of X, and N times Y is always > X, where N is the number of virtual servers on one physical server... So you make more money selling virtual servers because you oversell to a degree, knowing that most people will never utilise all the resources they're given...
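The sums above are easy to sketch with made-up numbers (the prices and the packing factor below are purely illustrative, not anyone's real tariff):

```shell
# Hypothetical figures, just to show the N*Y > X overselling sum:
X=100   # monthly price of a real (dedicated) server
Y=20    # monthly price of one virtual server - a fraction of X
N=10    # virtual servers packed onto one physical box

echo $(( N * Y ))   # revenue per physical server: 200, comfortably > X
```

The margin only holds, of course, as long as most punters stay well under their nominal allocation.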

This is a model that web hosts have been using for years with apache virtual hosts, and so on...

So I'm a bit cynical about the whole cloud thing (other than as a marketing buzzword) - it will not make our programs run faster, will not give us more resources (on the contrary, it's giving us less), and may not help with disaster recovery (though it might be a part of it). And migrating to more resources? Sure, but would you mind shutting it down before we migrate it for you... (Actually, I'm sure it can be done live, but is it really needed all the time?) If you want to make your programs run faster, you're looking at proper parallelisation, and really you want dedicated hardware for that, so it might not be a solution - except it might be cheaper... You don't own the computers, someone else does, and you just rent them when you need them... Hm. Didn't we do this in the 70's? :)

Anyway, if you want server virtualisation in a softer way, then look at LXC - it's what I use. I have a few dozen LXC instances hosted over a much smaller number of physical servers - it's a "soft" virtualisation rather than "hard" - i.e. each instance shares the same kernel, so no running Windoze in one container, Linux in another, etc., but it is virtualisation in that each container has its own network stack, process ID list, view of the filesystem, etc.
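To give a flavour of how little there is to an LXC container definition, here's a sketch of a minimal config using the legacy-style `lxc.*` keys (the container name, bridge name and paths are all made up for illustration - adjust for your own distro):

```shell
# Write a bare-bones LXC config: own hostname, own veth NIC on a bridge,
# own root filesystem. (All names/paths here are illustrative.)
mkdir -p /tmp/lxc-demo
cat > /tmp/lxc-demo/config <<'EOF'
lxc.utsname = demo
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.rootfs = /var/lib/lxc/demo/rootfs
EOF

grep -c '^lxc\.' /tmp/lxc-demo/config   # 5 settings - that's the whole config

# To actually launch it you'd need root, the lxc tools and a populated rootfs:
#   lxc-create -n demo -f /tmp/lxc-demo/config
#   lxc-start  -n demo -d
```

Because the container gets its own UTS, network and PID namespaces from those few lines, `ps` inside it sees only its own init and children - hence the short process list further down.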

My own shared hosting server (hydra.drogon.net - it holds the opendevon.org.uk site amongst others) is an LXC container inside another host... I'm using it more for server management than anything else - even when a client wants a dedicated server, I put one container on the host server - it makes it much easier to manage should the hardware fail, or if the client wants to move to different hardware, etc.

Its ps list starts:

gordon @ hydra: ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:37 init [2]
   84 ?        Ss     2:27 /sbin/syslogd
   89 ?        Ss     0:08 /sbin/klogd -x
  103 ?        Ssl   14:35 /usr/sbin/named -u bind
...

Note subtle lack of kernel processes :)

The key thing for me is how easy it is to migrate from one physical host to another, or how to allocate more (or less!) resources to a container. Right now, an LXC container can't be live migrated - my migration involves: rsync, shutdown, rsync again, boot on new hardware - that can take seconds to minutes depending on the 2nd rsync time. I'm currently giving most of the containers an equal share of all the system resources too, but there are ways to limit that which I'm looking into for another project.
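That rsync / stop / rsync / boot dance can be sketched as a small shell function - the target hostname, container name and paths below are placeholders, not my real setup:

```shell
# Sketch of the migration steps described above. Everything here
# (newhost, paths, container name) is a placeholder for illustration.
migrate_container() {
    c=$1                               # container name, e.g. hydra
    src=/var/lib/lxc/$c/rootfs/
    dst=newhost:/var/lib/lxc/$c/rootfs/

    rsync -aH --delete "$src" "$dst"   # 1st pass: bulk copy while it still runs
    lxc-stop -n "$c"                   # downtime starts here
    rsync -aH --delete "$src" "$dst"   # 2nd pass: only the recent delta, so fast
    ssh newhost lxc-start -n "$c" -d   # boot the container on the new box
}
```

The downtime is just the second rsync plus the container boot, which is why a busy container (lots of churn between the two passes) migrates more slowly than a quiet one.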

LXC is arguably the fastest virtualisation mechanism, as there is only one kernel and it doesn't need VT or any special hardware support. One kernel switching a lot of processes running natively, rather than one kernel switching a number of kernels, each switching a number of processes...

The documentation is sparse but getting better. It's actually easy to set up with nothing fancier than a few shell scripts.

You can put as many instances together on one server as you have the resources for - my test bed was an old Celeron 1.8GHz box with 512MB of RAM - I created 20 containers on that and ran 20 instances of my PBX inside it - it worked remarkably well under stress test - so much so that it's now part of my commercial offerings for a hosted PBX! (On new hardware, though!)

--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq