
Re: [LUG] The Command Line

 

On 24/06/11 22:09, Joe Buckle wrote:
> 
> How many sessions would you have open at any one time? The other day I
> was SSH'd into 5 different servers just waiting for the nod on various
> SQL query commits. Most for me to date. Maybe 5 is small fry to you guys
> though - haha

I think if you have much more than that you want better tools to manage
them.

When I was doing Unix workstations I had scripts for "run this command
on 23 workstations". Sometimes the command was to open a shell and run
the software install app, because the software installer sometimes went
wrong, and with 23 workstations you could manually deal with the one or
two that might fail each time.
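
Roughly the sort of thing I mean, as a sketch (the hosts file, command
and ssh setup here are made up for illustration, not the originals):

  #!/bin/sh
  # Run one command on every workstation listed in a file, and note the
  # one or two that fail so they can be dealt with by hand afterwards.
  HOSTS=hosts.txt          # one workstation name per line (hypothetical)
  CMD="uname -a"           # whatever needs running everywhere

  while read -r host; do
      echo "=== $host ==="
      ssh -n -o ConnectTimeout=5 "$host" "$CMD" \
          || echo "$host" >> failed-hosts.txt
  done < "$HOSTS"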

I suspect there are relatively few organisations so large that they
genuinely need more than one server doing the same thing for performance
reasons these days (redundancy yes, but performance?). Sure, the Googles,
Facebooks and Amazons of this world probably do need a large number of
identically configured servers, but then "identically configured" scales
a lot more easily.

For desktop management I've mostly only dealt with numbers in the
hundreds, and at that point they all want to be identical, or in a very
limited number of configurations.

I worked closely with Sun at one point, and was involved with managing
lots of systems for them. Their approach was pretty straightforward:
internally they had corporate JumpStart profiles and recommendations,
but of course as an IT company they were heavily involved in managing
exceptions to these configurations. The tools used weren't amazingly
sophisticated, though, and most (all?) came on the regular Solaris
install media or from the BigAdmin website.

HP used to have some nice (but expensive) tools to group systems, stage
roll-outs, queue up actions for machines that are currently unavailable,
and the like (mostly Microsoft Windows and HP-UX, but they handled some
other systems as well). These were surprisingly well polished, so I
assume they were widely used at the time.

Recently many tools seem to assume your environment is more anarchic
than that: they hold some sort of "ideal" configuration, try to identify
systems that aren't ideal, and migrate them towards the Utopian view. HP
Labs in Bristol were also doing some security tools which worked in a
similar fashion, which seems to assume that big corporate networks will
always be a mess and you just need to know how bad it is.

That said, good scripting is essential for this sort of stuff, and the
first thing you learnt for big Microsoft Windows desktop deployments was
how to run a script when a user logs in, although a lot of what those
scripts used to do is now "built-in". I think the distinction here is
scripting versus the command line: the command line is cool, but scripts
are what allow you to easily specify how to handle variation and failure.
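
By handling variation and failure I mean something like this sketch
(hostnames and commands invented for the example):

  #!/bin/sh
  # Per-host variation lives in a case statement; failures are collected
  # into a retry list rather than stopping the whole run.
  while read -r host; do
      case "$host" in
          db-*) cmd="/usr/local/bin/patch-db"  ;;  # database boxes differ
          *)    cmd="/usr/local/bin/patch-std" ;;
      esac
      ssh -n "$host" "$cmd" || echo "$host" >> retry.txt
  done < hosts.txt

You can type the happy path at an interactive prompt easily enough; it's
the exceptions, and what to do when a host fails, that the script captures.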

-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq