

Re: [LUG] Return to Linux


On 09/01/17 22:49, Daniel Robinson via list wrote:
> Thank you for your warm welcomes.
>     "Ubuntu Mate is as good a place to start as any, for a second option I'd
>     recommend going with Arch Linux or a derivative"
> I prefer Debian, it's just more sane than the other distribution
> flavours, I've been using it since Fedora lost the plot with Core 5 & 6.
> Two Raspberry Pi servers I have are set and forget, SSH'd into
> occasionally to do updates.
>     "As for your extremely sensible question about backup/restore of your
>     bare metal testing instance you unfortunately have the usual entire
>     plethora of *nix options to choose from - there are so many ways to do
>     this. I'm going to ignore user-level tools such as rsync and its many
>     front ends/wrappers, professional backup apps and a whole bunch of other
>     options and concentrate on the two that might actually help in your case
>     - one "online" and one "offline""
> When I wrote this topic's original email I was planning to run a backup
> strategy from the same computer, perhaps using a rescue disc of sorts to
> repair it, and saving snapshot images to the spare HDD's.
> Since your comments I have begun to like the idea of an offline
> backup.
> Sitting to my left is a Game server (old Q6600). I'm no longer hosting
> games from it so I am thinking perhaps using it as a backup server.
> Now, my knowledge isn't as great as I'd sometimes proclaim it to be, so
> what are your thoughts on PXE?
> Would it be simple enough to run a script using dd to create an ISO
> image of the SSD - then shove it through my LAN to the PC on my left to
> hold onto, then use a PXE boot to recover from said ISO?
> Have I got this completely wrong, is there a much better (ready made)
> solution to this? Please bear in mind I'm not the sharpest tool in the box.
> I'm just looking for something simple, power consumption isn't a
> concern, I would just like something quick and reliable for when I
> think, "Ohh - I'll just try this how-to guide" and click something to
> back up quickly. Then proceed with the guide.
> (because I'll get to the end of the guide and something is likely to go
> wrong because I didn't understand something correctly/completely).
> Would you mind providing some further suggestions on such a back up method?
>     "Save yourself a LOT of hassle by versioning important parts of your
>     filesystem (which is arguably all of it - another reason to have a
>     snapshotting COW system of course) but at minimum just install etckeeper
>     (sudo apt install etckeeper) which will save you when a badly behaved
>     package upgrade *doesn't* backup your original /etc configuration files
>     before replacing them with the new defaults. It's just generally a good
>     idea to version /etc anyway especially if you're doing heavy
>     configuration."
> When you say "versioning important parts of your filesystem", I
> interpret that as partitioning the hard disc for /home /var /etc /opt
> My current Ubuntu mint install just gobbles up the whole disc, I didn't
> bother to create a separate /home as all my data is stored away from
> this computer anyway.
> Please feel free to re-educate me, and please be specific if you can.

Well, there's a lot of stuff to address here so I'll try and be brief
for once...

Firstly, if you plan on doing so much potentially catastrophic
experimenting that your system will be faster to restore from full image
backup than simply fix, you need to rethink using Linux as your bare
metal OS. It sounds like you've got a fast gaming-style rig with power
to spare, so install a hypervisor on Windows (Hyper-V is native, VBox is
recommended, VMware costs £££) and run a beefy Linux VM for a while.
Given all the resources you can dedicate, running the image from an
SSD in full screen with 3D acceleration and guest additions loaded is
almost indistinguishable from the "real" thing these days. You'll get
snapshotting, clones and all the super easy management of virtualisation
and 95% of the bare metal experience - experiment to your heart's
content, spin up and try out new distros and ideas with 0% risk and no
lengthy downtimes as you muck about with image restorations.
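To make that snapshot habit concrete, here's a rough sketch using
VirtualBox's command line (the VM name "debian-lab" is just a
placeholder, and the same thing is two clicks in the GUI):

```shell
#!/bin/sh
# Hedged sketch of the snapshot-before-experiment workflow in VirtualBox.
# "debian-lab" is a placeholder VM name - substitute your own.
VM="debian-lab"
if ! command -v VBoxManage >/dev/null 2>&1; then
    echo "VBoxManage not found - install VirtualBox first"
    exit 0
fi
# Take a named snapshot before following a risky how-to guide
VBoxManage snapshot "$VM" take "pre-experiment"
# ...follow the guide, break things, swear a bit...
# Roll back in seconds instead of re-imaging (power the VM off first)
VBoxManage controlvm "$VM" poweroff
VBoxManage snapshot "$VM" restore "pre-experiment"
VBoxManage startvm "$VM"
```

Taking a snapshot before every experiment and deleting it once the
change proves stable keeps the snapshot chain short and restores fast.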

Once you're confident enough to not just casually trash your OS and are
more used to fixing minor hiccups, go bare metal of course. If you stick
with VBox/VMware then you can just reinstall either on Linux (they're
fully cross platform) and move all your extant VMs across as well.
Eventually you'll learn to test all potentially dangerous system
modifications in a VM *first* before applying them to any "production"
boxes, thus avoiding any need for bare metal restores :]

Moral of the story: if you're doing a lot of bare metal restores for any
other reason than hardware issues, you're doing it wrong. VMs are the
answer to everything these days and the only people who don't think that
are people who haven't used virtualisation enough yet!

Your old Q6600 is a case in point: that's a nice little box and a good
resource, so I'd recommend immediately installing - wait for it - a
hypervisor on it (a server-based one this time, so either Linux+KVM,
XenServer (recommended) or the free ESXi) which will immediately make it
considerably more useful. Install all your experiments for backup
servers, PXE booting and whatever else you want to do on it as VMs and
once again, you can immediately fix whatever you break whilst testing.
There is literally zero reason not to virtualise the sort of stuff you
want to play with.
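Before putting a hypervisor on that box it's worth a quick sanity check:
the Q6600 does support VT-x, but it has to be switched on in the BIOS.
The commented lines are roughly the Debian/Ubuntu install steps, not
gospel:

```shell
#!/bin/sh
# Sketch: check the CPU advertises hardware virtualisation before
# installing KVM. vmx = Intel VT-x, svm = AMD-V.
if grep -qE '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualisation available"
else
    echo "no vmx/svm flag - enable VT-x in the BIOS"
fi
# Then, on a Debian-ish base, roughly:
#   sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst
#   sudo adduser <you> libvirt
#   virsh list --all    # an empty list means libvirt is up and ready for VMs
```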

This will keep you busy for a while, so I'd hold off on your complex PXE
netbooting backup server setup until you've got the hang of the
essentials, but yes, you most definitely can do that. I actually do
exactly that scenario with (virtualised, obviously) Linux backup
servers that are scheduled to back up entire companies' PC assets
in the dead of night by wakeonlan'ing VLAN segments in rotation and
staging tiny boot images via DHCP/PXE to them which run scripts to
incrementally clone, sync or whatever accordingly to the NAS or SAN.
It's not exactly the sort of thing that you'd normally need or even want
for a home scenario though - talk about overkill! Getting all this lot
working reliably in production and properly tied in with reporting,
centralised logging, etc, is a pretty major technical exercise - in
fact, if you could knock all that out comfortably I know several places
that would employ you right now.
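For when you do get there, the core of your dd idea is sound - though
you get a raw image rather than an ISO, and the disk should be unmounted
(e.g. booted from a rescue disc) while you read it. A sketch of the
pipeline, demonstrated on a small scratch file so nothing real is at
risk; device and host names are placeholders:

```shell
#!/bin/sh
# Sketch of "dd the SSD and shove it over the LAN". Demonstrated on a
# scratch file; the real-device and ssh variants are in the comments.
SRC=./fake-ssd.img          # real life: /dev/sda - the whole, unmounted SSD
mkdir -p backups
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null   # stand-in "disk"
# Over the LAN you'd pipe straight into ssh on the backup box, e.g.:
#   dd if=/dev/sda bs=4M | gzip -1 | ssh backup-box 'cat > ssd-$(date +%F).img.gz'
dd if="$SRC" bs=1M 2>/dev/null | gzip -1 > backups/ssd.img.gz
# Always verify the round trip before trusting it for restores
gunzip -c backups/ssd.img.gz | cmp - "$SRC" && echo "image verified"
```

The verify step matters: a backup you've never test-restored is a hope,
not a backup.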

"Versioning" in this case was as in Version Control System, which is
another topic by itself so I'll just leave these links here:


In the context of versioning /etc, think of it more as a way of keeping
track of all historical changes to the directory contents, primarily to
monitor and potentially fix/reverse config changes. No dedicated
partitioning scheme is necessary.
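If the VCS links don't make it click, this is roughly what etckeeper
buys you, demonstrated here with plain git in a scratch directory rather
than the real /etc (the file name and contents are made up):

```shell
#!/bin/sh
# Sketch of what etckeeper does under the hood: /etc becomes a git repo,
# and every change is a commit you can diff and revert at will.
set -e
DEMO=$(mktemp -d)
cd "$DEMO"
git init -q
git config user.email demo@example.org
git config user.name demo
echo "PermitRootLogin no" > sshd_config
git add sshd_config
git commit -qm "baseline config"
# ...a badly behaved package upgrade clobbers your file...
echo "PermitRootLogin yes" > sshd_config
git diff --stat                  # shows exactly what changed
git checkout -- sshd_config      # and one command undoes it
grep "PermitRootLogin no" sshd_config
```

etckeeper does the commits for you automatically via apt hooks, so you
get this history without ever thinking about it.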

Not particularly brief but oh well. Seriously though, the answer to all
your questions initially is virtualise All The Things, at least until
you've stopped wrecking your host Linux OS to the point where re-imaging
is quicker than fixing. Make sense?

The Mailing List for the Devon & Cornwall LUG
FAQ: http://www.dcglug.org.uk/listfaq