
Re: [LUG] Return to Linux

 

On 09/01/17 09:15, Daniel Robinson via list wrote:
> Hello folks,
> 
> I've returned to using Linux as my desktop OS after a few years away.
> I'm using Ubuntu MATE for simplicity (minimal).
> 
> # Could you advise on a system backup solution?
> I would like to be able to take a snapshot of my system so that when I
> inevitably break it I can quickly and easily restore it.
> 
> My system drive is currently a 120GB SSD, and I have 2x 1TB HDDs for
> backups. Private data is stored on another device.
> 
> _
> 
> I often follow guides online to find solutions to problems. Quite often
> the guides don't fix my issue; I end up changing settings and running
> apt-get install a lot, and my system soon fills up with a great deal of
> stuff it doesn't require, and often things break.
> 
> # How do you keep track of the applications and dependencies that are
> installed on your system via apt-get?
> 
> Is there a line of commands that can undo all of these without reverting
> to a system backup?
> 
> _
> 
> Ports 135 - 139
> 
> I've noticed that although these ports are closed, they're not stealthed.
> 
> # Should I be concerned?
> 
> Many thanks


Ah, Daniel-san - welcome back!

You're obviously not a complete beginner, which makes things a little
easier. I'd start off by splitting your 120GB SSD straight down the
middle, giving yourself enough space for two side-by-side bare-metal
Linux installations for now: partly so you can experiment with different
distributions, and partly so you've always got a second Linux instance
left to boot and chroot into the other one to fix it in the event of
"accidents". It looks like you've wisely stashed your important data
elsewhere as well, so while you're easing back in I'm presuming you can
survive completely trashing the drive, with the worst result being a
reinstall? So far so good.
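For reference, the "boot one install to fix the other" trick looks
something like this - a sketch only, assuming the broken install lives
on /dev/sda2 (check what's actually where with lsblk first):

```shell
# From the working install (or a live USB):
sudo mount /dev/sda2 /mnt                 # the broken root filesystem

# Bind-mount the virtual filesystems so tools inside work properly:
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done

sudo chroot /mnt /bin/bash                # now "inside" the broken system
# ...fix packages, reinstall GRUB, edit configs, etc., then:
exit

for d in dev proc sys; do sudo umount /mnt/$d; done
sudo umount /mnt
```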

Ubuntu MATE is as good a place to start as any; for a second option I'd
recommend going with Arch Linux or a derivative (Manjaro, Antergos, etc.)
- it's what all the cool kids are using these days. It's a sophisticated
rolling release with arguably the best forums, support and
community-contributed package selection (I think the selection in Arch's
AUR at least rivals the Ubuntu PPAs by now, if not surpasses them). It
took me a while to warm to it personally - I'm a Debian sort of guy at
heart - and there can be more than a whiff of the old Gentoo elitism
around it at times, but Arch is well worth a look these days.

As for your extremely sensible question about backup/restore of your
bare-metal testing instance, you unfortunately have the usual plethora
of *nix options to choose from - there are so many ways to do this. I'm
going to ignore user-level tools such as rsync and its many front
ends/wrappers, professional backup applications and a whole bunch of
other options, and concentrate on the two that might actually help in
your case - one "online" and one "offline".

Offline is easiest so I'll do that first - this is the good
old-fashioned "boot from a different system and image the disk" option.
It's slower, rather crude and interrupts your workflow, but it's by far
the most bullet-proof method. Boot from USB and use a fancy tool, or
just good old "dd if=/my/ssd of=/my/2TBdrive/images/$date.img bs=1M",
to clone the entire drive - boot sectors, multiple OSes and all - to a
flat file; when things go south, just reverse the process. Not exactly
granular or subtle, and in my experience you usually end up skipping
the full disk dump that One Damn Time before running the operation that
does actually trash the system, but it is robust. If you still have a
Windows install sitting around on that system, even better - look into
using a free (as in beer only) imaging tool instead, such as Macrium
Reflect. That way you have a more functional OS to loiter around in for
the 30 minutes or so while the SSD is dumped, and although Macrium
doesn't have deep hooks into ext filesystems the way it does NTFS (so
no VSS, etc.) it will do incremental/differential backups/restores,
keep a nice list of jobs, can be automated, and so on.
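To make the dd route concrete, here's a sketch - the device names are
assumptions, so check yours with lsblk before running anything, because
dd will quite happily overwrite the wrong disk:

```shell
# Boot from a live USB first so the SSD isn't mounted anywhere.
# Assumptions: /dev/sda is the 120GB SSD, /dev/sdb1 is a partition on
# one of the 1TB backup drives. Verify with: lsblk -o NAME,SIZE,MODEL

sudo mount /dev/sdb1 /mnt                 # mount the backup drive

# Dump the whole SSD - partition table, bootloader and all - to a file,
# compressing on the fly to save space on the backup drive:
sudo dd if=/dev/sda bs=1M status=progress | gzip \
    > /mnt/images/ssd-$(date +%F).img.gz

# Restoring is the same pipe in reverse (this overwrites the SSD!):
gunzip -c /mnt/images/ssd-2017-01-09.img.gz | sudo dd of=/dev/sda bs=1M
```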

Online is more fun, but more complicated to implement and even more
complex to use properly - you'll probably want to experiment with this
quite a lot using the offline method initially, 'cos this is exactly
the sort of thing that will trash your system when you get it wrong.
Base your install on a copy-on-write (COW) filesystem and you can
snapshot your bare-metal system just like a virtual machine. LVM
snapshotting doesn't count here, so you'll want either Btrfs or ZFS -
and by that I really just mean ZFS. There's way too much material to
cover here, so I'd suggest googling "arch zfs" and reading around to
discover more of:

A: What amazing stuff it can do for you
B: How much of a headache it will be to actually configure and use

This is Linux, so inevitably there is a bit of a trade-off to be made
here, but I promise that basing your new Linux system on a proper,
modern COW filesystem will definitely solve your backup needs and then
some. Unless you trash the bootloader or partition table of course, in
which case you'll be back to restoring from an image...
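For flavour, the day-to-day snapshot workflow on ZFS looks roughly like
this - the pool and dataset names here are assumptions, and yours will
depend entirely on how you lay the pool out during install:

```shell
# Assume the root filesystem lives on a dataset called rpool/ROOT/ubuntu.
# Take a named snapshot before doing anything risky:
sudo zfs snapshot rpool/ROOT/ubuntu@before-experiment

# See what snapshots you've accumulated:
zfs list -t snapshot

# Experiment went badly? Roll the dataset straight back:
sudo zfs rollback rpool/ROOT/ubuntu@before-experiment

# Tidy up snapshots you no longer need:
sudo zfs destroy rpool/ROOT/ubuntu@before-experiment
```

Snapshots are near-instant and initially take no space, which is why
this beats the offline image for everyday "about to break something"
insurance - it just won't save you from bootloader-level damage.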

As for maintaining package lists, most distributions have some kind of
software/updates GUI (e.g. YaST2, Synaptic, etc.) that can generate or
restore system states/package lists, or on Ubuntu you can do it with
get/set selections on dpkg/apt, but it's often more trouble than it's
worth - you'll also have to make sure PPAs and signing keys are
transferred, release upgrades will break things, and so on. If you find
yourself repeatedly reinstalling the same selection of programs
(because you're testing new systems a lot) you'd be better off just
scripting it once properly and calling it a day.
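The dpkg get/set dance mentioned above looks roughly like this; on
reasonably modern Ubuntu, apt-mark showmanual is the tidier way to list
only the packages you asked for by hand rather than every dependency:

```shell
# On the old system: record the package selections.
dpkg --get-selections > packages.list
# Or, just the packages you installed explicitly:
apt-mark showmanual > manual-packages.list

# On the fresh install: feed the list back in and let apt act on it.
sudo dpkg --set-selections < packages.list
sudo apt-get dselect-upgrade
```

Remember this only restores *which* packages are installed - PPAs,
their signing keys and your configuration all travel separately.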

Save yourself a LOT of hassle by versioning important parts of your
filesystem (which is arguably all of it - another reason to have a
snapshotting COW filesystem, of course), but at minimum install
etckeeper (sudo apt install etckeeper), which will save you when a
badly behaved package upgrade *doesn't* back up your original /etc
configuration files before replacing them with the new defaults. It's
just generally a good idea to version /etc anyway, especially if you're
doing heavy configuration.
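Once installed, etckeeper hooks into apt and commits /etc automatically
around every package operation; day to day you mostly just commit your
own edits and browse the history. A sketch, assuming the default git
backend (the sshd_config path is only an example):

```shell
sudo apt install etckeeper        # commits the initial state of /etc

# After editing config files by hand, commit the change yourself:
sudo etckeeper commit "tweaked sshd_config"

# Browse the history and see what an upgrade changed, with ordinary git:
cd /etc
sudo git log --oneline
sudo git diff HEAD~1 -- ssh/sshd_config
```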

Ports 135-139 are of course for Microsoft stuff, but you presumably
know that. As for stealthed versus closed - well, it's up to you, I
guess. You might even want them actually open if you're planning on
some legacy SMB action? It's not like you're offering those ports up to
the internet at large anyway. I hope!

Welcome back by the way.

Cheers
-- 
The Mailing List for the Devon & Cornwall LUG
https://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq