Re: [LUG] OT: Reading single RAID 1 HD from failed array.


 

No need to apologise Simon, I'm grateful for the advice you gave. I now have the small problem that the NAS will only recognise one of the new HDs. Once that's sorted I'll be able to mount the old volume in the caddy and just copy everything across.

Julian

On 08/06/17 19:31, Simon Avery wrote:
Julian,

Sorry I wasn't around to help, but you've clearly managed fine without me. Well done, it must be a relief. Hope it all transfers off OK.

S

On 8 June 2017 at 18:08, Julian Hall <linux@xxxxxxxxxxxx> wrote:
Sorry all, me again. Typically, after hours of strife and then calling for help, I found out how to do it.

In short I had to install mdadm and lvm2 (both already done) and then:

sudo apt-get install mdadm [already done]
sudo mdadm --assemble --scan
sudo apt-get install lvm2 [already done]
sudo vgscan --mknodes

Now [this was for a Live Ubuntu session] run Disk Utility, select "vg1" under the "Multi-disk Devices" option, and select "Start Volume Group". You should now be able to see the new device in the list - "1.41.3-1167" in my case.
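[For anyone finding this in the archive: the steps above can be sketched as a script. The commands are echoed rather than executed so it is safe to run anywhere; drop the leading "echo sudo" to run them for real. The volume group name "vg1" is what my NAS used and may differ on other setups, and "vgchange -ay" is, as far as I can tell, roughly what the Disk Utility "Start Volume Group" button does.]

```shell
recover_steps() {
  # Reassemble any software RAID arrays found on attached disks
  echo sudo mdadm --assemble --scan
  # Scan for LVM volume groups and create the missing device nodes
  echo sudo vgscan --mknodes
  # Activate the volume group so its logical volumes appear under /dev
  echo sudo vgchange -ay vg1
}
recover_steps
```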

Using a normal Mint 18 session I didn't need to use the Disk Utility; I just opened the Computer and then found the device which was immediately accessible. Now to find something to copy it all onto :)

Julian


On 08/06/17 16:07, Julian Hall wrote:
Sorry, that should be /dev/sdc for the real device. /dev/sdc3 is where the data is.

On 08/06/17 16:02, Julian Hall wrote:
Hi Simon,

I took the working drive out of the NAS and put it in a USB caddy. Since then I have been trying for a few hours to get it to mount in Linux, and I keep going round in circles. From what I can make out, Linux has created a software RAID device, /dev/md2, which is busy, and mapped the real device to it; stopping it does nothing helpful. The drive itself is - according to GParted - /dev/sd3. As it is a RAID 1 volume with all the data intact (I could see it all in Synology's Disk Station Manager (DSM) software), what would happen if I just took the RAID flag off in GParted?

Kind regards,

Julian

On 04/06/17 20:37, Simon Avery wrote:
Hi again,

You could try mounting the non-dead drive on your Linux machine - it could work, and shouldn't break anything that isn't already broken. (Mount it read-only to be sure.)
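[Archive note: a sketch of that read-only mount. The device name /dev/sdc3 and mountpoint /mnt/nas are assumptions - substitute your own. The command is echoed rather than executed so the sketch is safe to run anywhere.]

```shell
ro_mount_cmd() {
  dev="${1:-/dev/sdc3}"   # assumed data partition; adjust to yours
  mnt="${2:-/mnt/nas}"    # assumed mountpoint; create it first with mkdir -p
  # -o ro mounts read-only, so nothing can be written to the surviving disk
  echo sudo mount -o ro "$dev" "$mnt"
}
ro_mount_cmd
```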

But I don't know what crazy tricks Synology do with their software - so it's not really a Linux issue, even if their boxes run stock Linux without any mods, which seems unlikely.

Good luck with it. Hope you had a backup. (Repeat the mantra: RAID is not a backup...)

On 4 June 2017 at 19:44, Julian Hall <linux@xxxxxxxxxxxx> wrote:
Hi Simon,

I understand that's how it's /supposed/ to work, and indeed the last time a single disk failed in the same NAS that is exactly what happened. However, this time I was able to access the surviving disk once only, and not since. The Synology software says the array is 'degraded'. I tried repairing it as per the Synology support pages and it /said/ both disks were now normal, but it still refused to mount the volume. Then when I restarted the NAS the troubled disk (no. 2) disappeared, and now only no. 1 displays.

That's when I emailed for help. I suspect I will have to wait until the new HD on order arrives and see what happens then.

Kind regards,

Julian


On 04/06/17 18:41, Simon Avery wrote:
Hi Julian,

The whole point of RAID 1 is that work continues as normal through a single HDD failure.

A Synology device should continue working with one dead drive, if there are two and it's RAID 1. Read their support pages.

A lot depends on the software used to make the RAID. mdadm is the most common Linux software, and each drive can be mounted separately as a straight device, but hardware RAID and fakeraid (BIOS software RAID) can be wildly different, and it's hard to know without more details. Not all RAID setups are equal, y'see.
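[Archive note: one way to check which flavour of RAID a drive belongs to before touching it is to inspect the member partition directly. /dev/sdc3 is an assumption here; the commands are echoed rather than executed so the sketch is safe to run anywhere.]

```shell
examine_cmds() {
  dev="${1:-/dev/sdc3}"   # assumed member partition; adjust to yours
  # --examine reads the mdadm superblock on the member itself and prints
  # the RAID level, array UUID, and member state
  echo sudo mdadm --examine "$dev"
  # blkid shows whether the kernel sees it as a linux_raid_member
  echo sudo blkid "$dev"
}
examine_cmds
```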

HTH

On 3 June 2017 at 23:05, Julian Hall <linux@xxxxxxxxxxxx> wrote:
Hi All,

One of the two HDs in my Synology DS212j (RAID 1 array) has packed in, and the last time this happened I was still able to read the data on the surviving disk. This time I was able to access it /once/ but not since - it refuses to mount at all. A friend mentioned he was able to read from a single RAID 1 disk in Linux after a crash, but couldn't remember how; we've searched the forums all afternoon without success.

Any advice gratefully received.

Kind regards,

Julian

--
The Mailing List for the Devon & Cornwall LUG
https://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq




