
Re: Auto-assembled SW RAID in rescue mode




On 11/05/2018 at 13:34, Niclas Arndt wrote:
> At boot, I was prompted with a BIOS message saying that there was no boot device.

> No, a BIOS upgrade doesn't modify fstab. I believe that EFI has anti-tampering mechanisms that might have been triggered by the BIOS upgrade. (At least it's currently my best guess. It is in line with the fact that a clean install after this BIOS upgrade has no problem.)

I suspect that the UEFI firmware upgrade deleted the EFI boot entries, and running grub-install just re-created them. I have found EFI boot entries to be unreliable, so to avoid such breakage I install a copy of GRUB in the EFI removable media path as a fallback bootloader for the case where no valid EFI boot entry remains, with:

grub-install --removable
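
As a sketch of what this fallback looks like on disk — assuming an amd64 machine with the ESP mounted at /boot/efi (both assumptions, not something stated above):

```shell
# Standard removable-media fallback path on amd64 (assumed ESP mount point).
ESP=/boot/efi
FALLBACK="$ESP/EFI/BOOT/BOOTX64.EFI"
echo "grub-install --removable places a copy of GRUB at: $FALLBACK"
# As root, 'efibootmgr -v' lists the current EFI boot entries, and
# 'grub-install --removable' (re)creates the fallback copy shown above.
```

The firmware tries this path when no boot entry points at a valid loader, which is why the copy survives the loss of the EFI variables.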

> However, Debian's installer rescue mode does indeed contain functionality that can modify fstab for you:

> When entering rescue mode, you are prompted to "Select the partitions to assemble into a RAID array. If you select 'Automatic', then all devices containing RAID physical volumes will be scanned and assembled. Note that a RAID partition at the end of a disk may sometimes cause that disk to be mistakenly detected as containing a RAID physical volume."

Indeed that could happen with the obsolete superblock version 0.90 which was located at the end of the RAID partition. It cannot happen with the current default superblock version 1.2 which is located near the start of the RAID partition. I doubt it could happen even with the non-default superblock version 1.0 which is also located at the end of the RAID partition, because it contains a super_offset field which stores the superblock offset relative to the start of the partition and would mismatch if taken relative to the start of the whole disk.
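
To make the super_offset argument concrete, here is a toy calculation with made-up sizes (the ~8 KiB distance from the end of the device matches the v1.0 layout; the disk and partition geometry are invented for illustration):

```shell
# Toy numbers (assumptions, not real geometry); sizes in 512-byte sectors.
part_start=2048          # partition begins 1 MiB into the disk
part_size=204800         # 100 MiB partition
disk_size=208896         # 102 MiB disk, so the partition ends before the disk does
# v1.0 places its superblock roughly 8 KiB (16 sectors) before the end of
# the device, and records that position in super_offset:
super_offset=$((part_size - 16))                  # value stored in the superblock
seen_from_disk=$((part_start + part_size - 16))   # where a whole-disk scan finds it
echo "stored super_offset: $super_offset"
echo "offset if the whole disk were the device: $seen_from_disk"
# The two values differ, so a scan of the whole disk should reject it
# as a RAID member even though the superblock sits near the disk's end.
```

`mdadm --examine` on the partition would show the stored super_offset; the mismatch above is exactly the check that should prevent the false detection.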

> Either way, at least the menu option for graphical rescue mode lets you navigate to a listing of possible operations that correspond to the regular installation sequence. One of these operations is partitioning. Manual partitioning presents a correct listing of my RAID volumes, although as "do not use" (and hence without a mount point). When assigning and writing "ext4" with mount point / for /dev/md0, swap for /dev/md1, and "ext4" with mount point /storage for /dev/md2, I could finally write a new GRUB. I suspect that it is this step that somehow transmogrified my single-partition /dev/md2 (up to this point referred to as /dev/md2) into a /dev/md2 RAID gpt volume with a partition now referred to as /dev/md2p1.

Using the installer partitioning tool while in rescue mode, especially assigning mount points, is a very bad idea IMO. I suspect that it may actually mount the filesystems and write to the /etc/fstab file on the filesystem assigned to / (actually /target).

However, IIRC, the partitioning tool cannot create a partition table in a RAID array, it can only use one which already exists and create partitions in it.

Could you post the output of:

blkid /dev/md2*
file -sk /dev/md2
wipefs /dev/md2 # don't worry, without option it does not wipe anything