Thanks, everyone, for all the good advice. I decided it was worth the effort to rebuild the RAID array using properly typed partitions. gdisk felt instantly familiar, since I've used Linux's older fdisk more times than I care to remember.
On 10/06/2018 05:53 AM, Trevor Cordes wrote:
On 2018-10-04 Gilbert E. Detillieux wrote:
No, the LVM commands will not affect MD configuration at all. Strictly speaking, the mdadm.conf file (location may vary, depending on distro) isn't necessary. Without it, the MD arrays will still be detected and assembled at boot time, but you may get different device names assigned to them (e.g. /dev/md127, instead of /dev/md0).
Not anymore: even without an mdadm.conf you can still get devices named the way you want (/dev/md0, etc.). You can set the name at creation time or after the fact, though exactly how eludes my memory at the moment. But I know for a fact it works: my box has no mdadm.conf and I have md1, md2, md3, md4 and md127 -- 1-4 were chosen, 127 is from after I forgot how. :-)
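For what it's worth, here's a sketch of doing it at creation time; the --name/--homehost pairing is what lets mdadm assemble the array under your chosen number without an mdadm.conf (the hostname and device names here are just examples):

    # create a RAID1 whose superblock name is "0"; when the recorded
    # homehost matches the running system, mdadm assembles it as
    # /dev/md0 instead of falling back to /dev/md127
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --name=0 --homehost=mybox /dev/sda1 /dev/sdb1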
On 2018-10-05 Adam Thompson wrote:
Without partitions, you'll find that you need an mdadm.conf file to instruct mdadm that there's an array there that needs to be started. Normally the Linux kernel looks for the magic "auto detect raid" partition type. -Adam
But you still don't need to specify the RAID arrays themselves (which is redundant and easy to get out of sync with your changing hardware as disks die, get replaced, or arrays change), just the constituent devices:
    DEVICE partitions
    DEVICE /dev/sd[ab]
for example (you could use the by-name or by-uuid paths too if you prefer). You can even just list all your drives, or /dev/sd[a-z], and let the kernel sort it out; it'll just ignore the ones without RAID superblocks.
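If you want to confirm what would be found that way, mdadm can report the arrays it detects from the on-disk superblocks alone:

    # scan devices and print the arrays mdadm detects from
    # their superblocks, no config file required
    mdadm --examine --scan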
As mentioned: I strongly recommend using partitions as your basis, not raw drives, because then it's easier to manipulate things in the future, boot from them, add an MBR, skip mdadm.conf entirely, etc. Not critical, but handy.
If you are using partitions, you mark each one as type fd (Linux RAID autodetect) in fdisk, or the equivalent fd00 in gdisk.
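A quick sketch of tagging a member partition, assuming /dev/sda1 is the one in question (sgdisk is gdisk's scriptable cousin):

    # MBR table: in fdisk, press t, then enter type fd
    # GPT table: set the same thing non-interactively with sgdisk
    sgdisk --typecode=1:fd00 /dev/sda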
As Gilbert said, gdisk is pretty good. And the newest versions of fdisk support GPT too. I just redid some of my RAID1s with GPT (while online! no reboots! no USB boots!) instead of DOS partition tables. Worked out great. gdisk gives extra options you may need for boot partitions. Always leave a 32M or so partition you can use as a "BIOS boot partition" (YMMV with EFI).
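If it helps, here's one way to carve out that BIOS boot partition with sgdisk; the disk and partition number are just examples:

    # a ~32M partition typed ef02 ("BIOS boot partition") gives
    # grub2 somewhere to embed its core image on a GPT disk
    sgdisk --new=1:0:+32M --typecode=1:ef02 /dev/sda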
My favorite thing about GPT, even if your disks aren't >2TB, is you can have gobs of partitions without messing with DOS "extended partitions", which are such a drag. Grub2 fully supports GPT, and from what I can tell, even ancient boxes will boot from GPT with MBR chaining to the "BIOS boot partition".
Lastly, Linux md RAID is the best thing ever. I would almost never use any hardware RAID (or other software RAID like IMSM) if I can help it. Your chances of recovering from big disasters are massively increased with all the leeway and options mdadm gives you. H/w RAID is such a black box that if things don't go according to plan, you're usually SOL.
Very last lastly: go ahead and boot off of RAID1. You don't need anything special for RAID1; it just works. Just put the grub-install MBR on both disks (again, assuming non-EFI) in case one dies. Make partitions for at least your boot/root/swap and make RAID1 on top of each. It's literally as easy as what you've already done. I'll also mention I use RAID even on SSDs, as they fail too, usually in horrific ways.
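As a sketch, assuming BIOS/MBR boot with two member disks sda and sdb, that just means:

    # put grub's MBR boot code on both members so either
    # disk can still boot the box if the other dies
    grub-install /dev/sda
    grub-install /dev/sdb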