Circumstances finally forced me to bite the bullet and learn something I had been putting off for too long: setting up a RAID array under Linux. I'm almost embarrassed, now that I've done it, that I waited so long because it was way simpler than I'd imagined. I thought I'd have to figure out all kinds of magic with LVM, parted, and mdadm, but I found this tutorial that showed a simple set of mdadm commands to set up RAID 1:
https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-w...
This was actually one of the simplest scenarios: the system had an SSD for the root/boot partition, and two 2 TB hard drives for data storage (/dev/sdb & sdc), so I just needed to set up RAID 1 and not worry about the intricacies of booting from RAID. So, software RAID seemed like the quick & easy way to go with a minimum of fuss.
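For the record, the whole thing boiled down to something like this (quoting from memory, so treat it as the gist rather than a verbatim copy of the tutorial; the mount point and fstab options are just what I picked):
  sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  sudo mkfs.ext4 -F /dev/md0
  sudo mkdir -p /mnt/md0
  sudo mount /dev/md0 /mnt/md0
  sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
  sudo update-initramfs -u    # Debian/Ubuntu; other distros rebuild the initramfs differently
  echo '/dev/md0 /mnt/md0 ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab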
Now the nagging question: is it really this simple, or does the tutorial above oversimplify and omit some important steps? Can someone with ample RAID and mdadm experience advise or provide tips on anything else I should do or look out for?
Thanks, Gilles
I think the tutorial is complete enough. The only thing I'd do differently is create appropriate partition tables on the raw disks (GPT if the disks are >2TB or if you need to use GPT for other reasons, but the older MS-DOS partition tables would be fine otherwise), set up one partition on each drive and tag them with the applicable partition type to indicate they're MD RAID partitions, and use the device names for the partitions rather than the raw drives in the mdadm commands. While this isn't necessary, I think it would help in post-mortem recovery, and in keeping your sanity when you're (or someone else is) trying to figure out what you did a few years later.
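For example, with sgdisk (the scriptable sibling of gdisk) the partitioning is only a few commands; fd00 is the GPT type code for Linux RAID, and the device names below are just taken from your description:
  sudo sgdisk --zap-all /dev/sdb               # wipes any existing partition table (destructive!)
  sudo sgdisk -n 1:0:0 -t 1:fd00 /dev/sdb      # one partition spanning the disk, typed Linux RAID
  sudo sgdisk --zap-all /dev/sdc
  sudo sgdisk -n 1:0:0 -t 1:fd00 /dev/sdc
  # then build the mirror on the partitions, not the raw drives
  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1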
LVM has some nice features to offer, e.g. if you anticipate wanting to add more capacity to this file system in the future, or you want to split a large array into multiple file systems. But for simple use cases, I wouldn't bother. If you do use LVM, don't use its RAID features; use LVM over top of MD.
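(To give you an idea: if you do go the LVM-over-MD route, growing a file system later is just a couple of commands. The volume group and LV names here are made up:)
  sudo lvextend -l +100%FREE /dev/myvg/mylv    # grow the LV into whatever free space the VG has
  sudo resize2fs /dev/myvg/mylv                # grow the ext4 file system to fill the LV, online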
If you go with GPT partition format, and don't want to deal with the arcane syntax of parted commands, there are alternatives: gparted for the full-GUI, Partition-Magic-like experience, or gdisk for a simple fdisk-like, retro, text-menu-based interface.
Gilbert
Here are my notes from the last time I built a Linux RAID with LVM. This was on Ubuntu 16.04 LTS.
I think my approach was slightly different: the LVM volumes are created on top of the RAID device.
1. Create partitions of type Linux RAID Autodetect on both disks:
   fdisk -l /dev/sdb
   fdisk -l /dev/sdc
2. Create a RAID array called md0 using mdadm:
   sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
3. Add the md0 RAID device to the LVM pool:
   sudo pvcreate /dev/md0
4. Create a volume group called datavg:
   sudo vgcreate datavg /dev/md0
5. Create a logical volume called datalv within the volume group:
   sudo lvcreate --name datalv --size 1.8T datavg
6. Format the newly created logical volume:
   sudo mkfs.ext4 /dev/datavg/datalv
7. Move the home files to a temporary location, create a new home and mount the newly formatted device there, then copy the original home files to the new device:
   sudo mv home home.orig ; sudo mkdir home ; sudo chmod 777 home ; sudo mount /dev/mapper/datavg-datalv /home ; sudo cp -a /home.orig/* home
8. Edit fstab for startup config for this disk.
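For step 8, the fstab line would be something like this (the options are just what I'd normally use):
  /dev/mapper/datavg-datalv  /home  ext4  defaults  0  2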
Thanks, Scott.
One of the steps in the tutorial is to save the MD RAID configuration in /etc/mdadm/mdadm.conf. They suggest using "sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf", which I did. In your approach, that step isn't done. Is this a detail that pvcreate looks after for you, either by adding to that file itself, or saving the setup elsewhere? This was the only part of DigitalOcean's procedure that I found to be a bit kludgy. I was surprised that there wasn't something right in mdadm to manage the saving of the configuration more automatically. Adding the fstab entry was as I'd expect for any file system type. Other than that, things were pretty plug-and-play, with no messing around with systemctl or anything like that required. After a reboot, the RAID array was back in action just as it should be.
On 04/10/2018 1:23 PM, Gilles Detillieux wrote:
Thanks, Scott.
One of the steps in the tutorial is to save the MD RAID configuration in /etc/mdadm/mdadm.conf. They suggest using "sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf", which I did. In your approach, that step isn't done. Is this a detail that pvcreate looks after for you, either by adding to that file itself, or saving the setup elsewhere?
No, the LVM commands will not affect MD configuration at all. Strictly speaking, the mdadm.conf file (location may vary, depending on distro) isn't necessary. Without it, the MD arrays will still be detected and assembled at boot time, but you may get different device names assigned to them (e.g. /dev/md127, instead of /dev/md0).
If you want consistent device names, it's best to have the mdadm.conf file. (If you're going to use UUID's or logical volume names to refer to your devices, then the actual assigned md device name doesn't matter.)
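For example, the line that "mdadm --detail --scan" spits out looks roughly like this (UUID and hostname invented for illustration):
  ARRAY /dev/md0 metadata=1.2 name=myhost:0 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6
And if you mount by the file system's own UUID in fstab, the md device name stops mattering entirely (mount point here is just a placeholder; get the real UUID from "sudo blkid /dev/md0"):
  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext4  defaults,nofail  0  2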
Gilbert
In my case it stayed as md0 but in theory I think it could assign anything, as you say.
Without partitions, you'll find that you need an mdadm.conf file to instruct mdadm that there's an array there that needs to be started. Normally the Linux kernel looks for the magic "auto detect raid" partition type. -Adam
On 2018-10-04 Gilbert E. Detillieux wrote:
No, the LVM commands will not affect MD configuration at all. Strictly speaking, the mdadm.conf file (location may vary, depending on distro) isn't necessary. Without it, the MD arrays will still be detected and assembled at boot time, but you may get different device names assigned to them (e.g. /dev/md127, instead of /dev/md0).
Not anymore: even without an mdadm.conf you can still get devices named as you want them (/dev/md0, etc.). You can set it at creation time or after the fact somehow, though how to do it completely eludes my memory at this moment. But I know for a fact I'm right: my box has no mdadm.conf and I have md1, md2, md3, md4 and md127 -- 1-4 were chosen, 127 is after I forgot how. :-)
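(Thinking about it a bit more, I believe it's the --name / --homehost stuff stored in the 1.x superblock, something like the following at create time, but check man mdadm before trusting my memory. The hostname below is just a placeholder:)
  # the superblock records a name like "myhost:0"; if the host part matches this
  # machine's homehost, the array should come up as md0 at boot instead of md127
  sudo mdadm --create /dev/md0 --name=0 --homehost=myhost --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1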
On 2018-10-05 Adam Thompson wrote:
Without partitions, you'll find that you need an mdadm.conf file to instruct mdadm that there's an array there that needs to be started. Normally the Linux kernel looks for the magic "auto detect raid" partition type. -Adam
But you still don't need to specify the raid arrays (which is redundant and easy to get out of sync with your changing hardware as disks die/get replaced/arrays change), just the constituent devices:
DEVICE partitions
DEVICE /dev/sd[ab]
for example (you could use the by-name or by-uuid paths too if you prefer). You can even just put in all your drives, or /dev/sd[a-z] and let the kernel sort it out, it'll just ignore the ones without raid superblocks.
As mentioned: I strongly recommend using partitions as your basis, not raw drives, because then it's easier to manipulate things in the future, boot from them, add an MBR, skip mdadm.conf entirely, etc. Not critical, but handy.
If you are using partitions, mark each one as type fd (fd00 in gdisk), i.e. Linux RAID.
As Gilbert said, gdisk is pretty good. And the newest versions of fdisk support GPT also. I just redid some of my RAID1's with GPT (while online! no reboots! no USB boots!) instead of DOS partition tables. Worked out great. gdisk gives extra options you may need for boot partitions. Always leave a 32M or so partition you can use as a "BIOS boot partition" (YMMV with EFI).
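For example, a layout along these lines is what I have in mind (sgdisk syntax; ef02 is the BIOS boot partition type, fd00 is Linux RAID; device and sizes are just an example):
  sudo sgdisk --zap-all /dev/sda
  sudo sgdisk -n 1:0:+32M -t 1:ef02 /dev/sda   # "BIOS boot partition" for grub2's core image
  sudo sgdisk -n 2:0:0    -t 2:fd00 /dev/sda   # the rest of the disk for md raid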
My favorite thing about GPT, even if you aren't >2TB, is you can have gobs of partitions without messing with DOS "extended partitions" which are such a drag. Grub2 fully supports GPT, and from what I can tell, even ancient boxes will boot from GPT with MBR chaining to the "BIOS boot partition".
Lastly, Linux md raid is the best thing ever. I would almost never use any hardware RAID (or other software raid like IMSM) if I can help it. Your chances of recovering from big disasters are massively increased with all the leeway and options mdadm gives you. H/w RAID is such a black box that if things don't go according to plan, you're usually SOL.
Very last lastly: go ahead and boot off of RAID1. You don't need anything special for RAID1, it just works. Just put the grub-install MBR on both disks (again, assuming non-EFI) in case one dies. Make partitions for at least your boot/root/swap and make RAID1 on top of each. It's literally as easy as what you've already done. I'll also mention I use RAID even on SSDs as they fail too, usually in horrific ways.
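Once the mirrors are built it's basically just (assuming BIOS booting and sda/sdb as the pair):
  sudo grub-install /dev/sda   # boot code on the first disk
  sudo grub-install /dev/sdb   # ...and on the second, so either disk can boot alone
  cat /proc/mdstat             # sanity check: healthy RAID1 members show [UU]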
Thanks, everyone, for all the good advice. I decided it was worth the effort to rebuild the RAID array using properly typed partitions. gdisk was instantly familiar, having used Linux's older fdisk more times than I care to remember.
On 10/06/2018 05:53 AM, Trevor Cordes wrote:
As mentioned: I strongly recommend using partitions as your basis, not raw drives [...] As Gilbert said, gdisk is pretty good. [...]
Good point. I didn't do that step, but I found when I reboot the machine it sorts everything out on its own anyway. For safety, it is probably better to include that step.
On 18-10-04 01:23 PM, Gilles Detillieux wrote:
Thanks, Scott.
One of the steps in the tutorial is to save the MD RAID configuration in /etc/mdadm/mdadm.conf. They suggest using "sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf", which I did. In your approach, that step isn't done. [...]
Thanks, bro! I've already started copying data onto the RAID array, so I don't think the lack of partitions is a big enough issue to warrant rebuilding. I intend to document my steps anyway to preserve my future sanity. :-)
Good to know I was on the right track in using mdadm rather than LVM, and I'll keep it in mind that I can use LVM over MD in the future. In this case we only wanted one big (redundant) filesystem anyway.
I'm actually setting this up remotely through a VPN on a system in Miami, so I preferred to stick with CLI tools as much as possible. I had thought of setting up a single partition on each disk using parted, as some online tips suggested, but I figured why bother if the DigitalOcean tutorial didn't mention the need to do that. I probably will add in that step the next time I set up a similar array. I hadn't heard of gdisk before, so I'll check that out.
On 10/04/2018 09:42 AM, Gilbert E. Detillieux wrote:
I think the tutorial is complete enough. The only thing I'd do differently is create appropriate partition tables on the raw disks [...] and use the device names for the partitions rather than the raw drives in the mdadm commands. [...]