So I'm attempting to virtualize one of our Red Hat 5 boxes using VMware's P2V functionality. It copied all the data into the VM without any trouble, but now I can't get the virtual machine to boot, because it isn't detecting the virtualized partitions correctly. The physical system was using LVM on top of dm-raid on local disks. When VMware virtualized the disks it created a single disk and copied the LVM layout onto it, so I think what's happening is that on boot the system looks for the RAID disks and can't find them. When I try to boot it, it says it can't find the logical volumes and goes into a kernel panic.
I've booted with a rescue disk, and when I run a pvscan on the new virtual disk all the volumes are there and can be mounted, so the problem must be that the system isn't looking for the volumes in the right place. I've tried changing the root device to /dev/sda in the device.map config file, and I've rebuilt the initrd with both the --force-lvm-probe and --omit-raid-modules options, but it still loads dm-raid on boot.
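For reference, the rebuild steps I'm running from the rescue environment look roughly like this (mount points and the VG name are from my setup, so treat them as examples):

    # activate and mount the copied LVM volumes
    pvscan
    vgchange -ay vg00
    mount /dev/vg00/lvol01 /mnt/sysimage
    mount /dev/sda1 /mnt/sysimage/boot      # assuming /boot is the first partition

    # chroot into the copied system and rebuild the initrd without the RAID modules
    # (you may also need to bind-mount /proc, /sys and /dev into the chroot)
    chroot /mnt/sysimage
    KVER=$(ls /lib/modules)                 # the installed kernel version (assumes a single kernel), not the rescue kernel's
    mkinitrd --force-lvm-probe --omit-raid-modules -f /boot/initrd-$KVER.img $KVER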
So how do I get the system to completely forget about RAID, and boot off the new virtual disk? Is there a RAID config somewhere that I'm missing?
-- Wyatt Zacharias
What's the partition type set to? 0xFD or 0x8E, by any chance? It should probably be 0x83 for /boot and 0x8E for the PV. Otherwise, start from scratch instead of trying to "fix" it and you'll find the problem faster.
VM boots? Yes.
Loads boot sector? Yes.
Loads GRUB? Yes.
Loads kernel? Yes.
Loads initrd? Maybe... probably.
Can pivot_root? Maybe... probably not.
Can achieve multi-user runlevel? No.
So somewhere between loading initrd (which probably worked) and pivot'ing root (probably) lies your problem. Extract your initrd and trace through the behaviour one line at a time.
Better yet, examine what modules are baked into the original initrd (from the original physical server) and compare with the P2V'd list of modules.
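Something along these lines will let you look inside an image (the initrd path is just an example; do it for both the original and the P2V'd one and diff the results):

    mkdir /tmp/initrd && cd /tmp/initrd
    zcat /boot/initrd-<kernel-version>.img | cpio -idmv   # RHEL 5 initrds are gzip'd cpio archives
    less init                                             # the nash script the kernel runs at boot
    ls lib/*.ko                                           # the modules baked into the image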
The loading of dm_raid is harmless; your problem is somewhere in the LVM modules or LVM detection.
-Adam
So here's the last section of the boot log before it panics:

    Loading dm-mem-cache.ko module
    Loading dm-region_hash.ko module
    Loading dm-message.ko module
    Loading dm-raid45.ko module
    device-mapper: dm-raid45: initialized v0.25941
    Waiting for driver initialization.
    Scanning and configuring dmraid supported devices
    Trying to resume from /dev/vg00/lvol00
    Unable to access resume device (/dev/vg00/lvol00)
    Creating root device.
    Mounting root filesystem.
    mount: could not find filesystem '/dev/root'
    Setting up other filesystems.
    Setting up a new root fs
    setuproot: moving /dev failed: No such file or directory
    no fstab.sys, mounting internal defaults
    setuproot: error mounting /proc: No such file or directory
    setuproot: error mounting /sys: No such file or directory
    Switching to new root and running init.
    unmounting old /dev
    unmounting old /proc
    unmounting old /sys
    switchroot: mount failed: No such file or directory
    Kernel panic - not syncing: Attempted to kill init!

I've also tried booting with 'noresume' in the kernel options, and that gets rid of the resume device error, so it looks like something is going wrong at the mkrootdev step. Is there a way to debug what device-mapper is doing, to tell whether it's picking up the logical volumes correctly? Here's the relevant section of the init script from inside the initrd:
echo "Loading dm-raid45.ko module" stabilized --hash --interval 1000 /proc/scsi/scsi mkblkdevs echo Scanning and configuring dmraid supported devices resume /dev/vg00/lvol00 echo Creating root device. mkrootdev -t ext4 -o defaults,ro /dev/vg00/lvol01 echo Mounting root filesystem. mount /sysroot echo Setting up other filesystems. setuproot echo Switching to new root and running init. switchroot
-- Wyatt Zacharias
On 2015-11-30 Wyatt Zacharias wrote:
--force-lvm-probe and --omit-raid-modules and it still loads dm-raid on boot.
dm-raid?? Not md-raid? What sort of RAID was on this old box? Can you still boot the old physical box itself? What does cat /proc/mdstat say? Is there an mdadm.conf? Or was this back in the raidtools days?
Adam is right: we need an fdisk -l /dev/sdX from the original box for each drive, too. If the partition type is 0xFD, then changing that might help.
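e.g. (the disk name is just an example):

    fdisk -l /dev/sda
    # look at the Id column: fd = Linux raid autodetect, 8e = Linux LVM, 83 = plain Linux
    # to change it on the copied VM disk, fdisk's interactive 't' command does it:
    #   fdisk /dev/sda  ->  t  ->  pick the partition  ->  8e (or 83 for /boot)  ->  w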
So how do I get the system to completely forget about RAID, and boot off the new virtual disk? Is there a RAID config somewhere that I'm missing?
I assume you mean this box was RAID1, right? Apart from the partition type, there's not much that stops you from using one disk of a RAID1 pair as just a normal disk. Adam's right: you don't need to try to get rid of every md/dm reference in the initrd.
You could also try mdadm's --zero-superblock option (see the man page). That, plus getting rid of the 0xFD type, means the single disk is now for sure just a single disk. The ancient box might not have that option in its mdadm, but booting off an SRC (or similar) rescue CD will let you do it. DON'T DO THIS ON THE ORIGINAL BOX unless you have some good full-disk backups.
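Roughly (the partition name is just an example, and again, only on the copied VM disk):

    mdadm --examine /dev/sda2          # shows whether there's an md superblock on the partition
    mdadm --zero-superblock /dev/sda2  # wipes it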
Oh ya, send us a cat of /etc/fstab. It probably points root at the md/dm device, or at a label or UUID. You'll need to change that to the single disk; UUID is easiest, and blkid will get it for you.
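e.g. (the partition name is just an example):

    blkid /dev/sda1    # prints something like UUID="..." TYPE="ext3"
    # then in /etc/fstab, replace the old md/dm device with UUID=<that value>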
Oh ya #2, you also need to check your GRUB config (probably grub1?) and see what it says your root is, and change that just like you did for fstab. You'll need to reinstall GRUB on the MBR after that too. I forget how to do that in grub1! Probably just grub-install /dev/sdX.
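Something like this (paths assume a standard RHEL 5 layout, and the disk name is an example):

    grep -E '(root|kernel)' /boot/grub/grub.conf   # see what root= gets passed on the kernel line
    # fix root= the same way as fstab if it still references the old device, then,
    # after chrooting into the VM's root from the rescue CD:
    grub-install /dev/sda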
Too bad the ancient RHEL5 doesn't have dracut (right?) and its debug shell. Quite handy (if crappy) for these situations. I've spent WAY too many hours fighting with this kind of stuff, and learned waaay more than I wanted to know, but I'm usually going the opposite way, from non-RAID -> RAID.
If you get stuck and can get me a root ssh into the box (or rescue boot) I can probably solve your problem in ten minutes! :-) I also consult, and can come onsite for a modest fee, though you already knew that! :-) ;-)