
Linux HowTo: Setting up and booting Debian Linux from Soft RAID1

January 16, 2004 - Filed in Linux HowTos by Felix
Setting up a Software RAID1 as root partition for a Linux system has the advance of no reliance on special hardware controllers and reduced running costs. However, the actual process of setting it up can be very time-consuming, annoying and finally a failure. Thus, here a quick break-down of how I got my soft RAID1 array running with Debian.

0. Disclaimer

Make sure you have at least one complete backup of your data in a safe place before following anything in this HOWTO. This HOWTO is provided AS-IS. I take no responsibility or liability for damages of any kind that may arise out of the use of this information. You use it solely at your own risk. If you are not ready to do this, do not continue.

1. Partition the harddrives

The easiest way to set up such an array is by using three hard drives: one holding the actual Debian install, the other two being the target RAID1 array.

Setup

Drive       Capacity   Task
/dev/hda    120 GB     RAID 1, first HD
/dev/hdb    120 GB     RAID 1, second HD
/dev/hdc    80 GB      Install to be transferred to the new RAID array

You can partition the hard drives using cfdisk or fdisk. Make sure to set the partition types to 0xFD (Linux raid autodetect) and not to format them (yet).
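
If you prefer plain fdisk, the following is a rough sketch of the idea; the exact sizes are up to you (hda1 becomes the small /boot partition, hda2 the large root partition):

# Sketch only: interactive fdisk session for /dev/hda, repeat identically for /dev/hdb
fdisk /dev/hda
#   n   -> create a new primary partition (hda1 for /boot, then hda2 for /)
#   t   -> change a partition's type; enter "fd" (Linux raid autodetect, 0xFD)
#   p   -> print the table to double-check
#   w   -> write the partition table and quit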

In the following, this partition/RAID array layout is used as the basis:

Partitions

Partition    RAID Array   RAID Mount point
/dev/hda1    /dev/md1     /boot
/dev/hdb1    /dev/md1     /boot
/dev/hda2    /dev/md0     /
/dev/hdb2    /dev/md0     /

2. Create the two RAID arrays

Create the two RAID arrays with mdadm:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1

The two RAID arrays are now active. You can check this with:

cat /proc/mdstat
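
The output should look roughly like this (device sizes and resync progress omitted; the two [UU] flags mean both mirror halves are up):

Personalities : [raid1]
md1 : active raid1 hdb1[1] hda1[0]
      ... blocks [2/2] [UU]
md0 : active raid1 hdb2[1] hda2[0]
      ... blocks [2/2] [UU]
unused devices: <none>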

Now is the right time to create /etc/mdadm/mdadm.conf

echo 'DEVICE /dev/hda1 /dev/hda2 /dev/hdb1 /dev/hdb2' > mdadm.conf
mdadm --detail --scan >> mdadm.conf

mdadm.conf is later used by mkinitrd to build the special initrd. Make sure md0 and md1 are not swapped. Edit the file as needed, then move it to /etc/mdadm/.
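
For reference, the resulting file should look roughly like this (the UUIDs are system-specific and will differ on your machine):

DEVICE /dev/hda1 /dev/hda2 /dev/hdb1 /dev/hdb2
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=...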

3. Format the RAID arrays

Here, we use ext2 for /boot (/dev/md1) and ext3 for / (/dev/md0).

# Ext3 for the root fs
mkfs.ext3 /dev/md0

# Ext2 for the boot fs
mkfs.ext2 /dev/md1
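
The arrays are already usable while the initial mirror sync is still running. If you want to inspect the state of an array (devices, sync status, UUID), mdadm --detail shows it:

# Show the state of the root array
mdadm --detail /dev/md0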

4. Make a special initrd with RAID support

In order to have RAID support, the md and raid1 modules need to be loaded by the kernel before init runs. To mount the filesystems later, we also need ext2 and ext3. We'll add all of them to /etc/mkinitrd/modules.

Contents of /etc/mkinitrd/modules

# /etc/mkinitrd/modules: Kernel modules to load for initrd.
#
# This file should contain the names of kernel modules and their arguments
# (if any) that are needed to mount the root file system, one per line.
# Comments begin with a `#', and everything on the line after them are ignored.
#
# You must run mkinitrd(8) to effect this change.
#
# Examples:
#
#  ext2
#  wd io=0x300
ext2
ext3
md
raid1

Add other modules as you need them. For now, we've added all modules we need and can create the new initrd.img:

mkinitrd -d /etc/mkinitrd/ -r /dev/md0 -o /boot/initrd.img-2.4.23raid
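
Before rebooting, it doesn't hurt to peek into the freshly built initrd and verify that the mdadm call and the modules actually ended up in it (a quick sketch; see also the notes in section 7):

mount -o loop /boot/initrd.img-2.4.23raid /mnt
grep mdadm /mnt/script      # should contain the mdadm -A line that assembles /dev/md0
find /mnt -name 'raid1*'    # the raid1 module should have been copied in
umount /mnt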

Adapt /etc/lilo.conf

# /etc/lilo.conf
lba32
boot=/dev/hda
root=/dev/hdc2
install=/boot/boot-menu.b
map=/boot/map
delay=20
vga=extended

default=RescueLinux

image=/vmlinuz
        label=RescueLinux
        initrd=/initrd.img
        read-only

image=/vmlinuz
        label=LinuxRAID
        initrd=/boot/initrd.img-2.4.23raid

Install the new boot block and reboot, try LinuxRAID:

lilo
shutdown -r now

If you encounter a kernel panic, you can switch back to RescueLinux and boot as usual. Press "Shift" at the LILO prompt to bring up the menu.

While booting with LinuxRAID, watch the output closely. Prior to the init call, md0 should be detected and added as a RAID array. It is "normal" (well, actually Linux seems broken here, or its behaviour changed) that the "Autodetection" does not yield any results:

md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.

Note: the "autodetection" messages only appear if md and raid1 are compiled into the kernel, which is not the case with the standard Debian "kernel-image" kernels. There is no need to compile in support for them when following this HOWTO, since the compiled-in, stand-alone approach does not seem to work anyway.

5. Copy your old install over to the RAID

Check whether the RAID arrays were enumerated correctly (i.e. md0 and md1 are not swapped) with

cat /proc/mdstat

Then mount the RAID arrays, copy your old system over and cleanly unmount them again:

# Copy /boot to the raid array
mount /dev/md1 /mnt
cp -ax /boot/. /mnt/    # note the /. so that only the contents of /boot are copied
umount /mnt

# Copy / to the raid array
mount /dev/md0 /mnt
cp -ax / /mnt/
umount /mnt
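
A quick way to check that the copy landed where it should: the kernel image and the new initrd must sit directly in the top level of md1, not in a boot/ subdirectory (sketch):

mount /dev/md1 /mnt
ls /mnt     # expect vmlinuz-* and initrd.img-* right here, no extra boot/ directory
umount /mnt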

6. Adapt lilo.conf on RAID array

Now let's mount md0, chroot into it, adjust fstab and lilo.conf, run lilo, leave the chroot, unmount and reboot:

mount /dev/md0 /mnt
chroot /mnt
vi /etc/fstab
mount /boot
vi /etc/lilo.conf
lilo
umount /boot
exit
umount /mnt
shutdown -r now

New contents of /etc/fstab

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/md0        /       ext3    defaults,errors=remount-ro      0 1
/dev/md1        /boot   ext2    defaults        0 2
#/dev/hda3      none    swap    sw      0 0
#/dev/hdb3      none    swap    sw      0 1
/dev/fd0        /floppy auto    rw,user,noauto  0 0
proc    /proc   proc    defaults        0 0

New contents of /etc/lilo.conf

lba32
boot=/dev/hda
disk=/dev/md0
bios=0x80
partition=/dev/md1
root=/dev/md0
install=/boot/boot-menu.b
map=/boot/map
delay=20
vga=extended

default=Linux_OLDRAID

image=/vmlinuz
        label=LinuxRESCUE
        initrd=/initrd.img
        root=/dev/hdc2

image=/vmlinuz
        label=Linux_OLDRAID
        initrd=/boot/initrd.img-2.4.23raid
        read-only

7. Further notes

  • Example of a /etc/raidtab (not needed here, but if you need one)

    # md0 is the root array
    raiddev                 /dev/md0
            raid-level              1
            nr-raid-disks           2
            chunk-size              32
            # Spare disks for hot reconstruction
            nr-spare-disks          0
            persistent-superblock   1
            device                  /dev/hda2
            raid-disk               0
            device                  /dev/hdb2
            raid-disk               1
    
    # md1 is the boot array
    raiddev                 /dev/md1
            raid-level              1
            nr-raid-disks           2
            chunk-size              32
            # Spare disks for hot reconstruction
            nr-spare-disks          0
            persistent-superblock   1
            device                  /dev/hda1
            raid-disk               0
            device                  /dev/hdb1
            raid-disk               1
    
    

  • If you need to zero out the persistent superblock (you may lose data through this!), you can use mdadm --zero-superblock
  • Stop a raid-array with
    mdadm --manage --stop <array>
  • Start a raid-array with
    mdadm --manage --run <array>
  • Get the persistent superblock info
    mdadm -E <device>
  • Assemble and start all raid-arrays with
    mdadm --assemble --scan
  • You can mount an initrd with
    mount -o loop /boot/initrd.img-2.4.23raid /mnt
    .. the important line then is in /mnt/script (added there by mkinitrd) ..
    mdadm -A /devfs/md/0 -R -u 2ebbd037:da9d0664:7294680c:63c60e36 /dev/hda2 /dev/hdb2
  • It is/was actually possible to tell the kernel where to look for RAID arrays and let it assemble them itself. This is done with the flag
    append="md=0,/dev/hda2,/dev/hdb2 md=1,/dev/hda1,/dev/hdb1"
    in lilo.conf. However, at least with kernels 2.4.23 and 2.4.24 the kernel will complain about bugs in md.c and refuse to recognize the RAID arrays. It seems, though, that this worked in older 2.4 revisions.

Last updated: 16.01.2004
