Introduction
As a hosting services provider, we needed a solution for automatically backing up our clients’ data to storage external to the server farm. So, in order to set up a backup server at a different geo-location with a decent standard of redundancy, we needed a RAID of some kind.
We’ve had a card lying around here for some time now, and I decided to build a 2 TB RAID with it (it’s a software RAID backup).
In our case it’s two Seagate 1 TB disks, but it can be any sort of disks. Keep them identical, though.
This backup solution involves a shared SSH key between the backup unit and the hosting servers, so the rsync jobs can run unattended.
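A minimal sketch of that key exchange, run on the backup unit (this assumes the backups run as root; <MYSERVERIP> is a placeholder, as elsewhere in this post):
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# ssh-copy-id root@<MYSERVERIP>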
System details
openSUSE 11.4
2 x Seagate 1 TB disks (ST31000333AS)
Note: These disks will be wiped of any data on them.
SATA RAID PCI card
Procedure
First, verify that everything is recognized in dmesg, namely the RAID card and the hard drives.
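One quick way to pull the relevant lines out of the log (the pattern matches this card’s sata_sil driver and the disks; adjust it for other chipsets):
# dmesg | grep -iE 'sata|scsi'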
Here are the RAID lines from dmesg:
sata_sil 0000:00:0a.0: version 2.3
sata_sil 0000:00:0a.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
scsi2 : sata_sil
scsi3 : sata_sil
ata3: SATA max UDMA/100 mmio m512@0xeb000000 tf 0xeb000080 irq 18
ata4: SATA max UDMA/100 mmio m512@0xeb000000 tf 0xeb0000c0 irq 18
Here are the hard drives lines:
ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
ata3.00: ATA-8: ST31000333AS, SD35, max UDMA/133
ata3.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 0/32)
...
ata3.00: configured for UDMA/100
ata4: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
ata4.00: ATA-8: ST31000333AS, SD35, max UDMA/133
ata4.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 0/32)
gameport: NS558 PnP Gameport is pnp00:0f/gameport0, io 0x201, speed 1011kHz
ata4.00: configured for UDMA/100
scsi 2:0:0:0: Direct-Access ATA ST31000333AS SD35 PQ: 0 ANSI: 5
sd 2:0:0:0: [sdb] 1953525168 512-byte hardware sectors: (1000GB/931GiB)
sd 2:0:0:0: [sdb] Write Protect is off
sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:0:0: [sdb] 1953525168 512-byte hardware sectors: (1000GB/931GiB)
sd 2:0:0:0: [sdb] Write Protect is off
sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: unknown partition table
sd 2:0:0:0: [sdb] Attached SCSI disk
sd 2:0:0:0: Attached scsi generic sg2 type 0
scsi 3:0:0:0: Direct-Access ATA ST31000333AS SD35 PQ: 0 ANSI: 5
sd 3:0:0:0: [sdc] 1953525168 512-byte hardware sectors: (1000GB/931GiB)
sd 3:0:0:0: [sdc] Write Protect is off
sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdc] 1953525168 512-byte hardware sectors: (1000GB/931GiB)
sd 3:0:0:0: [sdc] Write Protect is off
sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: unknown partition table
sd 3:0:0:0: [sdc] Attached SCSI disk
sd 3:0:0:0: Attached scsi generic sg3 type 0
Both disks are recognized.
Configuring the RAID on openSUSE
I had some difficulty understanding how to activate the RAID on this card until I found out it’s a different card than what I’m used to: this one is SOFT RAID. In contrast to regular hardware RAID, here the OS is in charge of copying the blocks from one drive to the other.
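A quick sanity check is to look at how the kernel sees the card; if it shows up as a plain SATA controller rather than a RAID controller, the mirroring is done in the md layer:
# lspci | grep -iE 'raid|sata'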
The following commands were run on the openSUSE machine to create and activate the never-before-configured RAID.
CREATE
# mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb /dev/sdc
# mkfs.ext3 /dev/md0
# mkdir /backupRaid
# mount /dev/md0 /backupRaid
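Note that none of this survives a reboot by itself: the array may reassemble under a different name (see Troubleshooting below), and nothing mounts it automatically. A minimal sketch for making it persistent, using the standard mdadm.conf and fstab mechanisms:
# mdadm --detail --scan >> /etc/mdadm.conf
# echo '/dev/md0 /backupRaid ext3 defaults 0 2' >> /etc/fstab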
VERIFY
# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 20G 3.0G 16G 16% /
devtmpfs 622M 40K 622M 1% /dev
tmpfs 628M 480K 627M 1% /dev/shm
tmpfs 628M 668K 627M 1% /run
/dev/mapper/system-root 20G 3.0G 16G 16% /
tmpfs 628M 0 628M 0% /sys/fs/cgroup
tmpfs 628M 668K 627M 1% /var/lock
tmpfs 628M 0 628M 0% /media
tmpfs 628M 668K 627M 1% /var/run
/dev/sdc1 152M 30M 114M 21% /boot
/dev/mapper/system-home 25G 881M 23G 4% /home
/dev/md0 917G 15G 856G 2% /backupRaid
# cat /proc/mdstat
Personalities : [linear] [raid1]
md0 : active raid1 sdc[1] sdb[0]
976762496 blocks [2/2] [UU]
unused devices: <none>
Also check /var/log/messages for the array-creation log entries.
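For a more detailed view of the array, including the state of each member and the progress of the initial sync, mdadm itself can report:
# mdadm --detail /dev/md0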
Crontab configuration
I decided this backup would keep 1-day, 1-week, and 2-week snapshots. For this I built three scripts, each backing up the data to a different directory (I could have used a single script with three different parameters; a sketch of that variant follows the example below).
Here’s an example of one of the scripts:
rsync -v -r --rsh='ssh -p22' root@<MYSERVERIP>:/var/www/vhosts/ /backupRaid/hosting_webserver/daily
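The single parameterized script mentioned above could look roughly like this (a sketch; the script name and the per-snapshot directory layout are assumptions):
#!/bin/bash
# rsync_backup.sh -- usage: rsync_backup.sh daily|weekly_721|weekly_114
DEST="/backupRaid/hosting_webserver/$1"
mkdir -p "$DEST"
rsync -v -r --rsh='ssh -p22' root@<MYSERVERIP>:/var/www/vhosts/ "$DEST"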
The crontab of the user performing the backups:
00 23 7,21 * * /path/to/rsync.weekly_721 > /var/log/hosting_backup_weekly_721 2>&1
00 23 1,14 * * /path/to/rsync.weekly_114 > /var/log/hosting_backup_weekly_114 2>&1
00 03 * * * /path/to/rsync.daily > /var/log/hosting_backup_daily 2>&1
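These entries go into the crontab of the user that holds the SSH key (root, in this setup), typically edited with:
# crontab -e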
Conclusion
Well, this solution is still working today (almost three years now) and has proven very important for the company, as well as for catching customers’ mistakes.
Appendix A
In case you’d like to REMOVE a disk from the RAID, you can use these commands:
Mark the device as failed in the array:
# mdadm --manage /dev/mdfoo --fail /dev/sdfoo
REMOVE the device from the array:
# mdadm --manage /dev/mdfoo --remove /dev/sdfoo
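To put a replacement disk back into the array (mdadm will then rebuild the mirror onto it):
# mdadm --manage /dev/mdfoo --add /dev/sdfoo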
In case you’d like to STOP THE RAID:
# mdadm --manage --stop /dev/mdfoo
To start the md array as it would start on boot:
# /etc/rc.d/boot.md start
Starting MD Raid mdadm: /dev/md/0 has been started with 2 drives.
done
Troubleshooting
If the array comes back under an unexpected name after a reboot (such as /dev/md127, which happens when it isn’t listed in /etc/mdadm.conf), scan for it, check its state, and stop it:
# mdadm --assemble --scan
mdadm: /dev/md/0_0 has been started with 2 drives.
# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb[0] sdc[1]
976762496 blocks [2/2] [UU]
unused devices: <none>
# mdadm --manage /dev/md127 --stop
mdadm: stopped /dev/md127
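With the misnamed array stopped, it can be reassembled under its expected name (member devices as shown in the mdstat output above); listing it in /etc/mdadm.conf, as in the CREATE section, makes the name stick across reboots:
# mdadm --assemble /dev/md0 /dev/sdb /dev/sdc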
Enjoy!