Download memtest from http://www.memtest.org/#downiso.
Make sure you download the Pre-Built Binary zip/gz file.
Copy the .bin file into the /tftpboot/linux-install directory and rename it to 'memtest'. You MUST remove the .bin extension or it will NOT boot!
Edit /tftpboot/linux-install/pxelinux.cfg/default and add the following lines.
# vi /tftpboot/linux-install/pxelinux.cfg/default
default linux
prompt 1
timeout 100
label linux
kernel memtest
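If your existing default file already has a working install entry, an alternative is to keep that entry as the default and add memtest as an extra label. This is just a sketch; the vmlinuz and initrd.img names are placeholders for whatever your current entry uses.
default linux
prompt 1
timeout 100
label linux
kernel vmlinuz
append initrd=initrd.img
label memtest
kernel memtest
At the boot: prompt, type 'memtest' to run the memory test instead of the default entry.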
Jun 28, 2010
Jun 12, 2010
Recovering Failed RAID Disk on Linux
Objective:
If the primary disk fails, boot the OS from the secondary disk in a software RAID 1 setup (GRUB is not installed on the secondary disk).
Procedure:
1. If the disk is hot-swappable, simply remove it. If it isn't, you'll need to schedule downtime and remove the disk.
2. Replace the failed disk and restart your machine.
a. If the failed disk isn't the boot disk, skip to step 7.
b. If the failed disk is the boot disk, continue with step 3.
3. Boot into rescue mode using the first installation CD, mount the boot filesystem under a temporary mount point, and do the following:
# mkdir /tmp/recovery
# mount /dev/sda1 /tmp/recovery
# cd /tmp/recovery
# grub --batch
This may take a while, as grub probes and tries to guess where all of your drives are.
4. Once grub is finished probing, do the following at the "grub>" prompt:
grub> root (hd0,0)
root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
...
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2
/grub/grub.conf"... succeeded
grub> exit
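As an aside, here is an alternative sketch, not the procedure documented above: if your rescue environment has already mounted the installed system under /mnt/sysimage, GRUB legacy can also be reinstalled from a chroot instead of the interactive grub session:
# chroot /mnt/sysimage
# grub-install /dev/sda
# exit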
5. Now verify that all is well while still running off of the CD, like so:
# cat /tmp/recovery/grub/device.map
(hd0) /dev/sda
6. Unmount the boot filesystem and reboot the system.
# umount /tmp/recovery
# reboot
Be sure to set the GRUB device map entry for hd0 to /dev/hdc if /dev/hda is the disk that died.
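For illustration (the device names here are hypothetical and depend on which disk actually failed), the map could be adjusted like this before unmounting the boot filesystem:
# vi /tmp/recovery/grub/device.map
(hd0) /dev/hdc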
7. After replacing the disk, check the RAID status:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
104320 blocks [2/1] [U_]
md1 : active raid1 sdb2[1] sda2[0]
522048 blocks [2/2] [UU]
md2 : active raid1 sda3[0]
4610560 blocks [2/1] [U_]
unused devices: <none>
8. Repartition the new disk with sfdisk so that its partition table ends up looking exactly the same as the surviving disk's:
# sfdisk -d /dev/sda > mirror
# sfdisk /dev/sdb < mirror
The new disk's partition table should now look almost identical to the old one's.
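As an optional double-check (not in the original steps), list both partition tables and compare the start, size, and Id columns by eye:
# sfdisk -l /dev/sda
# sfdisk -l /dev/sdb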
9. Now add the new partitions back into their respective arrays:
# mdadm -a /dev/md0 /dev/sdb1
# mdadm -a /dev/md1 /dev/sdb2
# mdadm -a /dev/md2 /dev/sdb3
10. Check the RAID details with the following commands:
# mdadm -D /dev/md0
# mdadm -D /dev/md1
# mdadm -D /dev/md2
Once the RAID resync is done, restart and check the status again.
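The rebuild can also be watched live while it runs (an optional extra, not part of the original procedure):
# watch -n 5 cat /proc/mdstat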
Jun 4, 2010
Upgrade an old HDD to a new HDD in RHEL 5.x
Objective:
Replace the old HDD with a new one without data loss (the new HDD is larger than the old one).
Releases:
Red Hat Enterprise Linux 5.x
Procedure:
Note: Check the drives in Linux. /dev/sda should be your old drive. /dev/sdb should be your new drive.
- Copy the MBR (Master Boot Record) boot code from the first HDD to the second HDD (only the first 446 bytes; the partition table is copied in the next step).
# dd if=/dev/sda of=/dev/sdb bs=446 count=1
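As an optional sanity check (not part of the original steps), the copied boot code can be compared byte for byte; no output from cmp means the first 446 bytes of the two disks match:
# cmp -n 446 /dev/sda /dev/sdb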
- Copy the partition table from the first HDD to the second HDD:
# sfdisk -d /dev/sda > part_back
# sfdisk /dev/sdb < part_back
Note: If you have more than four partitions, follow the steps below so that the extended partition is recreated and the extra free space on the new disk can be used.
a. First, keep only the first three (primary) partitions by editing the part_back file in vi:
# vi part_back
# Partition table of /dev/sda
Unit: sectors
/dev/sda1 : start= 63, size= 208782, Id=83, bootable
/dev/sda2 : start= 208845, size= 1044225, Id=82
/dev/sda3 : start= 1253070, size= 9012465, Id=83
b. Now apply this partition dump to the second HDD:
# sfdisk /dev/sdb < part_back
c. Manually create the extended partition (and the logical partitions inside it) with fdisk:
# fdisk /dev/sdb
d. Now dump the second HDD's partition table to a different file:
# sfdisk -d /dev/sdb > part_back1
e. Once again take the partition dump of the first HDD:
# sfdisk -d /dev/sda > part_back
f. Replace the 4th (extended) partition entry in part_back (and any logical-partition lines) with the corresponding lines from part_back1, then save the file. A sketch of the merged file is shown after step g.
g. Now apply the merged dump to the second HDD with sfdisk:
# sfdisk /dev/sdb < part_back
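For illustration only, the merged part_back might look roughly like the sketch below. The start and size values on the /dev/sda4 and /dev/sda5 lines are hypothetical; in practice they are whatever appears for the extended and logical partitions in part_back1:
# Partition table of /dev/sda
Unit: sectors
/dev/sda1 : start= 63, size= 208782, Id=83, bootable
/dev/sda2 : start= 208845, size= 1044225, Id=82
/dev/sda3 : start= 1253070, size= 9012465, Id=83
/dev/sda4 : start= 10265535, size= 146801655, Id= 5
/dev/sda5 : start= 10265598, size= 146801592, Id=83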
- Now manually create the filesystems (and swap) on all the new partitions, according to each partition's Id:
# mke2fs -j /dev/sdb1
# mkswap /dev/sdb2
# mke2fs -j /dev/sdb3
# mke2fs -j /dev/sdb5
- Copy the data from each old partition to its new counterpart:
# dd if=/dev/sda1 of=/dev/sdb1 bs=64k
# dd if=/dev/sda2 of=/dev/sdb2 bs=64k
# dd if=/dev/sda3 of=/dev/sdb3 bs=64k
# dd if=/dev/sda5 of=/dev/sdb5 bs=64k
- Now boot from the second hard disk; the OS will come up, and the extra free space on the new disk is available for use.
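One optional follow-up not covered in the original steps: because dd copies the old filesystem image as-is, any new partition that is larger than its source still contains a filesystem of the old, smaller size. An ext3 filesystem can be grown to fill its partition with resize2fs after a forced fsck, for example:
# e2fsck -f /dev/sdb3
# resize2fs /dev/sdb3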