nBox factory reset / hard reset
CLRA:
1) factory reset: EPG + REC + UP ARROW
2) hard reset: EPG + RES + DOWN ARROW
BSLA/BZZB:
1) factory reset: EPG + REC + UP ARROW
2) hard reset: EPG + RES + RIGHT ARROW
LVM on RAID 10
mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
pvcreate /dev/md1
vgcreate vg-server1 /dev/md1
lvcreate -L4g -nlv-home vg-server1
lvcreate -L2g -nlv-var vg-server1
lvcreate -L1g -nlv-tmp vg-server1
The -L option specifies the size of the volume.
lvcreate -l 100%FREE -nlv-home vg (allocates all of the unallocated space in the volume group)
Reducing a volume to 100GB (not for XFS filesystems!)
- umount /dev/vg-server1/lv-zapas
- e2fsck -f /dev/vg-server1/lv-zapas
- resize2fs -p /dev/vg-server1/lv-zapas 100G
- lvreduce -L 100G /dev/vg-server1/lv-zapas
- e2fsck -f /dev/vg-server1/lv-zapas
- resize2fs -p /dev/vg-server1/lv-zapas
- e2fsck -f /dev/vg-server1/lv-zapas
- mount /dev/vg-server1/lv-zapas
Extending a volume up to 100GB (not for XFS filesystems!)
- umount /dev/vg-server1/lv-home
- lvextend -L 100G /dev/vg-server1/lv-home (or: lvextend -l +100%FREE /dev/vg-server1/lv-home)
- e2fsck -f /dev/vg-server1/lv-home
- resize2fs -p /dev/vg-server1/lv-home
- e2fsck -f /dev/vg-server1/lv-home
- mount /dev/vg-server1/lv-home
Extending a volume by 1GB
lvextend -L+1G /dev/vg-server1/lv-var
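The shrink sequence above can be wrapped in a small function. This is a sketch, not part of the original procedure: by default it only echoes the commands (a dry run), and the LV path and size below are just this document's example values.

```shell
# Dry-run sketch of the ext4 shrink sequence above. RUN defaults to
# "echo", so commands are only printed; set RUN= (empty) to execute.
shrink_ext4_lv() {
  lv="$1"; size="$2"; run="${RUN-echo}"
  $run umount "$lv"
  $run e2fsck -f "$lv"
  $run resize2fs -p "$lv" "$size"   # shrink the filesystem first
  $run lvreduce -L "$size" "$lv"    # then shrink the LV to match
  $run e2fsck -f "$lv"
  $run resize2fs -p "$lv"           # regrow fs to the exact LV size
  $run e2fsck -f "$lv"
  $run mount "$lv"
}

shrink_ext4_lv /dev/vg-server1/lv-zapas 100G   # dry run: prints the commands
```

Review the printed commands before clearing RUN; shrinking in the wrong order destroys the filesystem.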
Reinstalling LILO
Boot from any Live CD
mkdir /myraid
mount /dev/md0 /myraid
mount /dev/sda1 /myraid/boot
mount --bind /dev /myraid/dev
mount -t devpts devpts /myraid/dev/pts
mount -t proc proc /myraid/proc
mount -t sysfs sysfs /myraid/sys
chroot /myraid
lilo
exit
Unmounting and deleting an array
umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1
rsync on Windows – example
rsync -av --delete --delete-excluded --exclude-from=/cygdrive/c/docume~1/rafrom/putty/rsync_exclude --chmod=u+rwx -e "ssh -p 22 -i /cygdrive/c/docume~1/rafrom/putty/id_rsa_rsync" "/cygdrive/c/Fox" rafrom@192.168.100.199:backup/
Software RAID repair
1. In this example the RAID consists of two disks (/dev/sda, /dev/sdb) and four arrays (md0, md1, md2, md3)
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb5[1] sda5[0] 20979264 blocks [2/2] [UU]
md2 : active raid1 sda6[0] sdb6[1] 20979968 blocks [2/2] [UU]
md3 : active raid1 sdb7[1] sda7[0] 880264320 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0] 20979648 blocks [2/2] [UU]
unused devices: <none>
2. If the RAID fails you will see:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sda5[0] 20979264 blocks [2/1] [U_]
md2 : active raid1 sda6[0] 20979968 blocks [2/1] [U_]
md3 : active raid1 sda7[0] 880264320 blocks [2/1] [U_]
md0 : active raid1 sda1[0] 20979648 blocks [2/1] [U_]
unused devices: <none>
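A quick way to spot degraded arrays is to look for a status group like [U_] that contains an underscore (a missing member). This helper is a sketch, not part of the original notes; it takes a file argument so it can be tried on a saved copy of /proc/mdstat.

```shell
# List the names of degraded md arrays. In /proc/mdstat the status
# group ([UU], [U_], ...) may be on a continuation line, so remember
# the most recent "mdN :" header and print it when a group with an
# underscore (missing member) appears.
degraded_arrays() {
  awk '/^md[0-9]+ :/ { name = $1 }
       /\[[U_]*_[U_]*\]/ { print name }' "${1:-/proc/mdstat}"
}

# usage: degraded_arrays            # reads /proc/mdstat
#        degraded_arrays saved.txt  # reads a saved copy
```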
3. Disconnect the failed hdd
4. Connect the new hdd
5. Look into dmesg for the new hdd's letter
6. Remove the failed hdd from the RAID
mdadm /dev/md0 --fail detached --remove detached
mdadm /dev/md1 --fail detached --remove detached
mdadm /dev/md2 --fail detached --remove detached
mdadm /dev/md3 --fail detached --remove detached
7. Copy the partition structure from the old GOOD hdd to the new one
sfdisk -d /dev/sda | sfdisk /dev/sde
8. Add the new hdd into the RAID
mdadm --add /dev/md0 /dev/sde1
mdadm --add /dev/md1 /dev/sde5
mdadm --add /dev/md2 /dev/sde6
mdadm --add /dev/md3 /dev/sde7
9. Rebuilding starts automatically; you can see the RAID status with
cat /proc/mdstat
10. and WHEN ALL PARTITIONS ARE REBUILT run:
mdadm --detail --scan >> /etc/mdadm.conf
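Note that ">>" appends, so re-running this step duplicates ARRAY lines in mdadm.conf. A possible idempotent variant (my sketch, not from the original notes) filters the scan output and appends only lines not already present:

```shell
# Append each stdin line to the given file only if it is not already
# there verbatim (-x: whole line, -F: fixed string, not a regex).
append_unique() {
  conf="$1"
  while read -r line; do
    grep -qxF -- "$line" "$conf" 2>/dev/null || printf '%s\n' "$line" >> "$conf"
  done
}

# usage: mdadm --detail --scan | append_unique /etc/mdadm.conf
```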
RAID-1 on an existing filesystem -> example
Existing filesystem is on /dev/sda
/dev/root /
/dev/sda5 /var
/dev/sda6 /usr
/dev/sda8 /home
Connect new disk (for example /dev/sdb)
Copy the partition structure from sda to sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
or, if the disks differ in size,
create partitions of the same size (or bigger) on sdb
and change their type to fd (Linux raid autodetect)
Change the disk's UUID (as shown by blkid):
tune2fs -U random /dev/sdb
Next, create the single-disk RAID-1 array. Note the "missing" keyword is specified as one of our devices. We are going to fill this missing device later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
or
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1
and for other partitions
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb6
mdadm --create /dev/md3 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb7
cat /proc/mdstat shows the newly created arrays
Use the file system of your preference here. I'll use ext4 for this guide.
mkfs -t ext4 -j -L RAID-ONE /dev/md0
mkfs -t ext4 -j -L RAID-ONE /dev/md1
mkfs -t ext4 -j -L RAID-ONE /dev/md2
mkfs -t ext4 -j -L RAID-ONE /dev/md3
Initialize swap on the new swap partition:
mkswap -L NEW-SWAP /dev/sdb2
The new RAID-1 arrays are ready to start accepting data. Now we need to mount the arrays and copy everything from the old system to the new one:
mkdir /tmp/md0 /tmp/md1 /tmp/md2 /tmp/md3
mount /dev/md0 /tmp/md0
mount /dev/md1 /tmp/md1
mount /dev/md2 /tmp/md2
mount /dev/md3 /tmp/md3
rsync -avxHAXS --delete --progress /home/ /tmp/md3
rsync -avxHAXS --delete --progress /var/ /tmp/md2
rsync -avxHAXS --delete --progress /usr/ /tmp/md1
rsync -avxHAXS --delete --progress / /tmp/md0
Edit /etc/fstab and change the mount points to the appropriate /dev/md0 .. /dev/md3
and add line:
/dev/sdb2 swap swap defaults 0 0
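The fstab edit can also be done with sed. The mapping below is taken from this example (sda5 -> md2 for /var, sda6 -> md1 for /usr, sda8 -> md3 for /home); this is only a sketch, and it writes to stdout so the result can be reviewed before replacing /etc/fstab.

```shell
# Rewrite the device column of an fstab from the old sda partitions to
# the md arrays, using this example's mapping. Prints the rewritten
# file; it does not modify anything in place.
fstab_to_md() {
  sed -e 's#^/dev/sda5\([[:space:]]\)#/dev/md2\1#' \
      -e 's#^/dev/sda6\([[:space:]]\)#/dev/md1\1#' \
      -e 's#^/dev/sda8\([[:space:]]\)#/dev/md3\1#' "$1"
}

# usage: fstab_to_md /etc/fstab > /tmp/fstab.new
#        (inspect /tmp/fstab.new, then copy it over /etc/fstab)
```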
Reboot with the option root=/dev/md0 (where my root filesystem was located).
Optionally switch to single-user mode first: telinit 1
First, open /dev/sda with fdisk and change all the partitions you want added to the array to type fd (Linux raid autodetect).
Then, for each degraded array, add the appropriate non-array device to it:
mdadm /dev/md0 -a /dev/sda1
(wait for recovery to finish)
umount /dev/sda5
mdadm /dev/md1 -a /dev/sda5
(wait for recovery to finish)
umount /dev/sda6
mdadm /dev/md2 -a /dev/sda6
(wait for recovery to finish)
umount /dev/sda8
mdadm /dev/md3 -a /dev/sda8
(wait for recovery to finish)
Ensure your /etc/lilo.conf has the correct setup:
boot=/dev/md0
raid-extra-boot=mbr-only
root=/dev/md0
and type:
lilo
LILO will write boot information to each of the individual RAID devices' boot sectors, so if either /boot or your root partition is on a failed disk, you'll still be able to boot.
Create /etc/mdadm/mdadm.conf and put in it the line:
DEVICE /dev/sda* /dev/sdb*
and WHEN ALL PARTITIONS ARE REBUILT run:
mdadm --detail --scan >> /etc/mdadm.conf
pecl ERROR: `phpize' failed
Solution from: http://kagan.mactane.org/blog/2009/05/11/workaround-for-pearpecl-failure-with-message-error-phpize-failed/
If you've gotten an "ERROR: `phpize' failed" message when trying to run a "pecl install" or "pear install" command, try running phpinfo(). If you see --enable-sigchild in the "Configure Command" section near the very top, then you're most likely being bitten by this bug.
Potential Fixes and Workarounds
The PHP dev team recommends recompiling without the offending flag.
However, you may not be able to do that, for any of various reasons. (You may have installed from a binary package, for instance — like most people these days.) Or it may simply seem like an excessive hassle. I offer the following patch as-is, without any guarantee or support.
First, ensure that you have the latest version of PEAR::Builder. Look in your PEAR/Builder.php file. On most Linux and Unix installations, this is likely to be in /usr/lib/php/PEAR/Builder.php, or possibly /usr/local/lib/php/PEAR/Builder.php.
On Windows systems, PHP might be installed nearly anywhere, but supposing it's in c:\php, then the file you're looking for will be in c:\php\PEAR\PEAR\Builder.php (yes, that's two PEARs in a row).
Check the "@version" line in the big comment block at the beginning of the file; the line you want should be around line 19 or so. If it says it's less than version 1.38 (the latest one, at the time I'm writing this post), then try upgrading. Running "pear upgrade pear" should work. Then you can install this patch file:
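A quick way to pull that version number out without opening the file is a grep one-liner. This helper is my sketch (the path and the exact @version format in your Builder.php may differ):

```shell
# Print the first X.Y version number found on the @version line of the
# given file. Path is an argument, so point it at your actual install.
builder_version() {
  grep -m1 '@version' "$1" | grep -oE '[0-9]+\.[0-9]+'
}

# usage: builder_version /usr/lib/php/PEAR/Builder.php
```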
Download the patch file and place it somewhere on your machine. Log in and cd to the PEAR directory that contains the Builder.php file. Then run the patch command. In the following example, I've placed the patch file in root's home directory:
root@finrod:~# ls
loadlin16c.txt loadlin16c.zip patch-pear-builder-1.38.txt
root@finrod:~# cd /usr/lib/php/PEAR
root@finrod:/usr/lib/php/PEAR# cp Builder.php Builder.bak.php
root@finrod:/usr/lib/php/PEAR# patch -p0 < /root/patch-pear-builder-1.38.txt
patching file Builder.php
root@finrod:/usr/lib/php/PEAR#
Naturally, if the patch file doesn't work for some reason, or it breaks things, you can just cp the backup file back into place.