Dec 10 2012

perl – uninstall module

perl -MCPAN -e shell
install CPANPLUS
exit

perl -MCPANPLUS -e shell
help
[General]
h | ? # display help
q # exit
v # version information
[Search]
a AUTHOR … # search by author(s)
m MODULE … # search by module(s)
f MODULE … # list all releases of a module
o [ MODULE … ] # list installed module(s) that aren’t up to date
w # display the result of your last search again
[Operations]
i MODULE | NUMBER … # install module(s), by name or by search number
i URI | … # install module(s), by URI (ie http://foo.com/X.tgz)
t MODULE | NUMBER … # test module(s), by name or by search number
u MODULE | NUMBER … # uninstall module(s), by name or by search number
d MODULE | NUMBER … # download module(s)
l MODULE | NUMBER … # display detailed information about module(s)
r MODULE | NUMBER … # display README files of module(s)
c MODULE | NUMBER … # check for module report(s) from cpan-testers
z MODULE | NUMBER … # extract module(s) and open command prompt in it
[Local Administration]
b # write a bundle file for your configuration
s program [OPT VALUE] # set program locations for this session
s conf [OPT VALUE] # set config options for this session
s mirrors # show currently selected mirrors

Dec 5 2012

mysql trigger

DELIMITER |
CREATE TRIGGER `klienci_before_insert` BEFORE INSERT ON `klienci`
FOR EACH ROW
SET NEW.sha1 = SHA1(NEW.klient_id + NEW.time_add);
|

DELIMITER ;
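
Note: in MySQL, + is numeric addition, not string concatenation, so the trigger above hashes the numeric sum of klient_id and time_add. If the intent is to hash both values combined as a string, CONCAT is probably what is wanted (a sketch against the same klienci table):

DELIMITER |
CREATE TRIGGER `klienci_before_insert` BEFORE INSERT ON `klienci`
FOR EACH ROW
SET NEW.sha1 = SHA1(CONCAT(NEW.klient_id, NEW.time_add));
|

DELIMITER ;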

Sep 28 2012

Wykrywacz kłamstw PL (lie detector, PL)

Aug 31 2012

Kukiz – Paranoid

Jun 21 2012

nBox factory reset / hard reset

CLRA:
1) factory reset: EPG + REC + UP ARROW
2) hard reset: EPG + RES + DOWN ARROW

BSLA/BZZB:
1) factory reset: EPG + REC + UP ARROW
2) hard reset: EPG + RES + RIGHT ARROW

May 25 2012

LVM on RAID 10

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
pvcreate /dev/md1
vgcreate vg-server1 /dev/md1
lvcreate -L4g -nlv-home vg-server1
lvcreate -L2g -nlv-var vg-server1
lvcreate -L1g -nlv-tmp vg-server1

The -L option sets the volume size directly; lowercase -l takes a number of extents or a percentage, e.g.:

lvcreate -l 100%FREE -nlv-home vg (allocates all of the remaining free space in the volume group)

reducing a volume to 100GB (not for XFS filesystems!)

  1. umount /dev/vg-server1/lv-zapas
  2. e2fsck -f /dev/vg-server1/lv-zapas
  3. resize2fs -p  /dev/vg-server1/lv-zapas 100G
  4. lvreduce -L 100G /dev/vg-server1/lv-zapas
  5. e2fsck -f /dev/vg-server1/lv-zapas
  6. resize2fs -p  /dev/vg-server1/lv-zapas 
  7. e2fsck -f /dev/vg-server1/lv-zapas
  8. mount /dev/vg-server1/lv-zapas

extending a volume up to 100GB (not for XFS filesystems!)

  1. umount /dev/vg-server1/lv-home
  2. lvextend -L 100G /dev/vg-server1/lv-home (or lvextend -l +100%FREE /dev/vg-server1/lv-home)
  3. e2fsck -f /dev/vg-server1/lv-home
  4. resize2fs -p  /dev/vg-server1/lv-home
  5. e2fsck -f /dev/vg-server1/lv-home
  6. mount /dev/vg-server1/lv-home


extending volume + 1GB

lvextend -L+1G /dev/vg-server1/lv-var
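
lvextend changes only the size of the volume; the filesystem inside still has to be grown to match. For ext3/ext4 this can be done online, without unmounting:

resize2fs /dev/vg-server1/lv-var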

May 25 2012

reinstalling lilo

Boot from any live CD (in this example / is on /dev/md0 and /boot on /dev/sda1):

mkdir /myraid
mount /dev/md0 /myraid
mount /dev/sda1 /myraid/boot
mount --bind /dev /myraid/dev
mount -t devpts devpts /myraid/dev/pts
mount -t proc proc /myraid/proc
mount -t sysfs sysfs /myraid/sys
chroot /myraid
lilo
exit


May 25 2012

unmounting and deleting an array

umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1
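
If the array is listed in /etc/mdadm.conf, delete its ARRAY line as well so the array is not assembled again at boot (a sketch, assuming the entry begins with ARRAY /dev/md0):

sed -i '/^ARRAY \/dev\/md0/d' /etc/mdadm.conf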

May 9 2012

rsync on Windows – example

rsync -av --delete --delete-excluded --exclude-from=/cygdrive/c/docume~1/rafrom/putty/rsync_exclude --chmod=u+rwx -e "ssh -p 22 -i /cygdrive/c/docume~1/rafrom/putty/id_rsa_rsync" "/cygdrive/c/Fox" rafrom@192.168.100.199:backup/
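
The --exclude-from file holds one pattern per line; a hypothetical rsync_exclude could look like:

*.tmp
Thumbs.db
cache/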

Apr 19 2012

software RAID repair

1. In this example the RAID setup consists of two disks (/dev/sda, /dev/sdb) mirrored as four RAID 1 arrays (md0, md1, md2, md3):
cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb5[1] sda5[0]
 20979264 blocks [2/2] [UU]
md2 : active raid1 sda6[0] sdb6[1]
 20979968 blocks [2/2] [UU]
md3 : active raid1 sdb7[1] sda7[0]
 880264320 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
 20979648 blocks [2/2] [UU]
unused devices: <none>

2. If the RAID fails, you will see:

cat /proc/mdstat
 Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
 md1 : active raid1 sda5[0]
       20979264 blocks [2/1] [U_]
 md2 : active raid1 sda6[0]
       20979968 blocks [2/1] [U_]
 md3 : active raid1 sda7[0]
       880264320 blocks [2/1] [U_]
 md0 : active raid1 sda1[0]
       20979648 blocks [2/1] [U_]
 unused devices: <none>

3. Disconnect the failed HDD.

4. Connect the new HDD.

5. Look in dmesg for the new drive's device letter (here /dev/sde).

6. Remove the failed HDD from the RAID:

mdadm /dev/md0 --fail detached --remove detached
mdadm /dev/md1 --fail detached --remove detached
mdadm /dev/md2 --fail detached --remove detached
mdadm /dev/md3 --fail detached --remove detached

7. Copy the partition structure from the old, GOOD HDD to the new one:

sfdisk -d /dev/sda | sfdisk /dev/sde
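
Note: sfdisk -d works with MBR partition tables. For GPT disks, sgdisk can replicate the layout instead (the first command copies the table from /dev/sda to /dev/sde, the second randomizes the GUIDs so the disks stay distinct):

sgdisk -R=/dev/sde /dev/sda
sgdisk -G /dev/sde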

8. Add the new HDD to the RAID:

mdadm --add /dev/md0 /dev/sde1
mdadm --add /dev/md1 /dev/sde5
mdadm --add /dev/md2 /dev/sde6
mdadm --add /dev/md3 /dev/sde7

9. Rebuilding starts automatically. You can check the RAID status with:

cat /proc/mdstat

10. When ALL partitions have finished rebuilding, run:

mdadm --detail --scan >> /etc/mdadm.conf
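
Note that >> appends: if /etc/mdadm.conf already contains ARRAY lines for these arrays, remove the old entries first so the file does not end up with duplicates, e.g.:

sed -i '/^ARRAY/d' /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf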