Archive for the Linux Category

Dec 10 2012

perl – uninstall module

perl -MCPAN -e shell
install CPANPLUS
exit

perl -MCPANPLUS -e shell
help
[General]
h | ? # display help
q # exit
v # version information
[Search]
a AUTHOR … # search by author(s)
m MODULE … # search by module(s)
f MODULE … # list all releases of a module
o [ MODULE … ] # list installed module(s) that aren’t up to date
w # display the result of your last search again
[Operations]
i MODULE | NUMBER … # install module(s), by name or by search number
i URI | … # install module(s), by URI (ie http://foo.com/X.tgz)
t MODULE | NUMBER … # test module(s), by name or by search number
u MODULE | NUMBER … # uninstall module(s), by name or by search number
d MODULE | NUMBER … # download module(s)
l MODULE | NUMBER … # display detailed information about module(s)
r MODULE | NUMBER … # display README files of module(s)
c MODULE | NUMBER … # check for module report(s) from cpan-testers
z MODULE | NUMBER … # extract module(s) and open command prompt in it
[Local Administration]
b # write a bundle file for your configuration
s program [OPT VALUE] # set program locations for this session
s conf [OPT VALUE] # set config options for this session
s mirrors # show currently selected mirrors
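
To actually uninstall a module, use the u command listed above. A minimal sketch, with Foo::Bar as a placeholder module name:

perl -MCPANPLUS -e shell
u Foo::Bar # uninstall the named module (placeholder name)
q # quit the shell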

Dec 5 2012

mysql trigger

DELIMITER |
CREATE TRIGGER `klienci_before_insert` BEFORE INSERT ON `klienci`
FOR EACH ROW
SET NEW.sha1 = SHA1(CONCAT(NEW.klient_id, NEW.time_add));
|

DELIMITER ;

(use CONCAT for the hash input – in MySQL the + operator does numeric addition, not string concatenation)
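
A quick way to check that the trigger fires (a sketch – the database name and the inserted values are assumptions):

# testdb and the inserted values are placeholders
mysql -u root -p testdb <<'SQL'
INSERT INTO klienci (klient_id, time_add) VALUES (1, NOW());
SELECT klient_id, sha1 FROM klienci WHERE klient_id = 1;
SQL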

May 25 2012

LVM on RAID 10

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
pvcreate /dev/md1
vgcreate vg-server1 /dev/md1
lvcreate -L4g -nlv-home vg-server1
lvcreate -L2g -nlv-var vg-server1
lvcreate -L1g -nlv-tmp vg-server1

The -L option specifies the size of the volume.

lvcreate -l 100%FREE -nlv-home vg (allocates all remaining free space in the volume group to the volume)
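
To start using a new volume, create a filesystem on it and mount it – a minimal sketch, assuming ext4 and the /home mount point:

mkfs -t ext4 /dev/vg-server1/lv-home
mount /dev/vg-server1/lv-home /home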

reducing a volume to 100GB (not for XFS – XFS filesystems cannot be shrunk!)

  1. umount /dev/vg-server1/lv-zapas
  2. e2fsck -f /dev/vg-server1/lv-zapas
  3. resize2fs -p  /dev/vg-server1/lv-zapas 100G
  4. lvreduce -L 100G /dev/vg-server1/lv-zapas
  5. e2fsck -f /dev/vg-server1/lv-zapas
  6. resize2fs -p  /dev/vg-server1/lv-zapas 
  7. e2fsck -f /dev/vg-server1/lv-zapas
  8. mount /dev/vg-server1/lv-zapas
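
Afterwards, sanity-check the new sizes (df only works once the volume is mounted again):

lvs vg-server1
df -h /dev/vg-server1/lv-zapas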

extending a volume up to 100GB (not for XFS filesystems!)

  1. umount /dev/vg-server1/lv-home
  2. lvextend -L 100G /dev/vg-server1/lv-home (or lvextend -l +100%FREE /dev/vg-server1/lv-home)
  3. e2fsck -f /dev/vg-server1/lv-home
  4. resize2fs -p  /dev/vg-server1/lv-home
  5. e2fsck -f /dev/vg-server1/lv-home
  6. mount /dev/vg-server1/lv-home
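
Note that ext3/ext4 can also be grown online (while mounted) on modern kernels, so when only extending, the umount/mount steps can be skipped – a minimal sketch:

lvextend -L 100G /dev/vg-server1/lv-home
resize2fs /dev/vg-server1/lv-home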


extending a volume by 1GB

lvextend -L+1G /dev/vg-server1/lv-var
resize2fs /dev/vg-server1/lv-var (grow the filesystem to match the new volume size)

May 25 2012

reinstalling lilo

Boot from any Live CD

mkdir /myraid
mount /dev/md0 /myraid
mount /dev/sda1 /myraid/boot
mount --bind /dev /myraid/dev
mount -t devpts devpts /myraid/dev/pts
mount -t proc proc /myraid/proc
mount -t sysfs sysfs /myraid/sys
chroot /myraid
lilo
exit


May 25 2012

unmounting and deleting an array

umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1
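
If the array was recorded in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf, depending on the distribution), also delete its ARRAY line so it is not reassembled at boot. A sketch, assuming the entry is for /dev/md0:

sed -i '/^ARRAY \/dev\/md0/d' /etc/mdadm.conf # back up the file first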

Apr 19 2012

software RAID repair

1. In this example the RAID setup consists of two disks (/dev/sda, /dev/sdb) and four mirrored arrays (md0, md1, md2, md3):
cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb5[1] sda5[0]
 20979264 blocks [2/2] [UU]
md2 : active raid1 sda6[0] sdb6[1]
 20979968 blocks [2/2] [UU]
md3 : active raid1 sdb7[1] sda7[0]
 880264320 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
 20979648 blocks [2/2] [UU]
unused devices: <none>

2. If the RAID fails (here /dev/sdb has dropped out of every array) you will see:

cat /proc/mdstat
 Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
 md1 : active raid1 sda5[0]
       20979264 blocks [2/1] [U_]
 md2 : active raid1 sda6[0]
       20979968 blocks [2/1] [U_]
 md3 : active raid1 sda7[0]
       880264320 blocks [2/1] [U_]
 md0 : active raid1 sda1[0]
       20979648 blocks [2/1] [U_]
 unused devices:

3. Disconnect the failed hdd

4. Connect the new hdd

5. Look in dmesg for the new drive's letter

6. Remove the failed hdd from the RAID (the detached keyword matches member devices that are no longer present in the system):

mdadm /dev/md0 --fail detached --remove detached
mdadm /dev/md1 --fail detached --remove detached
mdadm /dev/md2 --fail detached --remove detached
mdadm /dev/md3 --fail detached --remove detached

7. Copy the partition structure from the old GOOD hdd to the new one:

sfdisk -d /dev/sda | sfdisk /dev/sde

8. Add the new hdd into the RAID:

mdadm --add /dev/md0 /dev/sde1
mdadm --add /dev/md1 /dev/sde5
mdadm --add /dev/md2 /dev/sde6
mdadm --add /dev/md3 /dev/sde7

9. Rebuilding starts automatically. You can check the RAID status with:

cat /proc/mdstat
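
or watch it refresh continuously:

watch -n 5 cat /proc/mdstat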

 10. and WHEN ALL ARRAYS ARE REBUILT run:

mdadm --detail --scan >> /etc/mdadm.conf

Apr 18 2012

raid1 on an existing filesystem – example

Existing filesystem is on /dev/sda

/dev/root  /
/dev/sda5 /var
/dev/sda6 /usr
/dev/sda8  /home

Connect new disk (for example /dev/sdb)

Copy the partition structure from sda to sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

or, if the disks differ in size:

create partitions of the same size (or bigger) on sdb

and change their type to fd (Linux raid autodetect)

change the filesystem UUIDs on the new disk so blkid does not report duplicates (tune2fs works per filesystem, so run it for each partition that carries one):

tune2fs -U random /dev/sdb1

Next, create the single-disk RAID-1 array. Note the “missing” keyword is specified as one of our devices. We are going to fill this missing device later.

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

or

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1

and for other partitions

mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb6
mdadm --create /dev/md3 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb7

cat /proc/mdstat now shows the newly created arrays

Use the filesystem of your preference here. I'll use ext4 for this guide (giving each filesystem its own label):

mkfs -t ext4 -L RAID-MD0 /dev/md0
mkfs -t ext4 -L RAID-MD1 /dev/md1
mkfs -t ext4 -L RAID-MD2 /dev/md2
mkfs -t ext4 -L RAID-MD3 /dev/md3

Initialize swap on the swap partition:

mkswap -L NEW-SWAP /dev/sdb2

The new RAID-1 arrays are ready to start accepting data! So now we need to mount them and copy everything from the old system to the new one:

 mkdir /tmp/md0 /tmp/md1 /tmp/md2 /tmp/md3
 mount /dev/md0 /tmp/md0

 mount /dev/md1 /tmp/md1
 mount /dev/md2 /tmp/md2
 mount /dev/md3 /tmp/md3
 rsync -avxHAXS --delete --progress /home/ /tmp/md3
 rsync -avxHAXS --delete --progress /var/ /tmp/md2
 rsync -avxHAXS --delete --progress /usr/ /tmp/md1
 rsync -avxHAXS --delete --progress / /tmp/md0

edit /etc/fstab (the copy the new system will boot from, i.e. /tmp/md0/etc/fstab) and change the mount devices to the appropriate /dev/md0 .. /dev/md3
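
For example (a sketch; mount options and fsck pass numbers are typical defaults):

/dev/md0   /      ext4  defaults  0 1
/dev/md1   /usr   ext4  defaults  0 2
/dev/md2   /var   ext4  defaults  0 2
/dev/md3   /home  ext4  defaults  0 2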

and add line:

/dev/sdb2  swap swap defaults 0 0

reboot with the option root=/dev/md0 (where my root filesystem is now located).

Optionally switch to single-user mode (telinit 1) so the filesystems stay quiet while you finish the migration.

First, open /dev/sda with fdisk and change all the partitions you want added to the array to type fd – Linux raid autodetect.

Then, for each degraded array, add the appropriate non-array device to it:

mdadm /dev/md0 -a /dev/sda1

(wait for the recovery to finish)

umount /dev/sda5
mdadm /dev/md1 -a /dev/sda5

(wait for the recovery to finish)

umount /dev/sda6
mdadm /dev/md2 -a /dev/sda6

(wait for the recovery to finish)

umount /dev/sda8
mdadm /dev/md3 -a /dev/sda8

(wait for the recovery to finish)

Ensure your /etc/lilo.conf has the correct setup:
boot=/dev/md0
raid-extra-boot=mbr-only
root=/dev/md0

and type:

lilo

LILO will write boot information to the boot sector of each individual RAID member disk, so if the disk holding /boot or your root partition fails, you’ll still be able to boot.

Create /etc/mdadm/mdadm.conf and put in it the line:
DEVICE /dev/sda* /dev/sdb*

and WHEN ALL ARRAYS ARE REBUILT run:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Feb 13 2012

pecl ERROR: `phpize’ failed

Solution from:  http://kagan.mactane.org/blog/2009/05/11/workaround-for-pearpecl-failure-with-message-error-phpize-failed/

If you’ve gotten an “ERROR: `phpize’ failed” message when trying to run a “pecl install” or “pear install” command, try running phpinfo() — if you see --enable-sigchild in the “Configure Command” section near the very top, then you’re most likely being bitten by this bug.
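
A quick way to check from the command line (php -i prints the same output as phpinfo()):

php -i | grep -- --enable-sigchild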


Potential Fixes and Workarounds

The PHP dev team recommends recompiling without the offending flag.

However, you may not be able to do that, for any of various reasons. (You may have installed from a binary package, for instance — like most people these days.) Or it may simply seem like an excessive hassle. I offer the following patch as-is, without any guarantee or support.

First, ensure that you have the latest version of PEAR::Builder. Look in your PEAR/Builder.php file — on most Linux and Unix installations, this is likely to be in /usr/lib/php/PEAR/Builder.php, or possibly /usr/local/lib/php/PEAR/Builder.php.

On Windows systems, PHP might be installed nearly anywhere, but supposing it’s in c:\php, then the file you’re looking for will be in c:\php\PEAR\PEAR\Builder.php (yes, that’s two PEARs in a row).

Check the “@version” line in the big comment block at the beginning of the file; the line you want should be around line 19 or so. If it says it’s less than version 1.38 (the latest one, at the time I’m writing this post), then try upgrading. Running “pear upgrade pear” should work. Then you can install this patch file:


Download the patch file and place it somewhere on your machine. Log in and cd to the PEAR directory that contains the Builder.php file. Then run the patch command. In the following example, I’ve placed the patch file in root’s home directory:

root@finrod:~# ls
loadlin16c.txt loadlin16c.zip patch-pear-builder-1.38.txt
root@finrod:~# cd /usr/lib/php/PEAR
root@finrod:/usr/lib/php/PEAR# cp Builder.php Builder.bak.php
root@finrod:/usr/lib/php/PEAR# patch -p0 < /root/patch-pear-builder-1.38.txt
patching file Builder.php
root@finrod:/usr/lib/php/PEAR#

Naturally, if the patch file doesn’t work for some reason, or it breaks things, you can just cp the backup file back into place.


Jan 9 2012

Postfix + Mysql + dovecot + Maildir

Entry in /etc/dovecot/conf.d/10-mail.conf:
mail_location = maildir:/var/vmail/%d/%n/Maildir

In auth-sql.conf.ext:

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf
}

userdb {
  driver = static
  args = uid=5000 gid=5000 home=/var/vmail/%d/%n allow_all_users=yes
}
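
A minimal sketch of the referenced /etc/dovecot/dovecot-sql.conf (the host, database, credentials, table and column names are all assumptions – adjust to your schema):

driver = mysql
# host, database, user and password below are placeholders
connect = host=localhost dbname=mail user=mailuser password=secret
default_pass_scheme = MD5-CRYPT
# table and column names are assumptions
password_query = SELECT email AS user, password FROM users WHERE email = '%u'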

Nov 7 2011

Function ereg_replace() is deprecated

instead of
$desc = ereg_replace(' +', ' ', $desc); //reduce all multiple-space strings to single space
write
$desc = preg_replace('/\s\s+/', ' ', $desc); //reduce all multiple-space strings to single space
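
Note that the '/\s\s+/' pattern also collapses runs of tabs and newlines, not only spaces; if you want the exact behaviour of the old pattern, the literal equivalent is preg_replace('/ +/', ' ', $desc).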