Upgrade all perl modules via CPAN
cpan -r
or
perl -MCPAN -e 'CPAN::Shell->install(CPAN::Shell->r)'
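To just list the outdated modules first, without installing anything, the same r report can be run on its own:
perl -MCPAN -e 'CPAN::Shell->r'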
Error: java.io.FileNotFoundException: /logs/debug.log (No such file or directory)
Solution:
Edit the file "./lib/log4j.xml" and replace "${openfireHome}" with the absolute path to the logs directory.
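For context, a sketch of the edit in ./lib/log4j.xml; the appender and parameter names may differ between Openfire versions, and /opt/openfire is only an example install path:
<!-- before -->
<param name="File" value="${openfireHome}/logs/debug.log"/>
<!-- after -->
<param name="File" value="/opt/openfire/logs/debug.log"/>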
Add the following code to
/usr/share/fail2ban/server/datedetector.py
# YYMMDD HH:MM:SS
template = DateStrptime()
template.setName("YearMonthDay Hour:Minute:Second")
template.setRegex(r"\d{2}\d{2}\d{2} \d{2}:\d{2}:\d{2}")
template.setPattern("%y%m%d %H:%M:%S")
self.__templates.append(template)
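The template targets the old MySQL error-log timestamp format; an illustrative line it should match (values are made up):
130812 14:03:22 [Warning] Access denied for user 'root'@'203.0.113.5' (using password: YES)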
# cat /etc/fail2ban/filter.d/mysqld.conf
# Fail2Ban configuration file
#
# Author: Darel
#
[Definition]
failregex = Access denied for user '.*'@'<HOST>'
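A minimal jail entry to go with this filter is sketched below; the logpath is an assumption and must point at your actual MySQL error log:
[mysqld]
enabled  = true
filter   = mysqld
action   = iptables[name=mysqld, port=3306, protocol=tcp]
logpath  = /var/log/mysql/error.log
maxretry = 5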
perl -MCPAN -e shell
install CPANPLUS
exit
perl -MCPANPLUS -e shell
help
[General]
h | ? # display help
q # exit
v # version information
[Search]
a AUTHOR ... # search by author(s)
m MODULE ... # search by module(s)
f MODULE ... # list all releases of a module
o [ MODULE ... ] # list installed module(s) that aren't up to date
w # display the result of your last search again
[Operations]
i MODULE | NUMBER ... # install module(s), by name or by search number
i URI | ... # install module(s), by URI (ie http://foo.com/X.tgz)
t MODULE | NUMBER ... # test module(s), by name or by search number
u MODULE | NUMBER ... # uninstall module(s), by name or by search number
d MODULE | NUMBER ... # download module(s)
l MODULE | NUMBER ... # display detailed information about module(s)
r MODULE | NUMBER ... # display README files of module(s)
c MODULE | NUMBER ... # check for module report(s) from cpan-testers
z MODULE | NUMBER ... # extract module(s) and open command prompt in it
[Local Administration]
b # write a bundle file for your configuration
s program [OPT VALUE] # set program locations for this session
s conf [OPT VALUE] # set config options for this session
s mirrors # show currently selected mirrors
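For example, to upgrade a few outdated modules from inside the CPANPLUS shell, using the numbers printed by the o listing (the numbers below are only placeholders):
o
i 1 2 3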
DELIMITER |
CREATE TRIGGER `klienci_before_insert` BEFORE INSERT ON `klienci`
FOR EACH ROW
SET NEW.sha1 = SHA1(CONCAT(NEW.klient_id, NEW.time_add));
|
DELIMITER ;
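For reference, a minimal sketch of the klienci columns the trigger assumes; the column names come from the trigger, the types are only illustrative:
CREATE TABLE klienci (
  klient_id INT NOT NULL,
  time_add DATETIME NOT NULL,
  sha1 CHAR(40)
);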
mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
pvcreate /dev/md1
vgcreate vg-server1 /dev/md1
lvcreate -L4g -nlv-home vg-server1
lvcreate -L2g -nlv-var vg-server1
lvcreate -L1g -nlv-tmp vg-server1
The -L option specifies the size of the volume.
lvcreate -l 100%FREE -nlv-home vg (creates a volume using all of the unallocated space in the volume group)
lvextend -L+1G /dev/vg-server1/lv-var
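After lvextend the filesystem inside the volume has to be grown as well; a sketch assuming ext4 (which supports online growing):
resize2fs /dev/vg-server1/lv-var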
Boot from any Live CD
mkdir /myraid
mount /dev/md0 /myraid
mount /dev/sda1 /myraid/boot
mount --bind /dev /myraid/dev
mount -t devpts devpts /myraid/dev/pts
mount -t proc proc /myraid/proc
mount -t sysfs sysfs /myraid/sys
chroot /myraid
lilo
exit
umount /dev/md0
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1
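To confirm the metadata is really gone, examine each partition; mdadm should now report that no md superblock is present:
mdadm --examine /dev/sda1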
1. In this example the RAID consists of two disks (/dev/sda, /dev/sdb) and four RAID-1 arrays (md0, md1, md2, md3)
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb5[1] sda5[0]
      20979264 blocks [2/2] [UU]
md2 : active raid1 sda6[0] sdb6[1]
      20979968 blocks [2/2] [UU]
md3 : active raid1 sdb7[1] sda7[0]
      880264320 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      20979648 blocks [2/2] [UU]
unused devices: <none>
2. If the RAID has failed, you will see:
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sda5[0]
      20979264 blocks [2/1] [U_]
md2 : active raid1 sda6[0]
      20979968 blocks [2/1] [U_]
md3 : active raid1 sda7[0]
      880264320 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
      20979648 blocks [2/1] [U_]
unused devices: <none>
3. Disconnect the failed disk
4. Connect the new disk
5. Look in dmesg to find the new disk's device letter
6. Remove the failed disk from the RAID:
mdadm /dev/md0 --fail detached --remove detached
mdadm /dev/md1 --fail detached --remove detached
mdadm /dev/md2 --fail detached --remove detached
mdadm /dev/md3 --fail detached --remove detached
7. Copy the partition structure from the old GOOD disk to the new one:
sfdisk -d /dev/sda | sfdisk /dev/sde
8. Add the new disk to the RAID:
mdadm --add /dev/md0 /dev/sde1
mdadm --add /dev/md1 /dev/sde5
mdadm --add /dev/md2 /dev/sde6
mdadm --add /dev/md3 /dev/sde7
9. Rebuilding starts automatically. You can check the RAID status with the command below (see the watch sketch after this list for continuous monitoring):
cat /proc/mdstat
10. and WHEN ALL PARTITIONS ARE REBUILT run:
mdadm --detail --scan >> /etc/mdadm.conf
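To follow a rebuild continuously instead of re-running cat by hand, a simple sketch:
watch -n 5 cat /proc/mdstat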
The existing filesystems are on /dev/sda:
/dev/root /
/dev/sda5 /var
/dev/sda6 /usr
/dev/sda8 /home
Connect the new disk (for example /dev/sdb).
Copy the partition structure from sda to sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
or, if the disks differ in size, create partitions of the same size (or bigger) on sdb and change their type to fd (Linux raid autodetect), as sketched below.
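A sketch of the interactive fdisk steps for changing a partition type (repeat for every partition on sdb):
fdisk /dev/sdb
# t  - change a partition's type
# fd - Linux raid autodetect
# w  - write the table and exit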
Change the disk's filesystem UUID (as reported by blkid):
tune2fs -U random /dev/sdb
Next, create the single-disk RAID-1 array. Note that the "missing" keyword is specified as one of our devices; we are going to fill in this missing device later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
or
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb1
and for other partitions
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb5
mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb6
mdadm --create /dev/md3 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb7
cat /proc/mdstat now shows the just-created arrays.
Use the file system of your preference here. I'll use ext4 for this guide.
mkfs -t ext4 -j -L RAID-ONE /dev/md0
mkfs -t ext4 -j -L RAID-ONE /dev/md1
mkfs -t ext4 -j -L RAID-ONE /dev/md2
mkfs -t ext4 -j -L RAID-ONE /dev/md3
Initialize the swap area on the swap partition:
mkswap -L NEW-SWAP /dev/sdb2
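The new swap area can be enabled right away:
swapon /dev/sdb2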
The new RAID-1 arrays are ready to start accepting data, so now we need to mount them and copy everything from the old system to the new one:
mkdir /tmp/md0 /tmp/md1 /tmp/md2 /tmp/md3
mount /dev/md0 /tmp/md0
mount /dev/md1 /tmp/md1
mount /dev/md2 /tmp/md2
mount /dev/md3 /tmp/md3
rsync -avxHAXS --delete --progress /home/ /tmp/md3
rsync -avxHAXS --delete --progress /var/ /tmp/md2
rsync -avxHAXS --delete --progress /usr/ /tmp/md1
rsync -avxHAXS --delete --progress / /tmp/md0
and add the following line to /etc/fstab on the new root filesystem (/tmp/md0/etc/fstab):
/dev/sdb2 swap swap defaults 0 0
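A sketch of how the complete fstab on the new root might look, following the device-to-mountpoint mapping used by the rsync commands above (filesystem type and options are illustrative):
/dev/md0   /      ext4   defaults   0 1
/dev/md1   /usr   ext4   defaults   0 2
/dev/md2   /var   ext4   defaults   0 2
/dev/md3   /home  ext4   defaults   0 2
/dev/sdb2  swap   swap   defaults   0 0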
Reboot with the option root=/dev/md0 (where the root filesystem is now located).
Optionally switch to single-user mode: telinit 1
First, open /dev/sda with fdisk and change all the partitions you want added to the array to type fd (Linux raid autodetect).
Then, for each degraded array, add the appropriate non-array device to it:
mdadm /dev/md0 -a /dev/sda1
(wait for finish recovery)
(umount /dev/sda5)
mdadm /dev/md1 -a /dev/sda5
(wait for finish recovery)
umount /dev/sda6
mdadm /dev/md2 -a /dev/sda6
(wait for finish recovery)
umount /dev/sda8
mdadm /dev/md3 -a /dev/sda8
(wait for finish recovery)
Ensure your /etc/lilo.conf has the correct setup:
boot=/dev/md0
raid-extra-boot=mbr-only
root=/dev/md0
and type:
lilo
LILO will write boot information to each of the individual RAID devices' boot sectors, so if either /boot or your root partition is on a failed disk, you will still be able to boot.
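Put together, a minimal lilo.conf sketch; the kernel image path and label are illustrative and depend on your system:
boot=/dev/md0
raid-extra-boot=mbr-only
root=/dev/md0
image=/boot/vmlinuz
  label=linux
  read-only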
Create /etc/mdadm/mdadm.conf.
In /etc/mdadm/mdadm.conf put the line:
DEVICE /dev/sda* /dev/sdb*
and WHEN ALL PARTITIONS ARE REBUILT run:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
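If the system boots through an initramfs (for example on Debian or Ubuntu), regenerate it afterwards so the updated mdadm.conf is included; a sketch:
update-initramfs -u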