Tikejhya: Ashish Nepal

Knowledgebase

Category: DISK

LVM howtos

# How to create LVM

pvcreate /dev/xvdb
vgcreate vg_name /dev/xvdb
lvcreate -l 100%FREE -n lg_name vg_name
echo , | sfdisk /dev/mapper/vg_name-lg_name
mkfs.ext4 -F /dev/mapper/vg_name-lg_name
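
After formatting, the logical volume can be mounted; a minimal sketch, assuming a mount point of /mnt/data and the vg_name/lg_name placeholders used above:

mkdir -p /mnt/data
mount /dev/mapper/vg_name-lg_name /mnt/data
df -h /mnt/data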

# How to extend existing LVM disk space
Let's assume we have added a new disk to the EC2 instance.

pvcreate /dev/xvdc

Add the physical volume to the volume group via ‘vgextend’.

vgextend vg_name /dev/xvdc
e.g: vgextend disk2 /dev/xvdc

Allocate the physical volume to a logical volume (extend the volume size by your new disk size).

$ lvextend -l +100%FREE /dev/mapper/disk2-esdata

Resize the file system on the logical volume so it uses the additional space.

$ resize2fs /dev/mapper/disk2-esdata
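
To confirm the extra space is visible after the resize, a quick check (lvs shows the new LV size, df -h the grown filesystem):

lvs
df -h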

SysBench on CentOS – HowTo

cd /usr/local/src

wget http://sourceforge.net/projects/sysbench/files/latest/download -O sysbench-0.4.12.tar.gz
tar -xvzf sysbench-0.4.12.tar.gz
cd sysbench-0.4.12
libtoolize --force --copy
./autogen.sh
./configure
make

Problem:

/usr/bin/ld: cannot find -lmysqlclient_r
collect2: ld returned 1 exit status
make[2]: *** [sysbench] Error 1
make[2]: Leaving directory `/usr/local/src/sysbench-0.4.12/sysbench'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/sysbench-0.4.12/sysbench'
make: *** [all-recursive] Error 1

Solution:

Pass the MySQL client library path to the linker via the LDFLAGS -L flag:

export LDFLAGS='-L/usr/lib/mysql'
./configure
make
make install
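
If /usr/lib/mysql is not where the MySQL client library lives on your system, locate it first; a quick check (paths vary by distro and MySQL package, e.g. /usr/lib64/mysql on 64-bit CentOS):

find /usr -name 'libmysqlclient_r*' 2>/dev/null
# then point LDFLAGS at the directory found, for example:
# export LDFLAGS='-L/usr/lib64/mysql'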

Testing Parameters

CPU Benchmark

sysbench --test=cpu --cpu-max-prime=20000 run

Fileio

sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw prepare

DB Benchmark

sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=databasename --mysql-user=root --mysql-password=PASSWORD prepare
sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=databasename --mysql-user=root --mysql-password=PASSWORD --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run

# Ideally use a total file size larger than RAM

sysbench --test=fileio --file-total-size=30G prepare

sysbench --test=fileio --file-total-size=30G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
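
Once the fileio runs are finished, the test files can be removed with the cleanup stage (same --file-total-size as used for prepare):

sysbench --test=fileio --file-total-size=30G cleanup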

Ref Doc: http://sysbench.sourceforge.net/docs/

How to change mountpoint into new disk

# Create New directory to mount into newly added disk
mkdir /mnt/fordb

# Now mount the newly added disk to /mnt/fordb

mount /dev/sdx1 /mnt/fordb

# Now copy the content to the new location
# This doubles as a backup, since the content is copied onto the new disk

rsync -Pzarv /data/db/ /mnt/fordb/

# Unmount /mnt/fordb
# This prepares you to remount the disk at the desired location

umount /mnt/fordb

# Move the old content aside under a different name
# so that we can recreate /data/db and mount the new disk there

mv /data/db /data/db_bak

# Create directory with same name

mkdir /data/db

# mount into same directory

mount /dev/sdx1 /data/db
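
# To make this mount survive a reboot, a hedged fstab entry can be added
# (/dev/sdx1 and ext4 are assumptions; adjust to your disk and filesystem)

/dev/sdx1 /data/db ext4 defaults 0 0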

How to expand size of ext3 (non-LVM)

// Let's run df -h
// to see the size of the disks

#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
47G 19G 26G 43% /
/dev/sda1 99M 20M 75M 21% /boot
tmpfs 3.0G 0 3.0G 0% /dev/shm
/dev/sdb1 197G 182G 4.9G 98% /var

// OOPS, seems like we are running out of space on /dev/sdb1, i.e. /var,
// and this is a critical disk for me.

# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 6527 52323705 8e Linux LVM

Disk /dev/sdb: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 26108 209712478+ 83 Linux

OK, so since I expanded the disk from the cloud cPanel, I now have 268.4 GB of disk;
however, df -h still does not show the same amount.
That means I will have to delete and re-partition it.
Yes, delete. And yes, it won't delete your files, so no need to worry.

# fdisk /dev/sdb

The number of cylinders for this disk is set to 32635.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 26108 209712478+ 83 Linux

Command (m for help): d
Selected partition 1

// I have deleted partition 1 on /dev/sdb
// It could be sdb2 in your case, which would be partition 2; sdb3 would be partition 3, and so on.

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-32635, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-32635, default 32635):
Using default value 32635

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

# resize2fs /dev/sdb1
resize2fs 1.39 (29-May-2006)
The filesystem is already 52428119 blocks long. Nothing to do!

Yes here we will need to reboot.
# reboot

Broadcast message from root (pts/0) (Sat Apr 6 22:18:38 2013):

The system is going down for reboot NOW!

OOPS, not there yet. After the reboot df -h still shows the old size, because the filesystem itself has not been resized yet; OK, let's not worry.
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
47G 19G 26G 43% /
/dev/sda1 99M 20M 75M 21% /boot
tmpfs 3.0G 0 3.0G 0% /dev/shm
/dev/sdb1 197G 182G 4.9G 98% /var

# resize2fs /dev/sdb1
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/sdb1 is mounted on /var; on-line resizing required

Performing an on-line resize of /dev/sdb1 to 65535151 (4k) blocks.
The filesystem on /dev/sdb1 is now 65535151 blocks long.

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
47G 19G 26G 43% /
/dev/sda1 99M 20M 75M 21% /boot
tmpfs 3.0G 0 3.0G 0% /dev/shm
/dev/sdb1 247G 182G 52G 78% /var

Bingo ))
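
To sum up, a condensed sketch of the whole procedure (device /dev/sdb1 mounted on /var assumed, as above):

# fdisk /dev/sdb       (p to print, d to delete partition 1, n to recreate it over the full disk, w to write)
# reboot               (only needed if the kernel refuses to re-read the new partition table)
# resize2fs /dev/sdb1  (grows the ext3 filesystem online to fill the new partition)
# df -h                (confirm the new size)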

FIO Disk Benchmarking

FIO

wget http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/fio-2.0.7-1.el6.rf.x86_64.rpm
rpm -ivh fio-2.0.7-1.el6.rf.x86_64.rpm

Problem:
warning: fio-2.0.7-1.el6.rf.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
error: Failed dependencies:
libaio.so.1()(64bit) is needed by fio-2.0.7-1.el6.rf.x86_64
libaio.so.1(LIBAIO_0.1)(64bit) is needed by fio-2.0.7-1.el6.rf.x86_64
libaio.so.1(LIBAIO_0.4)(64bit) is needed by fio-2.0.7-1.el6.rf.x86_64

Solution:
yum install libaio

Sample run commands (found online; syntax is the same on all platforms):

1. Write out a 20GB file called fio.write.out in 64k chunks, 1 thread:
# fio --size=20g --bs=64k --rw=write --ioengine=sync --name=fio.write.out

2. Read 20GB from file fio.write.out in 64k chunks, 1 thread:
# fio --size=20g --bs=64k --rw=read --ioengine=sync --name=fio.write.out

3. Write out 4 x 10GB files in 64k chunks, 4 threads:
# fio --size=10g --bs=64k --rw=write --ioengine=sync --name=fio.write.out --numjobs=4

4. Read 10GB from files fio.write.out.* in 64k chunks, 4 threads:
# fio --size=10g --bs=64k --rw=read --ioengine=sync --name=fio.write.out --numjobs=4
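
A hedged extra sample for random mixed read/write, which is usually what matters for databases (the job name, size and queue depth here are arbitrary choices; requires libaio):
# fio --name=fio.randrw.out --size=1g --bs=4k --rw=randrw --ioengine=libaio --direct=1 --iodepth=16 --runtime=60 --time_based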

How to create reiserfs partition on centos

In order to create a reiserfs partition on CentOS you will need ELRepo.
(Download the correct version of the repo; in the sample below I have used the CentOS 6 one.)

rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm

Install reiserfs and utils

yum install kmod-reiserfs reiserfs-utils

Check for the free disk (I will assume /dev/sdb).
In my case I added a new disk in VMware Workstation and rebooted the server.

fdisk -l

Create the partition first.
fdisk /dev/sdb
n
p
1
Enter (accept the default first cylinder)
Enter (accept the default last cylinder)
w - write to finalize

Format with reiserfs
mkfs.reiserfs /dev/sdb1

Set a label with reiserfstune (the label drive_reiser is just an example)
reiserfstune -l drive_reiser /dev/sdb1

Make the directory that you want to mount it at.
mkdir /reiser

Edit fstab so it mounts at boot
vi /etc/fstab
/dev/sdb1 /reiser reiserfs defaults 0 0

Mount everything listed in fstab
mount -a
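
Quick sanity check that it is mounted as reiserfs
mount | grep reiserfs
df -hT /reiser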

fsck

Unable to boot CentOS

***An error occurred during the file system check.
***Dropping you to a shell; the system will reboot
***when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

Solution:
On many occasions when the filesystem is corrupted,
simply running `fsck` does the trick.
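
A minimal sketch from the maintenance shell (the device here is an assumption; run it against whichever filesystem the boot messages complain about, while it is unmounted):

fsck -y /dev/sdb1
reboot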

Glusterfs Centos

This assumes you have installed the prerequisites:

yum install bison flex openssl openssl-devel fuse-devel fuse python-ctypes

Read the background material in the GlusterFS admin documentation before you start.
GlusterFS has its own CLI:

gluster volume help | grep "something"

Base3.ashish.com (Gluster Server)

//checking status
gluster peer status
// probing the other machine (NOTE: do not probe localhost)
gluster peer probe base2.ashish.com
// Volume create (later we add more bricks with "add-brick")
gluster volume create cloud-share transport tcp base2.ashish.com:/cloud
// start the volume created before
gluster volume start cloud-share
// Adding another brick (on this or another machine)
gluster volume add-brick cloud-share base3.ashish.com:/cloud/

// Rebalancing content (only required if you are adding a brick and the existing bricks already hold a huge amount of content)

gluster volume rebalance cloud-share fix-layout start
gluster volume rebalance cloud-share fix-layout status

IPTABLES for Glusterfs

# Gluster Server

-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT

Base1 (the client, i.e. the machine from which I want to use the gluster storage for extra space)

gluster peer status
service glusterd restart
mount -t glusterfs base3.ashish.com:/cloud-share /data
mkdir filesys
mv filesys filesys_orig
umount /data/
mkdir /filesys;mount -t glusterfs base3.ashish.com:/cloud-share /filesys
cd /filesys
rsync -Pzarv /filesys_orig/* /filesys/
mount -o remount /filesys
umount -l /filesys
mount -t glusterfs base3.ashish.com:/cloud-share /filesys
touch 51gbwala
umount -l /filesys
mount -t glusterfs base2.ashish.com:/cloud-share /filesys
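
// To make the client mount persistent across reboots, a hedged /etc/fstab entry (server and mount point as above):
base3.ashish.com:/cloud-share /filesys glusterfs defaults,_netdev 0 0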

Setting a quota:

gluster volume quota cloud-share enable
gluster volume quota cloud-share limit-usage / 1GB

gluster volume quota cloud-share list

Problems: Potential Solutions

Uuid: 00000000-0000-0000-0000-000000000000

First detach the new node using:
gluster peer detach <hostname>
Check service iptables status => make sure it's not a firewall issue.
Ping test with the relevant hosts.
Restart glusterd: /etc/init.d/glusterd restart

GlusterFS: {path} or a prefix of it is already part of a volume

setfattr -x trusted.glusterfs.volume-id /path/to/share

Problem
configure: error: OpenSSL crypto library is required to build glusterfs

Solution

yum install openssl
yum install openssl-devel

And sometimes patience is the keyword. LOL
(I would recommend turning off iptables first and making sure everything works; if it does, start iptables again with the appropriate rules.)

GlusterFS gets weird if you try to delete and recreate volumes again and again; you might have to change the volume name, the share directory and so on.

###############################################################

Removing Brick.

If you want to remove a brick (one of the servers):

gluster volume info all

Pay attention to the Volume Name and the brick name you want to remove,

i.e. gluster volume remove-brick [Volume name] [brick name]
gluster volume remove-brick cloud-share base2.ashish.com:/share

Note: clients will stop sending data to this particular brick whether the volume is replicated, striped or distributed. However, there is no need to worry about retrieving files; they will still be present at the brick path.
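
On newer GlusterFS releases remove-brick is a staged operation rather than a single command; a hedged sketch with the same volume and brick names:

gluster volume remove-brick cloud-share base2.ashish.com:/share start
gluster volume remove-brick cloud-share base2.ashish.com:/share status
gluster volume remove-brick cloud-share base2.ashish.com:/share commit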

###########################################################################

[It is always recommended to make a manual backup and rsync to the mount point. I have not completed extensive tests, but changing the gluster volume type might lose data integrity (converting striped to replicated could be fine, but I
personally doubt replicated to striped).]

Changing Storage type: From Distributed to striped.

We will have to stop the volume and delete it (no need to worry, it will not remove data from the bricks).

gluster volume stop cloud-share
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume cloud-share has been successful

gluster volume delete cloud-share

[root@base3 /]# gluster volume info all
[This should output]
No volumes present

[root@base3 /]# gluster volume create cloud-share stripe 2 transport tcp base2.ashish.com:/share base3.ashish.com:/share
Creation of volume cloud-share has been successful. Please start the volume to access data.

[root@base3 /]# gluster volume start cloud-share
Starting volume cloud-share has been successful

If you want, umount -l /newmount and mount -a,
assuming that you have fstab edited.

All data should now be equally striped.

// To be on the safe side
rsync -Pzarv /share/* /share_bak

// remove everything as we have in share_bak
rm -rf /share/*

Now those files can be rsynced back to the mounted location to retrieve them, or ignored, depending on how critical the data is. Of course, ideally we plan early whether we want to stripe or distribute, rather than converting after the volume is already in use.

To sum up, data kept in the mount point should now be distributed almost equally across the bricks.

###########################################################################

Changing Storage type: From Distributed to replicated

Same as before: while moving from distributed to replicated, back things up however best fits the space you have available.

mkdir /newmount_bak; rsync -Pzarv /newmount/* /newmount_bak   # or rsync to anywhere you have space

###########################################################################

gluster volume create cloud-share replica 2 transport tcp base2.ashish.com:/share base3.ashish.com:/share
gluster volume start cloud-share

gluster volume stop cloud-share
gluster volume delete cloud-share
gluster volume create cloud-share stripe 2 transport tcp base2.ashish.com:/share base3.ashish.com:/share
gluster volume start cloud-share

##############################################################################

iptables tips and tricks

# Check statistics

iptables -L INPUT -nvx

# All open for certain subnet
iptables -A INPUT -s 111.11.111.0/24 -j ACCEPT

# gluster Client
-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --sport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --sport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT

# Gluster Server
-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
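
# To persist these rules across reboots on CentOS 6 (this assumes the iptables init script is in use)
service iptables save
service iptables restart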

How to identify if it's a hardlink

[root@localhost etc]# ls -i ~/index.html
664952 /root/index.html

[root@localhost etc]# ls -li index.html
664952 -rw-r--r-- 2 root root 45706 Apr 20 2012 index.html

A hardlink has the same inode number as the original file.
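
A small sketch to create a hardlink and list every name pointing at the same inode (the file names are just examples):

ln index.html index_copy.html
ls -li index.html index_copy.html
find . -samefile index.html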

Creating swapfile

To create a 2GB swap file, use 1024-byte blocks: 2048 MB * 1024 = 2097152 blocks.

(bs=1M count=2048 works as well)

Type this
dd if=/dev/zero of=/swapfile bs=1024 count=2097152

mkswap /swapfile

swapon /swapfile

vi /etc/fstab

/swapfile swap swap defaults 0 0
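
A couple of hedged extra steps: the swap file should not be world-readable (mkswap warns about insecure permissions), and it is worth confirming the swap is active:

chmod 600 /swapfile
swapon -s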
