This assumes you have already installed the prerequisites:

yum install bison flex openssl openssl-devel fuse-devel fuse python-ctypes

Read the background in the GlusterFS admin documentation before you start.
GlusterFS has its own CLI:

gluster volume help | grep "something"

Base3.ashish.com (Gluster Server)

// check peer status
gluster peer status
// probe the other machine (NOTE: do not probe localhost)
gluster peer probe base2.ashish.com
// create the volume (more bricks can be added later with "add-brick")
gluster volume create cloud-share transport tcp base2.ashish.com:/cloud
// start the volume created above
gluster volume start cloud-share
// add another brick (on this machine or any other)
gluster volume add-brick cloud-share base3.ashish.com:/cloud/
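
To confirm the brick was actually added, the volume can be inspected (same volume name as above):

gluster volume info cloud-share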

// Rebalancing content (only required if you are adding a brick while the existing bricks already hold a large amount of data)

gluster volume rebalance cloud-share fix-layout start
gluster volume rebalance cloud-share fix-layout status
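
Note that fix-layout only rewrites the directory layout so new files can land on the new brick; if you also want existing files migrated onto it, a full rebalance can be run (a sketch, same volume name as above):

gluster volume rebalance cloud-share start
gluster volume rebalance cloud-share status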

IPTABLES for Glusterfs

# Gluster Server

-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT

Base1 (Gluster client, i.e. the machine that wants to expand its space by using those Gluster bricks)

// confirm the peers are connected and the daemon is running
gluster peer status
service glusterd restart
// first test-mount of the volume
mount -t glusterfs base3.ashish.com:/cloud-share /data
// set the existing directory aside
mkdir filesys
mv filesys filesys_orig
umount /data/
// remount the volume at its final location and copy the old data in
mkdir /filesys; mount -t glusterfs base3.ashish.com:/cloud-share /filesys
cd /filesys
rsync -Pzarv /filesys_orig/* /filesys/
mount -o remount /filesys
umount -l /filesys
mount -t glusterfs base3.ashish.com:/cloud-share /filesys
// quick write test on the mounted volume
touch 51gbwala
// the same volume can also be mounted via the other server
umount -l /filesys
mount -t glusterfs base2.ashish.com:/cloud-share /filesys
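
To make the client mount survive a reboot, an /etc/fstab entry along these lines can be added (a sketch, using the same server, volume and mount point as above), after which a plain mount -a picks it up:

base3.ashish.com:/cloud-share  /filesys  glusterfs  defaults,_netdev  0 0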

Setting quota:

gluster volume quota cloud-share enable
gluster volume quota cloud-share limit-usage / 1GB

gluster volume quota cloud-share list
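
Quotas can also be set on a directory inside the volume; here /projects is just a hypothetical path relative to the volume root:

gluster volume quota cloud-share limit-usage /projects 10GB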

Problems: Potential Solutions

Uuid: 00000000-0000-0000-0000-000000000000

First detach the new node(s) using
gluster peer detach <hostname>
Check "service iptables status" => make sure it is not a firewall issue.
Ping-test the relevant hosts.
Restart the daemon: /etc/init.d/glusterd restart
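
A minimal recovery sequence, assuming base2.ashish.com is the peer showing the all-zero UUID:

service glusterd restart
gluster peer detach base2.ashish.com
gluster peer probe base2.ashish.com
gluster peer status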

GlusterFS: {path} or a prefix of it is already part of a volume

setfattr -x trusted.glusterfs.volume-id /path/to/share
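
If clearing the volume-id attribute alone is not enough, the commonly suggested fuller cleanup looks like this (a sketch; /path/to/share stands for your brick directory, and this wipes Gluster's internal metadata there, so only do it on a brick you intend to recreate):

setfattr -x trusted.glusterfs.volume-id /path/to/share
setfattr -x trusted.gfid /path/to/share
rm -rf /path/to/share/.glusterfs
service glusterd restart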

Problem
configure: error: OpenSSL crypto library is required to build glusterfs

Solution

yum install openssl
yum install openssl-devel
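
Once the OpenSSL headers are in place, the build can simply be retried (a sketch of the usual autotools sequence; exact options depend on your source tree):

./configure
make
make install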

And sometimes patience is the keyword. LOL
(I would recommend turning iptables off first, making sure everything works, and only then turning iptables back on with the appropriate rules.)

GlusterFS gets weird if you delete and recreate volumes again and again; you might have to change the volume name, the share directory, and so on….

###############################################################

Removing a brick.

If you want to remove a brick (one of the servers):

gluster volume info all

Pay attention to the Volume Name and the brick name you want to remove.


i.e. gluster volume remove-brick [Volume name] [brick name]
gluster volume remove-brick cloud-share base2.ashish.com:/share

Note: the client will stop sending data to this particular brick whether the volume is replicated, striped or distributed. However, there is no need to worry about retrieving the files; they will still be present at the brick path on the server.
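
On newer GlusterFS releases remove-brick is a staged operation, so data can be migrated off the brick before it is finally dropped; a sketch using the same volume and brick:

gluster volume remove-brick cloud-share base2.ashish.com:/share start
gluster volume remove-brick cloud-share base2.ashish.com:/share status
gluster volume remove-brick cloud-share base2.ashish.com:/share commit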

###########################################################################

[It is always recommended to take a manual backup and rsync it to the mount point. I have not run a really big test, but changing the Gluster volume type might lose data integrity (converting striped to replicated could be fine, but I personally doubt replicated to striped).]

Changing storage type: from distributed to striped.

We will have to stop the volume and delete it (no need to worry, this will not remove data from the bricks).

gluster volume stop cloud-share
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume cloud-share has been successful

gluster volume delete cloud-share

[root@base3 /]# gluster volume info all
[This should output]
No volumes present

[root@base3 /]# gluster volume create cloud-share stripe 2 transport tcp base2.ashish.com:/share base3.ashish.com:/share
Creation of volume cloud-share has been successful. Please start the volume to access data.

[root@base3 /]# gluster volume start cloud-share
Starting volume cloud-share has been successful

If you want, umount -l /newmount and then mount -a, assuming you have edited fstab accordingly.

All data written through the mount should now be striped across the bricks.
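
One way to sanity-check the striping (a sketch with a hypothetical test file; striped bricks hold sparse files, so du reflects each brick's actual share better than ls):

# on the client, write a test file through the mount
dd if=/dev/zero of=/newmount/stripe-test bs=1M count=512
# on each server, check how much of it actually landed on the brick
du -sh /share/stripe-test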

// To be on the safe side
rsync -Pzarv /share/* /share_bak/

// remove everything, since we have it in share_bak
rm -rf /share/*

Now those files can be rsynced back to the mounted location to retrieve them, or ignored, depending on how critical the data is. Of course, it is assumed that we plan early whether we want to stripe or distribute, rather than converting after the volume is already in use.

To sum up, data now kept in the mount point should be spread almost equally across the bricks.

###########################################################################

Changing storage type: from distributed to replicated

Same as before when moving from distributed to replicated: back up your stuff however works best given the space you have available.

// back up to wherever you have got space
mkdir /newmount_bak; rsync -Pzarv /newmount/* /newmount_bak/

###########################################################################

gluster volume stop cloud-share
gluster volume delete cloud-share
gluster volume create cloud-share replica 2 transport tcp base2.ashish.com:/share base3.ashish.com:/share
gluster volume start cloud-share
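
Once the replica volume is up and the data has been copied back in through the mount, newer GlusterFS versions (3.3+) let you trigger and check self-heal to make sure both bricks hold a full copy (a sketch):

gluster volume heal cloud-share full
gluster volume heal cloud-share info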

##############################################################################

iptables tips and tricks

# Check statistics

iptables -L INPUT -nvx

# All open for certain subnet
iptables -A INPUT -s 111.11.111.0/24 -j ACCEPT

# gluster Client
-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --sport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --sport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT

# Gluster Server
-A INPUT -p tcp -m tcp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
-A INPUT -p udp -m udp -s 111.11.111.0/24 --dport 24007:24024 -j ACCEPT
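
On RHEL/CentOS-style systems the running rules can be persisted so they survive a reboot (a sketch; adjust for your distro):

# writes the current rules to /etc/sysconfig/iptables
service iptables save
service iptables restart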