...
In the above script, we assume that the filesystem is mounted on the multiple VMs created as explained in Deploy Message Passing Interface (MPI) (and/or GNU parallel) cluster. Edit the variables CLUSTERNAME, MINIONLASTIND, GLUSTERSERVERIP, GLUSTERSERVERNAME, and TARGETDIR to match your cluster configuration.
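For reference, a hedged example of how these variables might be set is shown below; every value (the cluster name ml-image, the server IP and name, and the mount point) is only an illustrative placeholder, not something to copy verbatim.

Code Block

# Illustrative values only -- replace every entry with your own configuration.
CLUSTERNAME="ml-image"                    # name of the client cluster
MINIONLASTIND="8"                         # minions ml-image-minion-0 ... ml-image-minion-8
GLUSTERSERVERIP="10.0.100.120"            # IP address of the Gluster server VM
GLUSTERSERVERNAME="gluster-test-master"   # host name of the Gluster server VM
TARGETDIR="/mnt/gluster"                  # mount point on the client VMs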
Case B: Filesystem with multiple Gluster servers (in the form of distributed volumes)
...
You need to deploy multiple VMs as Gluster servers. The basic procedure follows the tutorial given in Deploy Message Passing Interface (MPI) (and/or GNU parallel) cluster; however, you do not need to set up the MPI and NFS share directories explained in that tutorial. Use the cluster template KASI-Cluster-Basic-Block in the KASI Cluster Templates to produce a single VM cluster for the Gluster filesystem
with an extra block device (i.e., /dev/vdb). The recommended flavor is C2M8D40, and you need to estimate the required number of minions from the required Gluster volume size and the chosen extra block size. For example, if you need a 30 TB volume with a block size of 3 TB (i.e., 3072 GB) per VM, you need 9 minions as well as 1 master (i.e., 10 virtual machines in total).
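As a sketch of this arithmetic (using the example numbers from the text above; VOLUME_TB and BLOCK_TB are just illustrative variable names):

Code Block

# Sketch of the sizing arithmetic: total VMs = requested volume size / per-VM block size.
VOLUME_TB=30   # requested Gluster volume size in TB
BLOCK_TB=3     # extra block device size per VM in TB (3072 GB)
NVMS=$(( VOLUME_TB / BLOCK_TB ))   # 30 / 3 = 10 VMs in total
echo "Total VMs: ${NVMS} (1 master + $(( NVMS - 1 )) minions)"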
Step 2. Configuring the Gluster server with multiple bricks on the cluster VMs
The configuration of the Gluster VMs can be done with the provided script https://github.com/astromsshin/cloud_ex/blob/main/tool_setup_glusterfs_multiple_servers.sh in the GitHub repository https://github.com/astromsshin/cloud_ex. The script is given below. Edit the variables CLUSTERNAME and MINIONLASTIND in the script below for your own configuration of the Gluster cluster.
Code Block
#!/bin/bash
# Run this script with root permission
# on the master node of the GlusterFS cluster
# Block device
TARGETDEV="/dev/vdb"
# Directory serving the Gluster filesystem
TARGETDIR="/glusterfs/vol"
# Gluster filesystem volume available for clients
TARGETVOL="gvol"
# Name of the Gluster cluster
CLUSTERNAME="gluster-test"
MINIONLASTIND="8"
# Saving Gluster server information
HOSTNAMEFN="gluster_server_hostnames.txt"
HOSTIPFN="gluster_server_ips.txt"
# Install the server
echo "... install required software on ${CLUSTERNAME}-master"
apt install glusterfs-server -y
systemctl enable --now glusterd
for ind in $(seq 0 ${MINIONLASTIND})
do
echo "... install required software | ||
Code Block | ||
| ||
#!/bin/bash # Run this script with root permission # on the master node of the GlusterFS cluster # Directory serving the Gluster filesystem TARGETDIR="/glusterfs/vol" # Gluster filesystem volume available for clients TARGETVOL="gvol" # Name of the Gluster cluster CLUSTERNAME="glustercluster" MINIONLASTIND="0" # Install the server echo "... install on ${CLUSTERNAME}-master" apt install glusterfs-server -y systemctl enable --now glusterd for ind in $(seq 0 ${MINIONLASTIND}) do echo "... install on ${CLUSTERNAME}-minion-${ind}" ssh ${CLUSTERNAME}-minion-${ind} "apt install glusterfs-server -y; systemctl enable --now glusterd" done # Probing for ind in $(seq 0 ${MINIONLASTIND}) do echo "... probing ${CLUSTERNAME}-minion-${ind}" gluster peer probe ${CLUSTERNAME}-minion-${ind} done # Prepare the directory echo "... creating directory on ${CLUSTERNAME}-master" mkdir -p minion-${TARGETDIR} for ind in $(seq 0 ${MINIONLASTIND}) do echo "... creating directory on}" ssh ${CLUSTERNAME}-minion-${ind} " apt install ssh ${CLUSTERNAME}-minion-${ind} "mkdir -p ${TARGETDIR}glusterfs-server -y; systemctl enable --now glusterd" done # Informationecho glusterread --version gluster peer status # Produce the Gluster filesystem volume with the above directory VOLSTR="${CLUSTERNAME}-master:${TARGETDIR}" for ind in $(seq 0 ${MINIONLASTIND}) do VOLSTR="${VOLSTR} ${CLUSTERNAME}-minion-${ind}:${TARGETDIR}" done gluster volume create ${TARGETVOL} ${VOLSTR} force # Make the volume available gluster volume start ${TARGETVOL} # Information gluster volume info |
Step 3. Mounting the created Gluster filesystem on client VMs
The configuration of the Gluster firesystem affects how the client VMs mount and access the Gluster filesystem. The following script, which is available on the github repository https://github.com/astromsshin/cloud_ex as https://raw.githubusercontent.com/astromsshin/cloud_ex/main/tool_setup_glusterfs_client_all_nodes-multiple_servers.sh.
Code Block | ||
---|---|---|
| ||
#!/bin/bash # Name of the cluster which defines the clients' names CLUSTERNAME="ml-image" # Last integer index of the minions in the cluster MINIONLASTIND="8" # Directory name which is a mount point on clients TARGETDIR="/mnt/gluster" # IP of the Gluster server GLUSTERSERVERIP="10.0.100.120 10.0.100.191" # Name of the Gluster server GLUSTERSERVERNAME="test-basic-master test-basic-minion-0" # Updating /etc/hosts IPARRAY=($GLUSTERSERVERIP) NAMEARRAY=($GLUSTERSERVERNAME) # ... Master for (( i=0; i<=${#IPARRAY[@]}; i++ )) do echo ${IPARRAY[$i]} ${NAMEARRAY[$i]} >> /etc/hosts done # ... Minion for ind in $(seq 0 ${MINIONLASTIND}) do for (( i=0; i<=${#IPARRAY[@]}; i++ )) do ssh ${CLUSTERNAME}p "[CHECK] Are softwares installed successfully on all nodes? [YyNn]" -n 1 -r echo if [[ ! $REPLY =~ ^[Yy]$ ]] then echo "[STOP] please, re-run the script to make sure that all nodes have the softwares." exit 1 fi # Probing for ind in $(seq 0 ${MINIONLASTIND}) do echo "... probing ${CLUSTERNAME}-minion-${ind}" gluster peer probe ${CLUSTERNAME}-minion-${ind} done # Prepare the block device with the XFS filesystem echo "... preparing the block dev ${TARGETDEV} on ${CLUSTERNAME}-master" umount -f ${TARGETDEV} mkfs.xfs -f -i size=512 ${TARGETDEV} for ind in $(seq 0 ${MINIONLASTIND}) do echo "... preparing the block dev ${TARGETDEV} on ${CLUSTERNAME}-minion-${ind}" ssh ${CLUSTERNAME}-minion-${ind} "umount -f ${TARGETDEV}; mkfs.xfs -f -i size=512 ${TARGETDEV}" done # Prepare the directory echo "... creating directory on ${CLUSTERNAME}-master" mkdir -p ${TARGETDIR} for ind in $(seq 0 ${MINIONLASTIND}) do echo "... creating directory on ${CLUSTERNAME}-minion-${ind}" ssh ${CLUSTERNAME}-minion-${ind} "echo ${IPARRAY[$i]}mkdir -p ${NAMEARRAY[$i]} >> /etc/hostsTARGETDIR}" done done # RestMount ofthe tasksblock device RUNCMD="apt -y install glusterfs-client; mkdirecho "... mounting ${TARGETDEV} on ${TARGETDIR}; CLUSTERNAME}-master" mount -t glusterfsxfs ${NAMEARRAY[0]}:/gvolTARGETDEV} ${TARGETDIR}; chmod -R 777 for ind in $(seq 0 ${TARGETDIRMINIONLASTIND}" # ... Master) do echo "... setuping mounting ${TARGETDEV} on ${CLUSTERNAME}-master-minion-${ind}" echo $RUNCMD | bash # ... Minion for ind in $(seq 0 ${MINIONLASTIND}) do echo "... setuping on ${CLUSTERNAME}-minion-${ind}" sshssh ${CLUSTERNAME}-minion-${ind} "mount -t xfs ${TARGETDEV} ${TARGETDIR}" done # Information gluster --version gluster peer status # Produce the Gluster filesystem volume with the above directory VOLSTR="${CLUSTERNAME}-master:${TARGETDIR}" for ind in $(seq 0 ${MINIONLASTIND}) do VOLSTR="${VOLSTR} ${CLUSTERNAME}-minion-${ind}:${TARGETDIR}" done gluster volume create ${TARGETVOL} "${RUNCMD}" done |
...
${VOLSTR} force
# Make the volume available
gluster volume start ${TARGETVOL}
# Information
# (optional)
gluster volume info
gluster volume status
echo "Use ${HOSTNAMEFN} for clients"
gluster volume info | grep "^Brick[0-9]*:" | grep -v 'Bricks:' | awk -F':' '{print $2}' | xargs | tee ${HOSTNAMEFN}
echo "Use ${HOSTIPFN} for clients"
gluster volume info | grep "^Brick[0-9]*:" | grep -v 'Bricks:' | awk -F':' '{print $2}' | xargs -I {} grep -m 1 "{}" /etc/hosts | cut -d' ' -f1 | xargs | tee ${HOSTIPFN}
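The script ends by writing the server host names and IP addresses to the files named in HOSTNAMEFN and HOSTIPFN (gluster_server_hostnames.txt and gluster_server_ips.txt); you will need these values when mounting the volume in Step 3. As a hedged sketch, you could copy the two files to the master node of your client cluster like this (the host name ml-image-master is only an illustrative placeholder for your client master):

Code Block

# Sketch: copy the generated server lists to the client cluster's master node.
# "ml-image-master" is a placeholder; use the actual host name or IP of your client master.
scp gluster_server_hostnames.txt gluster_server_ips.txt ml-image-master:~/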
Step 3. Mounting the created Gluster filesystem on client VMs
The configuration of the Gluster filesystem affects how the client VMs mount and access it. The following script, which is available in the GitHub repository https://github.com/astromsshin/cloud_ex as https://raw.githubusercontent.com/astromsshin/cloud_ex/main/tool_setup_glusterfs_client_all_nodes-multiple_servers.sh, should be executed on the master node of your cluster of client VMs. Edit the variables CLUSTERNAME, MINIONLASTIND, and TARGETDIR depending on your cluster configuration and the directory name you want to use as the mount point.
Code Block
#!/bin/bash
# Name of the cluster which defines the clients' names
CLUSTERNAME="lice"
# Last integer index of the minions in the cluster
MINIONLASTIND="14"
# Directory name which is a mount point on clients
TARGETDIR="/mnt/gluster-input"
# IP of the Gluster server
GLUSTERSERVERIP="10.0.110.167 10.0.110.92 10.0.110.124 10.0.110.68 10.0.110.104 10.0.110.172 10.0.110.44 10.0.110.147 10.0.110.67 10.0.110.185 10.0.110.31 10.0.110.225"
# Name of the Gluster server
GLUSTERSERVERNAME="gluster-input-master gluster-input-minion-0 gluster-input-minion-1 gluster-input-minion-2 gluster-input-minion-3 gluster-input-minion-4 gluster-input-minion-5 gluster-input-minion-6 gluster-input-minion-7 gluster-input-minion-8 gluster-input-minion-9 gluster-input-minion-10"
IPARRAY=($GLUSTERSERVERIP)
NAMEARRAY=($GLUSTERSERVERNAME)
# Deleting existing entries in /etc/hosts
echo "Deleting existing entries in /etc/hosts"
# ... Master
for (( i=0; i<${#IPARRAY[@]}; i++ ))
do
sed -i "/ ${NAMEARRAY[$i]}$/d" /etc/hosts
sed -i "/ ${IPARRAY[$i]}$/d" /etc/hosts
done
# ... Minion
for ind in $(seq 0 ${MINIONLASTIND})
do
for (( i=0; i<${#IPARRAY[@]}; i++ ))
do
ssh ${CLUSTERNAME}-minion-${ind} "sed -i \"/${NAMEARRAY[$i]}/d\" /etc/hosts; sed -i \"/${IPARRAY[$i]}/d\" /etc/hosts"
done
done
# Updating /etc/hosts
echo "Adding entries in /etc/hosts"
# ... Master
for (( i=0; i<${#IPARRAY[@]}; i++ ))
do
echo ${IPARRAY[$i]} ${NAMEARRAY[$i]} >> /etc/hosts
done
# ... Minion
for ind in $(seq 0 ${MINIONLASTIND})
do
for (( i=0; i<${#IPARRAY[@]}; i++ ))
do
ssh ${CLUSTERNAME}-minion-${ind} "echo ${IPARRAY[$i]} ${NAMEARRAY[$i]} >> /etc/hosts"
done
done
# Rest of tasks
echo "Running the relevant commands"
RUNCMD="apt -y install glusterfs-client; mkdir -p ${TARGETDIR}; mount -t glusterfs ${NAMEARRAY[0]}:/gvol ${TARGETDIR}; chmod -R 777 ${TARGETDIR}"
# ... Master
echo "... setuping on ${CLUSTERNAME}-master"
echo $RUNCMD | bash
# ... Minion
for ind in $(seq 0 ${MINIONLASTIND})
do
echo "... setuping on ${CLUSTERNAME}-minion-${ind}"
ssh ${CLUSTERNAME}-minion-${ind} "${RUNCMD}"
done
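After the script has run, a quick sanity check (a sketch, assuming the same CLUSTERNAME, MINIONLASTIND, and TARGETDIR values as above) is to confirm that the volume is mounted on the master and on every minion:

Code Block

# Sketch: confirm the Gluster volume is mounted on every client node.
CLUSTERNAME="lice"
MINIONLASTIND="14"
TARGETDIR="/mnt/gluster-input"
df -h ${TARGETDIR}
for ind in $(seq 0 ${MINIONLASTIND})
do
ssh ${CLUSTERNAME}-minion-${ind} "df -h ${TARGETDIR}"
done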
Since there are multiple Gluster servers, you need to enter their IP addresses and names in the above script by editing the variables GLUSTERSERVERIP and GLUSTERSERVERNAME. You can find the IP addresses and names in the cloud dashboard or in the files generated in the step of configuring the Gluster cluster (gluster_server_ips.txt and gluster_server_hostnames.txt).
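If you copied gluster_server_hostnames.txt and gluster_server_ips.txt to the client master as sketched in Step 2, one possible way (a sketch, assuming the two files are in the current directory) to obtain the two variable values is:

Code Block

# Sketch: build the space-separated lists from the files written in Step 2.
# Each file contains a single line of space-separated entries.
GLUSTERSERVERNAME="$(cat gluster_server_hostnames.txt)"
GLUSTERSERVERIP="$(cat gluster_server_ips.txt)"
echo "GLUSTERSERVERNAME=\"${GLUSTERSERVERNAME}\""
echo "GLUSTERSERVERIP=\"${GLUSTERSERVERIP}\""

Paste the printed values into the corresponding variables of the client script.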
Useful Tips
- When you need to unmount the Gluster filesystems on the client machines, you can use the https://github.com/astromsshin/cloud_ex/blob/main/tool_umount_fs_all_nodes.sh script.
- There are several Gluster-related commands which help you figure out the current status; a few common examples are shown below. See https://docs.gluster.org/en/latest/CLI-Reference/cli-main/ . You have to log in to the Gluster cluster machines in order to execute the Gluster commands. Check https://data.kasi.re.kr/confluence/x/PoDVAg about how to access virtual machines.
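For example, the following standard Gluster CLI commands (run on one of the Gluster server VMs, with the volume name gvol used above) report the cluster and volume status:

Code Block

# Standard Gluster status commands (run with root permission on a Gluster server VM).
gluster peer status          # peers that have been probed into the trusted pool
gluster volume list          # names of the defined volumes
gluster volume info gvol     # configuration and brick list of the volume
gluster volume status gvol   # per-brick process and port status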