This page explains how to set up Gluster network filesystems that can be mounted by multiple virtual machines, in particular by the nodes of an MPI (or GNU Parallel) cluster.
If you have questions or suggestions about this tutorial page and related problems, please contact Min-Su Shin.
Case A: Filesystem with a single Gluster server
A Gluster filesystem can be prepared with a single server and its disk space to serve multiple clients. In this case A, we assume that a single Gluster server is used by multiple VMs (virtual machines) belonging to a cluster.
Step 1. Creating a single VM
Following KASI Science Cloud : VM instances#Step3.CreateaVMinstance, create a single VM with a flavor that fits your requirements, such as the required size of the network filesystem. This VM will serve the Gluster filesystem. Since the Gluster server does not need much memory, the recommended flavor is large-CPU + small-memory for a given disk size.
Step 2. Configuring the Gluster server
On the created VM, we install the Gluster server program and configure a Gluster filesystem. For your convenience, a shell script is available that should be executed by the root account on the server VM. See https://github.com/astromsshin/cloud_ex and the script https://github.com/astromsshin/cloud_ex/blob/main/tool_setup_glusterfs_single_server.sh, which is presented below:
```bash
#!/bin/bash

# Directory serving the Gluster filesystem
TARGETDIR="/glusterfs/vol"
# Gluster filesystem volume available for clients
TARGETVOL="gvol"
# Name of the Gluster server
VMNAME="test-gl-vm"

# Install the server
apt install glusterfs-server -y
systemctl enable --now glusterd
# Information (optional)
#gluster --version
#systemctl status glusterd

# Prepare the directory
mkdir -p ${TARGETDIR}
# Produce the Gluster filesystem volume with the above directory
gluster volume create ${TARGETVOL} ${VMNAME}:${TARGETDIR} force
# Make the volume available
gluster volume start ${TARGETVOL}
# Information (optional)
#gluster volume info
#gluster volume status
```
Step 3. Mounting the created Gluster filesystem on client VMs
The created network filesystem can be mounted on client VMs by using the mount command with the required options. A simple script is available as https://github.com/astromsshin/cloud_ex/blob/main/tool_setup_glusterfs_client_all_nodes-single_server.sh in the GitHub repository https://github.com/astromsshin/cloud_ex. The script, which needs to be executed on the master node of the client cluster, is given below:
```bash
#!/bin/bash

# Name of the cluster which defines the clients' names
CLUSTERNAME="ml-image"
# Last integer index of the minions in the cluster
MINIONLASTIND="8"
# IP of the Gluster server
GLUSTERSERVERIP="10.0.100.150"
# Name of the Gluster server
GLUSTERSERVERNAME="test-gl-vm"
# Directory name which is a mount point on clients
TARGETDIR="/mnt/gluster"

RUNCMD="apt -y install glusterfs-client; echo ${GLUSTERSERVERIP} ${GLUSTERSERVERNAME} >> /etc/hosts; mkdir ${TARGETDIR}; mount -t glusterfs ${GLUSTERSERVERNAME}:/gvol ${TARGETDIR}; chmod -R 777 ${TARGETDIR}"

echo "... setting up on ${CLUSTERNAME}-master"
echo $RUNCMD | bash
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... setting up on ${CLUSTERNAME}-minion-${ind}"
  ssh ${CLUSTERNAME}-minion-${ind} "${RUNCMD}"
done
```
In the above script, we assume that the filesystem is mounted on multiple VMs created as explained in Deploy Message Passing Interface (MPI) (and/or GNU parallel) cluster. Edit the variables CLUSTERNAME, MINIONLASTIND, GLUSTERSERVERIP, GLUSTERSERVERNAME, and TARGETDIR for your cluster configuration.
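Note that the mount made by the script does not survive a reboot. If you want the mount to be restored automatically, one option is an /etc/fstab entry on each client; the following is a sketch using the example server name and mount point from the scripts above (adjust them to your configuration):

```
test-gl-vm:/gvol /mnt/gluster glusterfs defaults,_netdev 0 0
```

The _netdev option delays the mount until the network is up, which is necessary for network filesystems such as Gluster.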
Case B: Filesystem with multiple Gluster servers (in the form of distributed volumes)
We explain how to set up the Gluster filesystem in the form of distributed volumes by using multiple Gluster servers. As depicted in the following figure and explained in https://docs.gluster.org/en/latest/Administrator-Guide/Setting-Up-Volumes/, a distributed volume spreads files across multiple Gluster servers.
Step 1. Creating a basic cluster of VMs
You need to deploy multiple VMs as Gluster servers. The basic procedure follows the tutorial given in Deploy Message Passing Interface (MPI) (and/or GNU parallel) cluster. However, you do not need to set up MPI and the NFS share directories explained in that tutorial. Use the cluster template KASI-Cluster-Basic-Block in the KASI Cluster Templates to produce a VM cluster for the Gluster filesystem with an extra block device (i.e., /dev/vdb). The recommended flavor is C2M8D40, and you need to estimate the required number of minions from the required Gluster volume size and the chosen extra block size. For example, if you need a 30 TB volume with a block size of 3 TB (i.e., 3072 GB) per VM, you need 9 minions as well as 1 master (i.e., 10 virtual machines in total).
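The sizing rule above can be sketched with a bit of shell arithmetic. The numbers below are the example values from this step, not fixed requirements:

```bash
#!/bin/bash
# Estimate the number of VMs needed for a target distributed volume.
REQUIRED_TB=30    # desired total volume size in TB (example value)
BRICK_GB=3072     # size of the extra block device per VM in GB (3 TB)

REQUIRED_GB=$(( REQUIRED_TB * 1024 ))
# Round up: each VM contributes one brick of BRICK_GB
NBRICKS=$(( (REQUIRED_GB + BRICK_GB - 1) / BRICK_GB ))
NMINIONS=$(( NBRICKS - 1 ))   # the master also serves one brick

echo "bricks=${NBRICKS} minions=${NMINIONS}"
# prints: bricks=10 minions=9
```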
Step 2. Configuring the Gluster server with multiple bricks on the cluster VMs
The configuration of the Gluster VMs can be done with the provided script https://github.com/astromsshin/cloud_ex/blob/main/tool_setup_glusterfs_multiple_servers.sh in the GitHub repository https://github.com/astromsshin/cloud_ex. The script is given below. You should edit CLUSTERNAME and MINIONLASTIND in the script for your own configuration of the Gluster cluster.
```bash
#!/bin/bash
# Run this script with root permission
# on the master node of the GlusterFS cluster

# Block device
TARGETDEV="/dev/vdb"
# Directory serving the Gluster filesystem
TARGETDIR="/glusterfs/vol"
# Gluster filesystem volume available for clients
TARGETVOL="gvol"
# Name of the Gluster cluster
CLUSTERNAME="gluster-test"
MINIONLASTIND="8"
# Saving Gluster server information
HOSTNAMEFN="gluster_server_hostnames.txt"
HOSTIPFN="gluster_server_ips.txt"

# Install the server
echo "... installing required software on ${CLUSTERNAME}-master"
apt install glusterfs-server -y
systemctl enable --now glusterd
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... installing required software on ${CLUSTERNAME}-minion-${ind}"
  ssh ${CLUSTERNAME}-minion-${ind} "apt install glusterfs-server -y; systemctl enable --now glusterd"
done
echo
read -p "[CHECK] Was the software installed successfully on all nodes? [YyNn]" -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]
then
  echo "[STOP] Please re-run the script to make sure that all nodes have the software."
  exit 1
fi

# Probing
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... probing ${CLUSTERNAME}-minion-${ind}"
  gluster peer probe ${CLUSTERNAME}-minion-${ind}
done

# Prepare the block device with the XFS filesystem
echo "... preparing the block dev ${TARGETDEV} on ${CLUSTERNAME}-master"
umount -f ${TARGETDEV}
mkfs.xfs -f -i size=512 ${TARGETDEV}
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... preparing the block dev ${TARGETDEV} on ${CLUSTERNAME}-minion-${ind}"
  ssh ${CLUSTERNAME}-minion-${ind} "umount -f ${TARGETDEV}; mkfs.xfs -f -i size=512 ${TARGETDEV}"
done

# Prepare the directory
echo "... creating directory on ${CLUSTERNAME}-master"
mkdir -p ${TARGETDIR}
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... creating directory on ${CLUSTERNAME}-minion-${ind}"
  ssh ${CLUSTERNAME}-minion-${ind} "mkdir -p ${TARGETDIR}"
done

# Mount the block device
echo "... mounting ${TARGETDEV} on ${CLUSTERNAME}-master"
mount -t xfs ${TARGETDEV} ${TARGETDIR}
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... mounting ${TARGETDEV} on ${CLUSTERNAME}-minion-${ind}"
  ssh ${CLUSTERNAME}-minion-${ind} "mount -t xfs ${TARGETDEV} ${TARGETDIR}"
done

# Information
gluster --version
gluster peer status

# Produce the Gluster filesystem volume with the above directories
VOLSTR="${CLUSTERNAME}-master:${TARGETDIR}"
for ind in $(seq 0 ${MINIONLASTIND})
do
  VOLSTR="${VOLSTR} ${CLUSTERNAME}-minion-${ind}:${TARGETDIR}"
done
gluster volume create ${TARGETVOL} ${VOLSTR} force
# Make the volume available
gluster volume start ${TARGETVOL}
# Information (optional)
gluster volume info
gluster volume status

echo "Use ${HOSTNAMEFN} for clients"
gluster volume info | grep "^Brick[0-9]*:" | grep -v 'Bricks:' | awk -F':' '{print $2}' | xargs | tee ${HOSTNAMEFN}
echo "Use ${HOSTIPFN} for clients"
gluster volume info | grep "^Brick[0-9]*:" | grep -v 'Bricks:' | awk -F':' '{print $2}' | xargs -I {} grep -m 1 "{}" /etc/hosts | cut -d' ' -f1 | xargs | tee ${HOSTIPFN}
```
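The two pipelines at the end of the script map the brick hostnames reported by gluster volume info back to IP addresses via /etc/hosts. A minimal, self-contained sketch of that lookup, using a temporary file with made-up entries in place of the real /etc/hosts:

```bash
#!/bin/bash
# Stand-in for /etc/hosts with example entries (assumed values)
HOSTS=$(mktemp)
cat > "${HOSTS}" <<'EOF'
10.0.100.150 gluster-test-master
10.0.100.151 gluster-test-minion-0
EOF

# For each brick hostname, print the IP of the first matching line,
# then join all IPs on one line (as the script does with xargs)
for name in gluster-test-master gluster-test-minion-0
do
  grep -m 1 "${name}" "${HOSTS}" | cut -d' ' -f1
done | xargs
# prints: 10.0.100.150 10.0.100.151

rm -f "${HOSTS}"
```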
Step 3. Mounting the created Gluster filesystem on client VMs
The configuration of the Gluster filesystem affects how the client VMs mount and access it. The following script, which is available in the GitHub repository https://github.com/astromsshin/cloud_ex as https://raw.githubusercontent.com/astromsshin/cloud_ex/main/tool_setup_glusterfs_client_all_nodes-multiple_servers.sh, should be executed on the master node of your cluster of client VMs. Edit the variables CLUSTERNAME, MINIONLASTIND, and TARGETDIR depending on your cluster configuration and the desired mount-point directory name.
```bash
#!/bin/bash

# Name of the cluster which defines the clients' names
CLUSTERNAME="lice"
# Last integer index of the minions in the cluster
MINIONLASTIND="14"
# Directory name which is a mount point on clients
TARGETDIR="/mnt/gluster-input"
# IPs of the Gluster servers
GLUSTERSERVERIP="10.0.110.167 10.0.110.92 10.0.110.124 10.0.110.68 10.0.110.104 10.0.110.172 10.0.110.44 10.0.110.147 10.0.110.67 10.0.110.185 10.0.110.31 10.0.110.225"
# Names of the Gluster servers
GLUSTERSERVERNAME="gluster-input-master gluster-input-minion-0 gluster-input-minion-1 gluster-input-minion-2 gluster-input-minion-3 gluster-input-minion-4 gluster-input-minion-5 gluster-input-minion-6 gluster-input-minion-7 gluster-input-minion-8 gluster-input-minion-9 gluster-input-minion-10"
IPARRAY=($GLUSTERSERVERIP)
NAMEARRAY=($GLUSTERSERVERNAME)

# Deleting existing entries in /etc/hosts
echo "Deleting existing entries in /etc/hosts"
# ... Master
for (( i=0; i<${#IPARRAY[@]}; i++ ))
do
  sed -i "/ ${NAMEARRAY[$i]}$/d" /etc/hosts
  sed -i "/^${IPARRAY[$i]} /d" /etc/hosts
done
# ... Minions
for ind in $(seq 0 ${MINIONLASTIND})
do
  for (( i=0; i<${#IPARRAY[@]}; i++ ))
  do
    ssh ${CLUSTERNAME}-minion-${ind} "sed -i \"/${NAMEARRAY[$i]}/d\" /etc/hosts; sed -i \"/${IPARRAY[$i]}/d\" /etc/hosts"
  done
done

# Updating /etc/hosts
echo "Adding entries in /etc/hosts"
# ... Master
for (( i=0; i<${#IPARRAY[@]}; i++ ))
do
  echo ${IPARRAY[$i]} ${NAMEARRAY[$i]} >> /etc/hosts
done
# ... Minions
for ind in $(seq 0 ${MINIONLASTIND})
do
  for (( i=0; i<${#IPARRAY[@]}; i++ ))
  do
    ssh ${CLUSTERNAME}-minion-${ind} "echo ${IPARRAY[$i]} ${NAMEARRAY[$i]} >> /etc/hosts"
  done
done

# Rest of tasks
echo "Running the relevant commands"
RUNCMD="apt -y install glusterfs-client; mkdir ${TARGETDIR}; mount -t glusterfs ${NAMEARRAY[0]}:/gvol ${TARGETDIR}; chmod -R 777 ${TARGETDIR}"
# ... Master
echo "... setting up on ${CLUSTERNAME}-master"
echo $RUNCMD | bash
# ... Minions
for ind in $(seq 0 ${MINIONLASTIND})
do
  echo "... setting up on ${CLUSTERNAME}-minion-${ind}"
  ssh ${CLUSTERNAME}-minion-${ind} "${RUNCMD}"
done
```
Since there are multiple Gluster servers, you need to enter their IP addresses and names in the above script by editing the variables GLUSTERSERVERIP and GLUSTERSERVERNAME. You can easily find the IP addresses and names in the cloud dashboard or in the files generated in the step of configuring the Gluster cluster.
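Instead of typing the lists by hand, you can fill the two variables from the files written in Step 2 (gluster_server_hostnames.txt and gluster_server_ips.txt), which store the names and IPs as single space-separated lines. A sketch, using temporary files with example content in place of the real ones:

```bash
#!/bin/bash
# Temporary stand-ins for the Step 2 output files (example content)
NAMEFN=$(mktemp)
IPFN=$(mktemp)
echo "gluster-input-master gluster-input-minion-0" > "${NAMEFN}"
echo "10.0.110.167 10.0.110.92" > "${IPFN}"

# Fill the script variables from the files instead of typing them
GLUSTERSERVERNAME=$(cat "${NAMEFN}")
GLUSTERSERVERIP=$(cat "${IPFN}")
NAMEARRAY=($GLUSTERSERVERNAME)
IPARRAY=($GLUSTERSERVERIP)

echo "${#NAMEARRAY[@]} servers; first: ${NAMEARRAY[0]} ${IPARRAY[0]}"
# prints: 2 servers; first: gluster-input-master 10.0.110.167

rm -f "${NAMEFN}" "${IPFN}"
```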
Useful Tips
- When you need to unmount the Gluster filesystems on the client machines, you can use the https://github.com/astromsshin/cloud_ex/blob/main/tool_umount_fs_all_nodes.sh script.
- There are several Gluster-related commands which help you figure out the current status. See https://docs.gluster.org/en/latest/CLI-Reference/cli-main/ . You have to log in to the Gluster cluster machines in order to execute the Gluster commands. Check https://data.kasi.re.kr/confluence/x/PoDVAg about how to access virtual machines.