GFS2
Prerequisites
The kernel needs to support GFS2:
$ grep GFS /boot/config*
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
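GFS2 is built as a module here (=m); the cman init script used below loads the required kernel modules itself, but the module can also be loaded and verified by hand:
$ modprobe gfs2
$ lsmod | grep gfs2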
Userspace needs to be prepared as well:
$ apt-get install gfs2-tools     # Debian, Ubuntu
$ yum install gfs2-cluster       # Fedora
We also have to set up a real network interface on both nodes and make the node names resolvable:
$ cat /etc/network/interfaces
[...]
auto eth0:1
iface eth0:1 inet static
    address 10.0.0.100
    netmask 255.255.255.0
$ grep ^10 /etc/hosts
10.0.0.100  node-00
10.0.0.101  node-01
$ ifup eth0:1
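To verify that the alias is up and that the names resolve, a quick check from node-00 (assuming node-01 was set up the same way with 10.0.0.101):
$ ping -c 1 node-01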
Installation
$ cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="myGfs2" config_version="1">
  <clusternodes>
    <clusternode name="node-00" nodeid="1"/>
    <clusternode name="node-01" nodeid="2"/>
  </clusternodes>
</cluster>
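The file must be identical on both nodes; a minimal way to distribute it, assuming root ssh access from node-00 to node-01:
$ scp /etc/cluster/cluster.conf root@node-01:/etc/cluster/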
With /etc/cluster/cluster.conf in place on both nodes, we can now start cman (the Cluster Manager). Try to start it on both nodes roughly simultaneously, so that the "Waiting for quorum" step completes quickly:
$ /etc/init.d/cman start
Starting cluster:
   Checking Network Manager...                 [ OK ]
   Global setup...                             [ OK ]
   Loading kernel modules...                   [ OK ]
   Mounting configfs...                        [ OK ]
   Starting cman...                            [ OK ]
   Waiting for quorum...                       [ OK ]
   Starting fenced...                          [ OK ]
   Starting dlm_controld...                    [ OK ]
   Starting gfs_controld...                    [ OK ]
   Unfencing self...                           [ OK ]
   Joining fence domain...                     [ OK ]
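Once cman reports OK on both nodes, the vote counts and whether the cluster is quorate can be checked with cman_tool (the exact output varies with the cman version):
$ cman_tool status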
Four new daemons should now be running:
$ ps -ef
[...]
root      1623     1  0 01:27 ?        00:00:00 corosync -f
root      1673     1  0 01:27 ?        00:00:00 fenced
root      1698     1  0 01:27 ?        00:00:00 dlm_controld
root      1743     1  0 01:27 ?        00:00:00 gfs_controld
$ cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     44   2011-03-07 01:27:55  node-00
   2   M     40   2011-03-07 01:27:55  node-01
Usage
We're now able to use our shared disk. On one of the cluster nodes, we do:
$ mkfs.gfs2 -p lock_dlm -t myGfs2:mydisk -j 2 /dev/sdc
- -p specifies the locking protocol to use, here lock_dlm
- -t specifies the lock table name in the form ClusterName:FSName; ClusterName must match the name in cluster.conf, and the pair must be unique within the cluster
- -j specifies the number of journals; one journal per node is required (more can be added later, see the sketch below)
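If the cluster grows later, the filesystem does not have to be recreated: additional journals can be added with gfs2_jadd. A sketch, assuming the filesystem is mounted at /mnt/gfs on one node:
$ gfs2_jadd -j 1 /mnt/gfs    # add one more journal, e.g. for a third node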
Now, we can mount /dev/sdc on each cluster node:
$ mount -t gfs2 /dev/sdc /mnt/gfs
$ gfs_control -n ls
gfs mountgroups
name          sdc
id            0x94f0cbd2
flags         0x00000008 mounted
change        member 2 joined 1 remove 0 failed 0 seq 7,7
members       1 2
all nodes
nodeid 1 jid 0 member 1 failed 0 start 1 seq_add 1 seq_rem 0 mount done
nodeid 2 jid 1 member 1 failed 0 start 1 seq_add 7 seq_rem 0 mount done
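To mount the filesystem automatically at boot (after cman is up), an illustrative /etc/fstab entry; the noatime and _netdev options are assumptions that fit most setups, not taken from the configuration above:
/dev/sdc   /mnt/gfs   gfs2   defaults,noatime,_netdev   0 0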
Of course, /dev/sdc should be a block device that both nodes can actually share, e.g. a SAN LUN, an iSCSI target or a DRBD device.
Links
- Lustre vs GFS
- GFS2 on RHEL6
- DRBD8 + GFS2 on debian etch (January 14th, 2009)