Nico Schottelius <nico@freiheit.schottelius.org>, 2015-02-13
[[!meta title="How to access gluster running from multiple networks"]]

# TL;DR

Create volumes based on names instead of IP addresses:

    gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster

instead of

    gluster volume create xfs-plain replica 2 transport tcp 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster

Then let the names resolve to different IP addresses depending on where
they are looked up.
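
The split-horizon idea can be sketched as follows. The same name maps to a
different address depending on where it is resolved; the public addresses
(`203.0.113.x`) are hypothetical placeholders, only the `192.168.0.x`
addresses come from this setup. A real deployment uses DNS plus
**/etc/hosts**, simulated here with two hosts-style files:

```shell
# Two views of the same names (contents illustrative):
cat > /tmp/hosts.frontend <<'EOF'
203.0.113.11 vmhost1
203.0.113.12 vmhost2
EOF
cat > /tmp/hosts.vmhost <<'EOF'
192.168.0.1 vmhost1
192.168.0.2 vmhost2
EOF

# Look up a name in a hosts-style file.
lookup() { awk -v n="$2" '$2 == n { print $1 }' "$1"; }

lookup /tmp/hosts.frontend vmhost1   # what the frontend sees (public IP)
lookup /tmp/hosts.vmhost   vmhost1   # what the vm hosts see (gluster network)
```

Because the volume stores only the names, each side connects to whichever
address its own resolver returns.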
## The setup

The basic setup (in our case) looks like this:

    ---------------------------------
    |        Clients / Users        |
    ---------------------------------
                    |
                    |
    ---------------------------------       ---------------------------------
    | frontend (with opennebula)    |-------| vmhost1 with glusterfs        |
    ---------------------------------      / ---------------------------------
                    |                     /  eth0                  eth1
                    |--------------------<                           ||
                                          \  eth0                  eth1
                                           \ ---------------------------------
                                             | vmhost2 with glusterfs        |
                                             ---------------------------------

The frontend running [[!opennebula]] connects to
**vmhost1** and **vmhost2** using their public interfaces.
The gluster bricks running on the vm hosts are supposed to communicate
via eth1, so that the traffic for [[!gluster]] does not influence
the traffic of the virtual machines to the Internet. The gluster filesystem
on the vm hosts is intended to be used only by the virtual machines running
on those two hosts: an isolated cluster. Thus the volume was initially
created like this:

    gluster volume create xfs-plain replica 2 transport tcp 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster

## The problem

However, the frontend requires access to the gluster volume, because
[[!opennebula]] needs to copy and import VM images into the gluster
datastore. Even though the *glusterd* process listens on all IP addresses,
the volume's brick definitions record the addresses 192.168.0.1
and 192.168.0.2, which are not reachable from the frontend.
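
As a small sketch of why this matters: the brick hosts a client must be
able to reach can be read off the `gluster volume info` output. Here the
parsing runs on captured output matching the volume above, rather than a
live cluster:

```shell
# Captured-style `gluster volume info` output for the IP-based volume
# (contents illustrative):
info='Volume Name: xfs-plain
Type: Replicate
Transport-type: tcp
Brick1: 192.168.0.1:/home/gluster
Brick2: 192.168.0.2:/home/gluster'

# Extract the host part of each brick; every client, including the
# frontend, must be able to reach these addresses:
printf '%s\n' "$info" | awk '/^Brick[0-9]/ { split($2, a, ":"); print a[1] }'
```

Both extracted addresses sit on the isolated 192.168.0.0/24 network, so a
client outside that network cannot mount the volume.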

## Using name based volumes

The frontend can reach the vm hosts as **vmhost1** and **vmhost2**,
names which resolve to their **public IP addresses** via DNS.
On the vm hosts we created entries in **/etc/hosts** using [[!cdist]]
that look as follows:

    192.168.0.1 vmhost1
    192.168.0.2 vmhost2

Then we re-created the volume using

    gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
    gluster volume start xfs-plain

And it correctly shows up in the volume info:

    % gluster volume info

    Volume Name: xfs-plain
    Type: Replicate
    Volume ID: fe45c626-c79d-4e67-8f19-77938470f2cf
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: vmhost1-cluster1.place4.ungleich.ch:/home/gluster
    Brick2: vmhost2-cluster1.place4.ungleich.ch:/home/gluster

Now we can mount it successfully on the frontend using

    % mount -t glusterfs vmhost2:/xfs-plain /mnt/gluster

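
To make the mount survive reboots, an **/etc/fstab** entry along these
lines can be used on the frontend (a sketch; `_netdev` delays mounting
until the network is up, and further options vary by setup):

```
vmhost2:/xfs-plain  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```
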
[[!inline pages="follow-up-include" archive="yes" show=0 quick=no]]
[[!tag gluster filesystem unix ungleich]]
