[[!meta title="How to access gluster from multiple networks"]]
|
2015-02-13 10:34:00 +00:00
|
|
|
|
|
|
|
# TL;DR

Create volumes based on names instead of IP addresses:

    gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
instead of

    gluster volume create xfs-plain replica 2 transport tcp 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster

And have the names resolve to different IP addresses depending on
where they are looked up.
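
The whole trick is that the same name can resolve differently depending
on where it is looked up: the storage hosts resolve it to the private
storage network, everyone else gets the public address from DNS.
A minimal sketch (the public addresses below are made up for
illustration):

    # On vmhost1 and vmhost2, /etc/hosts wins over DNS:
    192.168.0.1 vmhost1
    192.168.0.2 vmhost2

    # On the frontend, plain DNS answers with the public addresses,
    # e.g. (hypothetical zone data):
    # vmhost1  IN A  203.0.113.1
    # vmhost2  IN A  203.0.113.2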
## The setup
The basic setup (in our case) looks like this:

    ---------------------------------
    |        Clients / Users        |
    ---------------------------------
                   |
                   |
    ---------------------------------             ---------------------------------
    | frontend (with opennebula)    |          ---| vmhost1 with glusterfs        |
    ---------------------------------         /   ---------------------------------
       |                                     /    eth0                       eth1
       |------------------------------------<                                 ||
                                             \    eth0                       eth1
                                              \   ---------------------------------
                                               ---| vmhost2 with glusterfs        |
                                                   ---------------------------------


The frontend running [[!opennebula]] connects to
**vmhost1** and **vmhost2** using their public interfaces.

The gluster bricks running on the vm hosts are supposed to communicate
via eth1, so that the [[!gluster]] traffic does not interfere with
the traffic of the virtual machines to the Internet. The gluster filesystem
on the vm hosts is only meant to be used by the virtual machines running
on those two hosts - an isolated cluster. Thus the volume was initially
created like this:

    gluster volume create xfs-plain replica 2 transport tcp 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster
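
The article does not show the interface configuration itself; as a rough
sketch, assuming a Debian-style /etc/network/interfaces, the storage
interface on vmhost1 could look like this:

    # /etc/network/interfaces on vmhost1 (hypothetical sketch)
    # eth0 carries the public / VM traffic, eth1 only the gluster traffic
    auto eth1
    iface eth1 inet static
        address 192.168.0.1
        netmask 255.255.255.0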
## The problem

However, the frontend requires access to the gluster volume, because
[[!opennebula]] needs to copy and import VM images into the gluster
datastore. Even though the *glusterd* process listens on all IP addresses,
the volume contains the information that it runs on 192.168.0.1
and 192.168.0.2 and is thus not reachable from the frontend.
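
To illustrate why: a gluster client first fetches the volume description
from any reachable glusterd and then connects to the brick addresses
stored in it. So a mount attempt from the frontend fails even though
glusterd itself is reachable (the public address below is made up):

    # On the frontend: glusterd on the public address answers ...
    % mount -t glusterfs 203.0.113.1:/xfs-plain /mnt/gluster
    # ... but the client is then directed to the bricks on
    # 192.168.0.1 and 192.168.0.2, which it cannot reach,
    # and the mount fails.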
## Using name based volumes

The frontend can reach the vm hosts via **vmhost1** and **vmhost2**,
which resolve to their **public IP addresses** via DNS.
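
This can be checked with getent, which consults /etc/hosts before DNS in
the usual nsswitch setup (the public address shown is made up):

    # On the frontend, the answer comes from DNS:
    % getent hosts vmhost1
    203.0.113.1     vmhost1

    # On the vm hosts, the answer comes from /etc/hosts:
    % getent hosts vmhost1
    192.168.0.1     vmhost1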

On the vm hosts we created entries in **/etc/hosts** using [[!cdist]]
that look as follows:

    192.168.0.1 vmhost1
    192.168.0.2 vmhost2
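
The manifest itself is not part of the article; a minimal sketch using
cdist's stock `__line` type might look like this:

    # cdist manifest snippet (hypothetical sketch)
    __line hosts-vmhost1 --file /etc/hosts --line '192.168.0.1 vmhost1'
    __line hosts-vmhost2 --file /etc/hosts --line '192.168.0.2 vmhost2'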

Now we re-created the volume using

    gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
    gluster volume start xfs-plain
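
A side note on "re-created": gluster refuses to reuse a brick directory
that already belonged to a volume, so before the create above the old,
IP-based volume has to go and the brick metadata has to be cleared on
both hosts. Roughly like this (a destructive sketch, double-check the
paths before running it):

    # remove the old IP-based volume
    gluster volume stop xfs-plain
    gluster volume delete xfs-plain
    # then, on each vm host, make the brick directory reusable
    setfattr -x trusted.glusterfs.volume-id /home/gluster
    setfattr -x trusted.gfid /home/gluster
    rm -rf /home/gluster/.glusterfs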

And it correctly shows up in the volume info:

    % gluster volume info
    Volume Name: xfs-plain
    Type: Replicate
    Volume ID: fe45c626-c79d-4e67-8f19-77938470f2cf
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: vmhost1-cluster1.place4.ungleich.ch:/home/gluster
    Brick2: vmhost2-cluster1.place4.ungleich.ch:/home/gluster

And now we can mount it successfully on the frontend using

    % mount -t glusterfs vmhost2:/xfs-plain /mnt/gluster
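
To make the mount survive reboots, an fstab entry along these lines
should work; backupvolfile-server lets the client fetch the volume
description from the other host if vmhost2 is down (a sketch, the exact
option name can vary between gluster versions):

    # /etc/fstab on the frontend (sketch)
    vmhost2:/xfs-plain  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=vmhost1  0 0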
[[!inline pages="follow-up-include" archive="no" show=0 raw=yes]]
[[!tag gluster filesystem unix ungleich]]