correct the usual typos

Signed-off-by: Nico Schottelius <nico@bento.schottelius.org>
Nico Schottelius 2013-07-23 11:44:11 +02:00
parent ba28e35310
commit 362cd32d43
1 changed file with 19 additions and 18 deletions


@@ -12,10 +12,10 @@ article before continuing to read this one.
## KVM Host configuration
The KVM hosts are Dell R815 with CentOS 6.x installed. Why Dell? Because they
offered us a good price/value combination for the boxes. Why CentOS? Historical
offered a good price/value combination. Why CentOS? Historical
reasons. The hosts received a minimal set of BIOS tweaks to improve VM performance:
* Enable the usual virtualisation flags (don't forget the IOMMU!)
* Enable the usual virtualisation flags (don't forget to enable the IOMMU!)
* Change the power profile to **Maximum Performance**
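Whether those settings actually took effect can be verified from the running host; a quick sketch (AMD hosts expose the svm CPU flag, Intel hosts vmx, and the exact IOMMU boot message varies by kernel version):

    # CPU virtualisation extensions visible to the kernel?
    grep -cE 'svm|vmx' /proc/cpuinfo

    # did the (AMD) IOMMU initialise at boot?
    dmesg | grep -i iommu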
Furthermore, as the CentOS kernel is pretty old (2.6.32-279) and
@@ -24,10 +24,11 @@ command line option to enable the IOMMU:
amd_iommu=on
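On CentOS 6 this means appending the option to the kernel line of the legacy GRUB configuration; a minimal sketch, with the root device as a placeholder:

    # /boot/grub/grub.conf -- note amd_iommu=on at the end of the kernel line
    title CentOS (2.6.32-279.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg-root amd_iommu=on
        initrd /initramfs-2.6.32-279.el6.x86_64.img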
Not enabling this option degrades the performance by at least 100%. In our case,
enabling it dropped the latency of the application by a factor of 10.
Not enabling this option degrades the performance.
In our case, enabling it reduced the latency of the
application running in the VM by a factor of 10.
One big motivation of the the KVM setup at local.ch is to make the
One big design consideration of the KVM setup at local.ch is to make the
KVM hosts as independent as possible and sensibly fault tolerant. To that end,
VMs are stored on local storage and hosts are always redundantly connected
to two switches using [LACP](https://en.wikipedia.org/wiki/Link_aggregation).
@@ -118,13 +119,13 @@ The following configuration is used to create the bond0 device:
SLAVE=yes
BOOTPROTO=none
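For the whole picture: a CentOS 6 LACP bond consists of a master file plus one file per slave interface. The sketch below is illustrative only; the miimon value and interface names are assumptions, not the exact local.ch configuration:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="mode=802.3ad miimon=100"
    BOOTPROTO=none
    ONBOOT=yes
    MTU=9000

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (same pattern for the second slave)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes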
The MTU of the 10G cards has been set to 9000, as the Aristas support
The MTU of the 10G cards has been set to 9000, as the Arista switches support
[Jumbo Frames](https://en.wikipedia.org/wiki/Jumbo_frame).
Every VM is attached to two different networks:
* PZ: presentation (for general traffic) (10.18x.0.0/22 network)
* FZ: filerzone (for NFS and database traffic) (10.18x.64.0/22 network)
* PZ: presentation zone (for general traffic) (10.18x.0.0/22 network)
* FZ: filer zone (for NFS and database traffic) (10.18x.64.0/22 network)
Both networks are separated using the VLAN tags 2 (pz) and 3 (fz), which result
in **bond0.2** and **bond0.3**:
@@ -137,7 +138,7 @@ in **bond0.2** and **bond0.3**:
140: bond0.3@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP
To keep things simple, the two VLAN-tagged (bonded) interfaces are each added to a bridge,
to which the VMs are attached later on. The full configuration looks like this:
to which the VMs are attached later on. The configuration looks like this:
[root@kvm-hw-inx01 network-scripts]# cat ifcfg-bond0.2
DEVICE="bond0.2"
@@ -203,8 +204,7 @@ files:
* vnc: socket to the screen of the VM
With the exception of monitor, pid and vnc, all files are generated by cdist.
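Since the vnc file is a UNIX socket, a regular VNC client can reach it by forwarding it to a local TCP port; a sketch using socat, with the per-VM path inferred from the directory layout:

    # expose the VM's VNC socket on localhost:5901 (VNC display :1)
    socat TCP-LISTEN:5901,bind=127.0.0.1,reuseaddr,fork \
        UNIX-CONNECT:/opt/local.ch/sys/kvm/vm/jira-vm-inx01.intra.local.ch/vnc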
One of the major concerns of this KVM setup is that all hosts have as little
as possible dependencies. That said, the start script of a VM looks like this:
The start script of a VM looks like this:
[root@kvm-hw-inx03 jira-vm-inx01.intra.local.ch]# cat start
#!/bin/sh
@@ -227,8 +227,10 @@ as possible dependencies. That said, the start script of a VM looks like this:
-net tap,script=/opt/local.ch/sys/kvm/bin/ifup-fz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=300 \
-smp 4
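The ifup-fz/ifdown scripts passed to -net tap are not part of this diff; by convention qemu invokes them with the tap device name as the first argument, and all such a script has to do is enslave the tap device to the right bridge. A minimal sketch, with the FZ bridge name assumed:

    #!/bin/sh
    # ifup-fz sketch: qemu passes the tap device name as $1
    ip link set "$1" mtu 9000
    ip link set "$1" up
    brctl addif br-fz "$1"    # bridge name is an assumption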
Most parameter values depend on output of sexy, which uses the cdist type, which in turn
assembles this start script. The above script may be useful for one or more of my readers,
Most parameter values depend on the output of sexy,
which uses the cdist type **__localch_kvm_vm**,
which in turn assembles this start script.
The above script may be useful for one or more of my readers,
as it includes a lot of tuning we have done to KVM.
@@ -272,11 +274,10 @@ is pretty simple:
;;
As you can see, every VM is started in its own
[screen](http://www.gnu.org/software/screen/). We decided to go for this approach,
as screen is sometimes buggy and hangs itself up. This way, we only lose on machine
on every screen death, not all of them at the same time. Furthermore, screen is usually
limited to a maximum number of windows it can server.
When everything went successful, the process output for a virtual machine looks like this:
[screen](http://www.gnu.org/software/screen/), so if screen decides to
hang up, only one VM is affected.
Furthermore, screen supports only a limited number of windows it can serve.
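The essential start logic can be read off the process listing below; as a sketch, each case branch boils down to something like this (variable names assumed):

    vm="jira-vm-inx01.intra.local.ch"      # FQDN of the VM
    vmdir="/opt/local.ch/sys/kvm/vm/$vm"
    # one detached screen session per VM, named after the VM
    screen -d -m -S "$vm" "$vmdir/start"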
The process listing for a running virtual machine looks like this:
root 64611 0.0 0.0 118840 852 ? Ss Mar11 0:00 SCREEN -d -m -S binarypool-vm-inx02.intra.local.ch /opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/start
root 64613 0.0 0.0 106092 1180 pts/22 Ss+ Mar11 0:00 /bin/sh /opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/start
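Such a session can be entered with screen's standard reattach command and left again without stopping the VM:

    # attach to the VM's console; detach again with C-a d
    screen -r binarypool-vm-inx02.intra.local.ch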