KVM and GlusterFS 3.3.1 performance on CentOS 6

January 30, 2013 at 6:32 pm

Currently I am testing the usability and performance of GlusterFS as a virtual machine image store for the KVM hypervisor on CentOS 6.

Setup:

Hypervisor: CentOS 6 x64, AMD Phenom II 1090T, 16 GB DDR3 RAM at 1033 MHz. HDD: 7200 RPM SATA II.
Storage Node: Fedora 18 x64, HP ProLiant DL380 G4, 4 GB DDR RAM, RAID 1 SAS 10k.
VM-remote: Debian 6 x64, 2 GB RAM, 2 virtual CPUs, raw 8 GB disk on the GlusterFS volume.
VM-local: N/A (coming soon)
Network: direct-patched 1 Gbps Ethernet.

I plan to test iSCSI across the same network connection tomorrow.
Test method: run dd if=/dev/zero of=test.zeros bs=1024k count=500. After it completes, I run it again to overwrite the existing file.
I am using the glusterfs-fuse client to mount the GlusterFS volume on the hypervisor. SELinux has been temporarily disabled.
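For reference, the procedure looks roughly like this (the host name, volume name, and mount point are placeholders, not necessarily the ones used here; note that without conv=fdatasync, dd can report page-cache speed on the first local write, which likely explains the 1.2 GB/s figure below):

    # mount the GlusterFS volume on the hypervisor via the FUSE client
    mount -t glusterfs storage-node:/gv0 /mnt/gluster

    # sequential write test: 500 x 1 MiB blocks (~512 MB file)
    dd if=/dev/zero of=/mnt/gluster/test.zeros bs=1024k count=500

    # run it again to overwrite the existing file
    dd if=/dev/zero of=/mnt/gluster/test.zeros bs=1024k count=500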

Hypervisor:
Local Storage: Write one: 1.2 GB/s. Write two: 111 MB/s.
To GlusterFS volume: Write one: 110 MB/s. Write two: 110 MB/s.

Storage Node:
Local Storage ext4 root volume (no RAID): Write one: 391 MB/s. Write two: 76.3 MB/s.
Local Storage XFS /export/brick1 (RAID 1, full disk formatted as XFS): Write one: 562 MB/s. Write two: 536 MB/s.
Locally mounted gluster volume: Write one: 172 MB/s. Write two: 176 MB/s.

VM-remote:
Local Storage: Write one: 19.1 MB/s. Write two: 31.1 MB/s. Write three: 25.2 MB/s

So far, the VM is writing at about a quarter of the hypervisor's speed, which is not good performance at all.

 

[Chart: KVM gluster write performance (Preliminary)]

Update 2/7/2013:

I have yet to test iSCSI performance in this setup, but I have no reason to believe it would perform any differently here than iSCSI does elsewhere (around 110 MB/s write).

I have decided to change the hypervisor to Fedora 18 as well.  This allows me to use the latest stable release of KVM/qemu, and the results have been promising.  Write speeds from the VM to a qcow2 disk stored on the GlusterFS volume now reach up to 60 MB/s, though the average seems to be about 40 MB/s.  GlusterFS has been integrated as a datastore type in this release of KVM, so it plays much better with SELinux and allows the administrator to add the mount via virt-manager.  On the CentOS 6 setup, the gluster native client would not allow KVM/qemu to write to the gluster mount with SELinux enabled.
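For reference, a rough command-line sketch of what the virt-manager storage dialog sets up (the pool name, host, volume, and target path below are made up for illustration, and I have not verified this exact invocation against every libvirt version):

    virsh pool-define-as gluster-pool netfs \
        --source-host storage-node --source-path /gv0 \
        --source-format glusterfs \
        --target /var/lib/libvirt/images/gluster-pool
    virsh pool-build gluster-pool
    virsh pool-start gluster-pool
    virsh pool-autostart gluster-pool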
While my unscientific testing of the latest packaged release of KVM does not look like much of an improvement numbers-wise, actually using the VM tells a very different story.  Boot time has been cut to at least a quarter of what it was, and the VM is much more responsive.  I think that mounting a GlusterFS or NFS volume inside the VM for your application data, instead of writing to the local virtual disk, will make for a production-ready KVM + GlusterFS setup.
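For example, such a mount inside the VM could be made persistent with an /etc/fstab entry along these lines (server name, volume, and mount point are placeholders; _netdev delays mounting until the network is up):

    # GlusterFS native (FUSE) mount
    storage-node:/gv0  /srv/app-data  glusterfs  defaults,_netdev  0  0

    # or the same export over NFS
    storage-node:/gv0  /srv/app-data  nfs  defaults,_netdev  0  0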

KVM glusterfs setup



  • Kosta

    Have you done any network monitoring while testing your VM-remote? If you're hitting 1 Gbps at the network level but only seeing 19.1 MB/s at the VM level, maybe there is something else taking up the remaining bandwidth?

    I am about to do the same test, but with 56Gbps switches/NICs which should show if there are any performance related problems with GlusterFS and KVM / virtualization. If you would like I can provide some of the numbers once I test in the next week or so.

    • http://www.zipref.com Mike

      Kosta, thanks for the reply. I have not done any network monitoring. I’m seeing 110 MB/s from the hypervisor to the GlusterFS mount, which is near the practical maximum of a 1 Gbps link (roughly 125 MB/s raw, less protocol overhead) and in line with what NFS gets over the same link.

      I am very interested in your findings and setup! Also, you may find this information interesting from gluster: http://www.gluster.org/2012/11/integration-with-kvmqemu/
      It seems that this VM image bottleneck has been identified, and the new releases of qemu and Gluster 3.4 aim to address it.

  • Kosta

    Thanks for that link. I hope to have some basic numbers by next week. Are you doing this right now for a production environment, testing or just for a home setup?

    • http://www.zipref.com Mike

      Right now I’m just testing at home using the equipment I have lying around. I work at a unique hosting company, and we are constantly deploying new technology, so this is my way of staying ahead of our clients.

  • Jacob

    Hi Mike,

    Thanks for the great post! Have you had any luck in improving your speeds? I have a similar setup with OpenStack (my storage and HV are on the same node) and am seeing:

    HV – local disk write: 1.9GB/s, 182MB/s
    HV – gluster mount write: 184MB/s, 250MB/s
    VM – local disk write: 13MB/s, 12.8MB/s

    I’m definitely not network constrained, running this on a 10GbE switch w/ two bonded ports on each node.

    • http://www.zipref.com Mike

      Jacob,

      I haven’t run any tests recently as I’ve been busy with other projects. However, towards the bottom of this article you’ll note an update from 2/7. I was able to average about 40 MB/s using Fedora 18 as the hypervisor. I recommend testing Fedora 18 (and 19, which is in beta, so consider that one as well) as both the HV and VM. Also, I believe GlusterFS has a native driver for OpenStack.

  • steven

    I have just been doing something very similar: an Ubuntu 14.04 two-node Gluster cluster backend with a single-node RHEL 7 server front end, and two Ubuntu 14.04 VMs, one on the RHEL local disks and one on the Gluster cluster mounted with glusterfs. Currently I am seeing Gluster offer about 1/3 the disk I/O of the local disk, which I think is terrible. Testing is done using the stress application.

    NB: to get auto-failover I suspect I need 3 nodes; i.e., switching node 1 off freezes the VM because I have no quorum to continue writes.
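    NB 2: quorum behaviour like this is normally controlled with the Gluster volume quorum options; a minimal sketch, assuming a replicated volume named gv0 and a third peer added purely for quorum:

        # require a majority of servers to be up before bricks accept writes
        gluster volume set gv0 cluster.server-quorum-type server
        # client-side quorum: allow writes only when a majority of replicas are reachable
        gluster volume set gv0 cluster.quorum-type auto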
