Currently I am testing the usability and performance of GlusterFS as a virtual machine image store for the KVM hypervisor on CentOS 6.
Hypervisor: CentOS 6 x64, AMD Phenom II 1090T, 16GB DDR3 1033MHz RAM. HDD: 7200RPM SATA II.
Storage node: Fedora 18 x64, HP ProLiant DL380 G4, 4GB DDR RAM, RAID 1 SAS 10k.
VM-remote: Debian 6 x64, 2GB RAM, 2 virtual CPUs, 8GB raw disk on the GlusterFS volume.
VM-local: N/A (coming soon)
Network: direct-patched 1Gbps Ethernet.
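Before attributing any numbers to the storage stack, it is worth confirming the raw throughput of the link itself. iperf is the usual tool for this; it is my addition here, not something mentioned in the post, and "storage-node" is a placeholder hostname:

```shell
# On the storage node (assumed hostname, substitute your own):
#   iperf -s
# On the hypervisor, measure TCP throughput to it for 10 seconds:
iperf -c storage-node -t 10
# A healthy 1Gbps link should report somewhere around 940 Mbits/sec.
```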
I plan to test iSCSI across the same network connection tomorrow.
Test method: I run the command dd if=/dev/zero of=test.zeros bs=1024k count=500, and after it completes, run it again to overwrite the existing file.
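For anyone reproducing the test, here is the same command as a script. The conv=fdatasync flag is my addition, not part of the original command: it forces dd to flush to disk before reporting, so the figure reflects actual disk throughput rather than page-cache speed.

```shell
# Write test used throughout this post: 500 x 1 MiB of zeros.
# conv=fdatasync (my addition) flushes to disk before dd reports a rate.
dd if=/dev/zero of=test.zeros bs=1024k count=500 conv=fdatasync
# Second pass overwrites the existing file, as in the original method.
dd if=/dev/zero of=test.zeros bs=1024k count=500 conv=fdatasync
```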
I am using the glusterfs-fuse client to mount the GlusterFS volume on the hypervisor. SELinux has been temporarily disabled.
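For reference, the FUSE mount on the hypervisor looks something like the following. The hostname, volume name, and mount point are placeholders of my own, not details from the post:

```shell
# Mount a GlusterFS volume via the native FUSE client.
# "storage-node" and "vmstore" are assumed names; substitute your own.
mount -t glusterfs storage-node:/vmstore /mnt/vmstore

# Equivalent /etc/fstab entry (_netdev delays the mount until networking is up):
# storage-node:/vmstore  /mnt/vmstore  glusterfs  defaults,_netdev  0 0
```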
Results on the hypervisor:
Local storage: write one: 1.2 GB/s (almost certainly page cache); write two: 111 MB/s.
GlusterFS volume over the network: write one: 110 MB/s; write two: 110 MB/s.

Results on the storage node:
ext4 root volume (no RAID): write one: 391 MB/s; write two: 76.3 MB/s.
xfs /export/brick1 (RAID 1, full disk formatted as xfs): write one: 562 MB/s; write two: 536 MB/s.
Locally mounted gluster volume: write one: 172 MB/s; write two: 176 MB/s.

Results in VM-remote:
Local virtual disk: write one: 19.1 MB/s; write two: 31.1 MB/s; write three: 25.2 MB/s.
So far, the VM is writing at roughly a quarter of the hypervisor's speed. This is not good performance at all.
I have yet to test iSCSI performance in this setup, but I have no reason to believe it would perform any differently than iSCSI does elsewhere (around 110 MB/s write).
I have decided to change the hypervisor to Fedora 18 as well. This allows me to use the latest stable release of KVM/qemu, and the results have been promising. Write speeds from the VM to a qcow2 disk stored on the GlusterFS volume now reach up to 60 MB/s, though the average seems to be about 40 MB/s. GlusterFS has been integrated as a datastore type in this release of KVM, so it plays much better with SELinux and allows the administrator to add the mount via virt-manager. On the CentOS 6 setup, the gluster native client would not allow KVM/qemu to write to the gluster mount with SELinux enabled.
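Creating the qcow2 disk on the mounted volume is a one-liner with qemu-img. The mount point and image name below are illustrative, not from the post; the 8G size matches the VM-remote disk described above:

```shell
# Create a qcow2 image on the FUSE-mounted GlusterFS volume.
# /mnt/vmstore and vm-remote.qcow2 are assumed names; substitute your own.
qemu-img create -f qcow2 /mnt/vmstore/vm-remote.qcow2 8G
```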
While my unscientific testing of the latest packaged release of KVM does not, numbers-wise, appear to be much of an improvement, actually using the VM tells a very different story. Boot time has been cut to roughly a quarter of what it was, and the VM is much more responsive. I think mounting a GlusterFS or NFS volume inside the VM for your application data, instead of writing to the local virtual disk, will make KVM + GlusterFS a production-ready setup.