Monday 7 December 2009

The case for open-source virtualisation

I’ve been quite a keen follower of virtualisation since the early days of availability on commodity hardware. Obviously, big rigs like IBM, HP etc have been doing hardware partitioning since time immemorial, but I’m interested in the stuff that lets me run an OS concurrently with my main OS on a desktop.

To my mind, one of the big advantages of Windows (and to a lesser extent Linux) is the homogeneity of the OS across desktops and servers. That is, if I write an application, website, database etc on my Windows PC, compile it and run it, it should deploy to my Windows Server without any changes. This is in stark contrast to the development model for the older big-iron systems, where development happened on the system itself, probably on a dedicated development partition, but certainly not on my Windows (OS/2, DOS etc) workstation without cross-compilation.

Nowadays that’s taken for granted, but it has always been in the back of my mind. With virtualisation, I can now deploy a Windows test environment on a development server. It’s not just for development but for testing too, and on my current path to get my Microsoft certifications up to date, it’s a godsend. I’ve got a fairly sizeable server that does almost all of my home tasks: file shares, email relay and filtering, VNC, a photo-sharing website, music streaming, proxying, VPN, and I’m working on VoIP. The thing is, it’s Linux.

I run Fedora. At one point I banished Microsoft from my home, just to see if it was possible. I wanted to accomplish as much of my home automation and services as possible using open-source software, so I put Fedora on my main server (it’s actually been there for ages), Fedora on my power laptop and Ubuntu on my lighter laptop. And hey-ho, it works!

Well, actually, my mileage varied. Ubuntu is just great: it works well with laptop hardware (especially the Intel graphics), wireless and sound. Fedora also just works, that is, until I tried to virtualise on the laptop.

I had read about Xen some years back, and how it offered paravirtualisation on Linux. I tried to get that working, but once I figured out that it interfered with my Radeon chip (they claimed it wouldn’t), I dropped it. Enter KVM, stage right!

I’ve been a keen follower of AMD’s hardware virtualisation (Pacifica) since its inception. I’ve actually been a very big fan of AMD for some time, but that’s for another post. VMware and other players have been virtualising x86 for some time, but the Intel architecture just doesn’t play well with others, meaning the host OS and hypervisor need to do a lot more work than they ideally should to keep up the illusion for the guest. VMware’s intellectual property in this regard is substantial, and for years they were the cut-and-dried leader in the field.

Qemu is a mature hardware emulator I’ve been using for years that emulates all kinds of CPUs on multiple host OSes, but it runs in user space (i.e. no kernel privileges, and a lot of context switching for privileged operations). As soon as the kqemu kernel module is loaded, things perform rather well indeed, though still noticeably slower than bare metal.

Hardware virtualisation (HVM) in Intel and AMD chips changed that, since a lot of the grunt work, like intercepting privileged instructions, can now be caught by the CPU and handed off to the hypervisor efficiently. In no time at all, the existing Qemu binaries were extended to include KVM, the Kernel-based Virtual Machine. I’ve been a convert ever since.

By building on a well-established virtualisation platform like Qemu, with its excellent emulated-hardware support, KVM can run a huge range of guests!
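
To make that concrete, here’s a minimal sketch, in Python, of how a guest gets launched: plain Qemu emulation when /dev/kvm is absent, hardware virtualisation when it’s there. The binary name, memory size and disk image path are assumptions for illustration, not details lifted from my setup.

    #!/usr/bin/env python
    # Minimal sketch: start a guest with Qemu, enabling KVM only when the
    # kernel exposes /dev/kvm (i.e. the kvm modules are loaded and the CPU
    # supports AMD-V or Intel VT-x). Binary name and paths are assumptions.
    import os
    import subprocess

    DISK_IMAGE = "/var/lib/vms/test-guest.img"   # hypothetical guest disk

    cmd = ["qemu-system-x86_64", "-m", "512", "-hda", DISK_IMAGE]

    if os.path.exists("/dev/kvm"):
        cmd.append("-enable-kvm")    # hardware-assisted virtualisation
    else:
        print("No /dev/kvm - falling back to (much slower) pure emulation")

    subprocess.call(cmd)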

So why am I raving about this? Well, I often get asked why a Microsoft techie runs Linux. As I have previously stated, I tinker, and Linux offers me that chance. I get to play with RAID in granular detail. Layered on top of that is LVM, which virtualises storage. You don’t have to get that complex, but it’s the closest I can come to simulating a SAN in my own home, with resilient, abstracted hardware hidden from the VM.
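
As a rough illustration (the volume group, volume name and size here are assumptions, not my actual layout), carving a guest disk out of a volume group that sits on a RAID array looks something like this:

    #!/usr/bin/env python
    # Sketch: create a logical volume on an existing volume group and use it
    # as a guest's raw disk. The volume group (vg_raid, assumed to sit on an
    # md RAID device) and the size are assumptions for illustration.
    import subprocess

    VG = "vg_raid"         # hypothetical volume group on top of /dev/md0
    LV = "w2k3-test"       # hypothetical logical volume for one guest

    # Allocate a 20 GiB logical volume from the group.
    subprocess.check_call(["lvcreate", "-L", "20G", "-n", LV, VG])

    # The guest sees /dev/vg_raid/w2k3-test as a plain disk; the RAID and
    # LVM layers underneath stay completely hidden from it.
    print("Attach /dev/%s/%s to the VM as its disk" % (VG, LV))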

I make and break bridges and virtual networks on the fly. I’ve got three gigabit Ethernet ports, and segregate them by function – to the point where ALL VM traffic is on a dedicated port, so that if a VM talks to the host it pops out through the physical switch and back in on the front interface.
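
The bridging side is simple enough to script. Here’s a rough sketch of the idea; the interface names (eth2, br_vms, tap0) are purely illustrative, and tunctl comes from the uml-utilities package.

    #!/usr/bin/env python
    # Sketch: build a bridge on the physical port dedicated to VM traffic
    # and attach a tap device for one guest. All interface names are
    # assumptions for illustration.
    import subprocess

    def run(*args):
        # Run a command and raise if it fails.
        subprocess.check_call(list(args))

    run("brctl", "addbr", "br_vms")           # bridge for all guest traffic
    run("brctl", "addif", "br_vms", "eth2")   # physical port reserved for VMs
    run("tunctl", "-t", "tap0")               # persistent tap for one guest
    run("brctl", "addif", "br_vms", "tap0")
    for iface in ("eth2", "tap0", "br_vms"):
        run("ip", "link", "set", iface, "up")

    # The guest then attaches with something like:
    #   qemu ... -net nic -net tap,ifname=tap0,script=no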

Here’s the key thing: while all of this is probably available from other vendors, and some parts may even be free, I’m in control. Sure, the documentation can be sketchy, and it requires a lot more basic knowledge of networking, storage and hardware architectures than other solutions, but I’d hardly take my car to a garage to be serviced if I wanted to be a mechanic; I’d get my hands dirty. I’d definitely break things, but that’s a great way to learn.

My power laptop used to break whenever it entered standby, and I mean trash the root file system (ext3 to boot, supposedly bullet-proof), so I filed a bug. As it turns out, it only happened when I was running HVM, so I applied a patch to my kernel that had been published 30 days earlier. Voila!

Now this is a big thing! Without being a paying customer of Red Hat’s, I got a problem resolved quickly and comfortably. I must stress here that Fedora is permanently beta software and I expect it to break, but frankly for my needs it’s just fine, and when I can get things like this resolved, even better.

Sure, it’s painful, but it’s all under my control. I’ve scripted Windows 2003 deployments with snapshots that boot a new instance, sysprepped and ready to go, in 7 minutes. iptables lets me simulate complex networks, firewalls and even lossy links.
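
My deployment scripts are longer than this, but the core of the idea fits in a few lines. The sketch below assumes a sysprepped base image, qcow2 backing files for the “snapshots”, and a random-drop rule to fake a lossy link; the paths, interface names and the 5% loss rate are illustrative, not my actual configuration.

    #!/usr/bin/env python
    # Sketch of the idea: clone a new guest from a sysprepped golden image
    # using a qcow2 backing file (copy-on-write), fake a lossy link with a
    # random-drop iptables rule, then boot the clone under KVM. Paths,
    # interface names and the 5% loss rate are assumptions.
    import subprocess

    BASE = "/var/lib/vms/w2k3-sysprep-base.img"    # hypothetical golden image
    CLONE = "/var/lib/vms/w2k3-instance1.qcow2"

    # The clone shares the base image; only its differences get written out.
    subprocess.check_call(
        ["qemu-img", "create", "-f", "qcow2", "-b", BASE, CLONE])

    # Drop roughly 5% of packets forwarded onto the VM segment.
    subprocess.check_call(
        ["iptables", "-A", "FORWARD", "-o", "br_vms",
         "-m", "statistic", "--mode", "random", "--probability", "0.05",
         "-j", "DROP"])

    # Boot the clone; first boot runs the sysprep mini-setup and the new
    # instance is ready a few minutes later.
    subprocess.Popen(
        ["qemu-system-x86_64", "-enable-kvm", "-m", "1024", "-hda", CLONE,
         "-net", "nic", "-net", "tap,ifname=tap0,script=no"])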

I remain firmly impressed!
