Monday, 7 December 2009

Virtualisation of basic services

So, virtualisation is everywhere. IT departments are running more and more of their workloads on virtual hardware, and in the data centre that's a big thing. It's an accepted fact that most servers in an organisation run at near idle all day long, and even core systems like directory and file servers barely crack a sweat serving their users. For small and medium-sized entities (yes, they have data centres, just not raised-floor, air-conditioned ones like the big guys) this is a problem.

I was asked a while ago why I enabled file compression on my laptop, and seriously considered it on file servers. A few years later the query was why I'd enabled full-disk encryption on the same piece of kit. Doesn't it affect the responsiveness?

The simple answer is yes. And no.

CPUs, memory and disks today are so, so fast that there's not much they can't do for an individual user. When asked what laptop a person should buy, I recommend the pink one if it's a girl asking, and jet-black for the guys. Seriously, laptops today can wipe the floor with a desktop from five years ago, and last I checked we're still doing the same things as back then: browsing the web, writing emails, laughing at cats. If you're a developer, gamer, or serious show-off, you probably have a dedicated desktop for the hardcore tasks, and the point remains: Moore's law has seen to that.

Of course, anything you put in the path of data from the disk to CPU/memory is going to slow things down, but this comes back to the more important point: Do I notice?

When I'm running a compile job, chances are the disk's head movements trying to find each source file and library are going to be the first bottleneck. Then, if it's a biggie like mplayer, the CPU is going to be loaded for a while doing the compile. Encryption? Oh, I didn't even notice it was on.

This brings me to my point: with uber-powerful, multi-core servers as cheap as chips (e.g. an HP ML110 G5, dual-core 2.8GHz Core 2, 1GB RAM, 250GB SATA for 350 ex tax, website 2009-12-07), do I really need to treat these things like fragile porcelain ware?

So, given a department of 100 users in a remote office, where I need to provide local AD, file services, updates, VPN to head office and backups, what's to stop me buying a cheap server, loading it up with RAM and disks, and virtualising all functions in one (or a few) neat little grey cubes with a reputable brand name on the front?

Now here, I'm thinking of Linux as the hypervisor, but that's my flavour of choice. Something as simple as AD is a good case, since it's by definition a replicated, resilient database, so a failure has little impact (authenticate somewhere else), and it doesn't do much during the day. It's also advised to segregate roles on Windows servers, and for resilience perhaps have a second box if you're paranoid. That's one, maybe two little cubes that I can't share for other functions. And it's going to consume a few gigabytes of disk space, and even less RAM. A whole cube, just for you? Share!

Linux virtualisation has come a long way, and I consider it to be solid, fast and, these days, wonderfully easy to administer remotely. KVM, which I use, is just another OS process (no funky drivers or installations), and since it runs on a full (and standard) instance of Linux, all the management tools for networking, storage, logging and troubleshooting are available.
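To make that concrete, here's a rough sketch of what "just another process" means; the image path, tap interface and sizes are illustrative, not from any real setup:

```shell
# Boot a guest with qemu-kvm; it runs as an ordinary host process.
# (Paths, interface names and sizes here are hypothetical examples.)
qemu-kvm -m 1024 -smp 2 \
    -drive file=/vm/ad01.img,if=virtio \
    -net nic,model=virtio -net tap,ifname=tap0 \
    -vnc :1 &

# Being just a process, all the standard Linux tools apply:
ps aux | grep '[q]emu-kvm'   # the guest shows up like any other process
renice 5 -p $!               # deprioritise it with plain old renice
```

No management agents, no special kernel beyond the kvm modules: if you can administer a Linux box, you can administer the guests running on it.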

One simple example: since all traffic traverses the host’s virtual network switch, all traffic can be inspected by a packet sniffer as powerful as Wireshark. No drivers in Windows, no strange binaries. If everything’s virtualised on one box, ALL the traffic is visible for troubleshooting in one spot. Neat!
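As a sketch of that idea (assuming the guests are attached to a host bridge called br0, which is a typical but hypothetical name):

```shell
# All inter-VM and outbound traffic crosses the host bridge, so one
# capture point sees everything. Capture to a file for Wireshark:
tcpdump -i br0 -w /tmp/site.pcap

# Or filter live, e.g. to watch file-server (SMB) traffic:
tcpdump -i br0 port 445
```

Open the resulting pcap in Wireshark on your workstation and you have full-fidelity captures of every guest, with nothing installed in any of them.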

Another: consider three servers, each requiring 100GB of storage. That's probably six disks (minimum) if you're doing things right (RAID-1). On one hypervisor host, that's four 100GB disks (RAID-5), or only two if you go the 500GB route (RAID-1, plus a spare, I suppose). Now, I'm also a fan of software RAID. My case above stands: there are other factors more likely to impact performance, not least the client's ability to pull data fast enough from a file server.
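The arithmetic behind that disk count, as a quick sanity check (sizes in GB; the usual rules that RAID-1 halves raw capacity and RAID-5 loses one disk's worth):

```python
# Usable capacity under simple RAID rules (sizes in GB).
def raid1_usable(disks, size):
    return (disks // 2) * size      # mirrored pairs: half the raw space

def raid5_usable(disks, size):
    return (disks - 1) * size       # parity costs one disk's capacity

# Three separate servers, each 2 x 100GB in RAID-1: six spindles total.
separate = 3 * raid1_usable(2, 100)

# One hypervisor host, 4 x 100GB in RAID-5: four spindles total.
consolidated = raid5_usable(4, 100)

print(separate, consolidated)       # both come to 300GB usable
```

Same 300GB of usable space, two fewer disks to buy, power and replace.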

So now I've got one box with my local services running in VMs. Full-disk encryption? Well, Windows 2008 has BitLocker, with all its TPM and USB-key requirements. How about logging in to the host remotely via SSH, one of the most bulletproof protocols for remote management, and enabling the encryption remotely? With a script. Not one byte in plaintext, not even the partition table. Break that, evil hardware thief!
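The shape of that script, as a sketch only: the device name and mount point are hypothetical, and luksFormat is destructive, so this is the idea rather than a recipe. Run over SSH to the host:

```shell
# Create a LUKS container on the (hypothetical) software-RAID volume
# that will hold the VM images. WARNING: luksFormat destroys its target.
cryptsetup luksFormat /dev/md1

# Open it (prompts for the passphrase over the SSH session)...
cryptsetup luksOpen /dev/md1 vmstore

# ...then put a filesystem on the mapped device and mount it.
mkfs.ext3 /dev/mapper/vmstore
mount /dev/mapper/vmstore /vm   # VM images now sit on ciphertext
```

The guests neither know nor care: the Windows VMs see ordinary virtual disks, while everything written to the physical platters is encrypted, with no TPM or USB key in sight.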

Snapshotting? check
SNMP/performance monitoring and alerting? check
Hardware support? check
Virtualising legacy systems? check (especially MS-DOS on pesky 64-bit Windows)

Worth a think
