Thursday 31 December 2009

Getting IP Right in Windows: 2. Subnets and Private IP space

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should really be getting right! There are certainly more, but these stick out in my mind.

2. Subnets and Private IP space

The IP address space is global and centrally controlled: it is handed out in big chunks to the national registries and bigger ISPs, which break it into progressively smaller blocks to hand out to their customers.

Carving the IP address space into networks, subnets and supernets, and making sensible use of broadcast domains and multicast, can be readily understood with a little thought; it is all very logical, and a bit of planning up front can save you a lot of headache down the line. A great graphical explanation can be found here, and a famous graphical map of the world’s IP space is here.
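To make the carving concrete, here is a minimal sketch using Python’s standard ipaddress module; the 10.20.0.0/16 block and the prefix lengths are purely illustrative:

    import ipaddress

    # An illustrative corporate allocation; substitute your own block.
    site = ipaddress.ip_network('10.20.0.0/16')

    # Carve it into /24 subnets, e.g. one per department or floor.
    subnets = list(site.subnets(new_prefix=24))
    print(len(subnets))                        # 256 subnets, 254 usable hosts each
    print(subnets[0])                          # 10.20.0.0/24
    print(subnets[0].broadcast_address)        # 10.20.0.255

    # Supernetting goes the other way: two adjacent /24s roll up into a /23.
    print(subnets[0].supernet(new_prefix=23))  # 10.20.0.0/23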

If you’re not responsible for the network design at your organisation, have a chat with the guy who is to understand the principles, how it is expected to evolve over time and what you can expect as it does.

IPX was all the rage in Novell’s heyday, and it is still a neat protocol, able to route across a vast network. Without a central registry for network numbers, though, conflicts were easy to come by, and it was certainly inappropriate for Internet-style deployments, hence the rise of IP in the corporate space.

I worked at a 2,500+ user company that had deployed a randomly chosen Class A IP network internally; when I looked into it, I discovered those numbers had officially been assigned to the People’s Republic of China. Thankfully, none of the business interests lay there, but had there ever been a need, getting those overlapping networks to communicate would have been a tedious task.

A special class of IP addresses are the private ranges reserved by RFC 1918. Internet routers should simply discard traffic to or from these addresses, so you can be certain you won’t conflict with anyone on the Internet, and you can deploy them as you like!

192.168.0.0/16 is probably the most well-known, and the two most common (192.168.0.0/24 and 192.168.1.0/24) are used almost universally in the default configuration of home routers. Since home networks rarely get integrated with others, this tends to work just fine.

Unfortunately, this also means that a lot of inexperienced network engineers use these as a default. In fact, Windows XP Internet Connection Sharing (ICS) requires that the internal interface receive the 192.168.0.1/24 address; no other will do. These ranges are so common that it takes very little effort to remember them.

But that’s the problem. This can lead to big headaches when two sites with these common subnets do want to communicate, from simple VPN access to your company network to handling a merger and linking another network up to yours. There are two other ranges to choose from (172.16.0.0/12 and 10.0.0.0/8), offering tens of thousands of the good old /24 subnets between them, so get creative.
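When you do plan, a quick sanity check is easy with the same Python ipaddress module; the site networks below are made up for illustration:

    import ipaddress

    site_a = ipaddress.ip_network('192.168.1.0/24')  # the usual home-router default
    site_b = ipaddress.ip_network('10.37.12.0/24')   # something less common

    print(site_a.is_private, site_b.is_private)      # True True: both are private ranges
    print(site_a.overlaps(ipaddress.ip_network('192.168.0.0/16')))  # True
    print(site_a.overlaps(site_b))                   # False, so safe to link over a VPN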

A well-implemented DNS (as I wrote about earlier) will mask your numbering for day-to-day tasks, DHCP keeps track of the pool of assigned addresses, and if you're managing a larger network with different WAN links and routers you should be documenting it in a coherent design anyway, so there isn’t much of a reason not to pick something less common.

Previous: 1. Understand DNS
Next: 3. IPv6 is Coming

Wednesday 30 December 2009

Getting IP Right in Windows: 1. Understand DNS

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should really be getting right! There are certainly more, but these stick out in my mind.

1. Understand DNS

IP addressing is a computer task, in that it involves computations on 32- or 128-bit integers. It’s only for our clumsy brains that dotted-decimal notation was devised (and no amount of fudging improves IPv6 legibility), and hostnames are simpler still, since they give us names to work with, easier on the grey matter.

DNS is the magic that translates between the two, and it is involved in almost every conversation and transaction on the LAN, and certainly on the Internet; perhaps not continually, but almost always at the start of a digital dialogue.
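As a two-line illustration of that translation, using Python’s standard socket library (the name and address here are placeholders, and the reverse lookup only succeeds if a PTR record actually exists):

    import socket

    # Forward lookup: hostname to IP address
    print(socket.gethostbyname('www.example.com'))

    # Reverse lookup: IP address back to a hostname (needs a PTR record)
    print(socket.gethostbyaddr('192.0.2.10')[0])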

Get to know the DNS namespaces:

  • Domains; In the first IP networks, the hosts of the entire fledgling ’Net were stored in one grand file, which simply couldn’t scale to today’s Internet, so the namespace is broken down into domains, with distributed control and querying. The servers at the top of this hierarchy are the most jealously guarded on the Internet, as a compromise causes outages and has serious security implications
  • Hostnames; Every box has one, and it helps to have a standard to make things easy to locate and avoid conflicts. Understand how they relate to the domain space – a common practice for differentiating the interfaces on a system is to register a different suffix on the hostname depending on the function (e.g. server1.mydomain.net, server1.backups.mydomain.net)

DNS consists of various record types, depending on the kind of information you’re looking for (there’s a small lookup sketch after this list):

  • A/AAAA; The simplest type of record, it maps a textual hostname to an IP address (IPv4 for A, IPv6 for AAAA). This is just the hostname portion, and is only useful in the context of a domain (see my later post on the evils of NetBIOS)
  • CNAME; This is an alias, useful for making an abstract name like www out of webserver1 for instance. Combined with round-robin DNS (multiple address records behind one name), requests for www can be spread across webserver1, webserver2 and webserver3 without your users ever knowing the difference.
  • PTR; This maps backwards, resolving IP addresses to names, and is useful for debugging or checking the identity of incoming connections
  • MX; Mail eXchanger records tell the world how to get mail to your domain, with rules on precedence and load-balancing
  • SRV; The newest in the family, and more and more widely used. A critical record for operation of your Active Directory domains and services, this record helps clients and servers figure out where they are on the network (sites), where the nearest service provider is (e.g. local domain controller) and even where to go if the local service is unavailable (e.g. lowest cost neighbouring AD site)
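A rough lookup sketch using the third-party dnspython package (pip install dnspython, 2.x); example.com is a placeholder and the answers will obviously differ for your own domain:

    import dns.resolver

    queries = (('example.com', 'A'),
               ('example.com', 'AAAA'),
               ('example.com', 'MX'),
               ('_ldap._tcp.example.com', 'SRV'))  # how AD clients find a domain controller

    for name, rtype in queries:
        try:
            for rdata in dns.resolver.resolve(name, rtype):
                print(rtype, rdata.to_text())
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(rtype, 'no such record')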

By getting into the habit of using Fully Qualified Domain Names (FQDNs) for your network activities, you’ll avoid common pitfalls such as failed (or mismatched) requests for a service in another domain and connections to the wrong interface, and it even helps you map out the network in your head.

Wherever possible (and it doesn’t work everywhere, sadly), use the User Principal Name (UPN) for your Active Directory logins and whenever you’re asked for credentials, instead of the old NT style, e.g.

Liam.Dennehy@leptech.lan – good!
LEPTECHDOM1\Liam – legacy, outdated. Not good!

All of this helps you put the resources in your Windows network in their correct place, avoids confusion and generally makes your life a bit easier in the long run.

Next: 2. Subnets and Private IP Space

Tuesday 22 December 2009

Putting an MCITP in its place

I have noticed that the new raft of credentials from Microsoft don’t necessarily make sense to folks, especially those who are already familiar with the old set of MCSE-type credentials. I mentioned to some friends that I've got the “new MCSE”; a lot of them got it, but it dawned on me that this is, in fact, a field of some confusion. A quick search on Google turned up one recurring gap: how this new credential relates to the ones we (certainly I) already know.

The point of any credential, be it Cisco, VMware, Microsoft or embroidery is to show to an external party that you are qualified in a particular field of endeavour. This was quite plain with Microsoft’s old regime, the Microsoft Certified Professional (MCP), and the Microsoft Certified Systems Engineer (MCSE), as well as the Microsoft Certified Database Administrator (MCDBA). However, as Microsoft branch out into new fields and offer solid, integrated products in fields not entirely related to Windows Server or SQL Server, the approach of cobbling together a new acronym for a new product or role is unwieldy – imagine the Microsoft Certified System Centre Engineer – MCSCE??

So, what is the transition?

MCP –> MCTS

The first Microsoft certification I got in 1997 was an MCP: Windows 95. This showed that, according to Microsoft, I was competent in installing, administering and troubleshooting Windows 95. I remember just how proud I was that day.

The problem, though, is that the term “Certified Professional” encompasses both the specific credential and the entire field of Microsoft-certified persons, so it is not entirely appropriate. “Technology Specialist”, on the other hand, clearly shows what the candidate is trying to demonstrate: that he knows his stuff on a particular product. This bit is key: a specialist in SQL Server configuration is not necessarily a specialist in database development or administration, and in larger organisations the roles are very clearly separate. The MCTS credential clearly segregates, say, an application server specialist who can administer web applications from the server network specialist who will hook it up to the various internal and external parties accessing it.

MCDST/MCSA/MCSE/MCDBA –> MCITP

A big failing of the old MCSE credential was the elective system. While Microsoft may introduce the idea in the future, I sincerely hope not as it adds doubt and confusion to the mix.

I hold an MCSE on Windows NT 4 (incorporating the MCP on Windows 95 I mentioned above). It included two “elective” exams from a list of many more, specifically TCP/IP Networking and Exchange Server 5.5. This means I need to explain to anyone asking just what kind of MCSE I’ve achieved. This was partially remedied in the 2003 track with an MCSE: Messaging credential, but no such moniker exists for a SQL Server specialist.

The phrase “Systems Engineer” was especially limiting, since it implies an ability to design and implement server infrastructure centred on Windows Server. While that is indeed my own focus, it is of little use to someone specialising in monitoring and management systems, or even the venerable desktop support guru. While the DBA and the Desktop Support guy had their own acronym (MCDBA and MCDST respectively), I certainly don’t want to have to memorise the ever-growing list as a hiring or support manager.

By asserting that someone is an IT Professional in a named field, it indicates a proficiency in a technology set rather than one product. It also narrows the competency; while an Enterprise Administrator demonstrates competency in designing and implementing infrastructure from SANs and Terminal Services down to the desktop, the Server Administrator credential is more focused on those with competency in Windows Server itself.

These credentials are not easy to come by, and are especially hard if the individual has no relevant experience in the real world.

While the plethora of MCITP credentials may seem like a dilution of the fairly focused MCSE, it offers the opportunity for many more product specialists to demonstrate their competency in their field, with a credential on a par with the more established Systems Engineer we’ve come to know.

MCM/MCA

Now we get to the good stuff. The Microsoft Certified Master and Architect credentials are not for the faint of heart or newbies. These intensive certifications are for those with five or more years’ experience leading complex design, implementation and migration projects and a demonstrated history as a technology leader and expert. Standing up in front of a panel of recognised experts purporting to know your stuff is a daunting proposition, probably even for a few of the members of the very panel you’d be standing before.

For anyone claiming to be hot stuff on the range of Microsoft products, services and solutions, this is where you should be aiming. If you’re already that good, convincing your company to stump up for the three weeks training in Redmond for the MCM should be no effort, and I look forward to getting to that level myself.

Tuesday 8 December 2009

The Bolt-on Operating System

For years I’ve wondered why on earth so much is crammed into the Windows base image. Sure, they’ve got a decade’s worth of hardware to support, and since hardware vendors don’t produce standardised, reusable driver code the way it happens in the Linux world, that is a significant factor in the bloat.

But one of the longest-running gripes I have with Microsoft’s OS offerings is that all manner of features are included that I don’t care about – and some that I am super-passionate about are just plain gone! The base installation of almost any major Linux distribution will include a lot of productivity tools, but leave out some others. It comes down to personal choice, but since all the software is free and available to install from the Internet, this is no inconvenience at all - assuming good Internet connectivity.

That last point is actually quite big: while the distribution may contain an impressive array of software, in the end I’m probably going to want something that didn’t ship on the CD. Can’t have it both ways, I guess.

But obviously, the last thing Microsoft would want is to not install something, then have you go back to the installation disc for the features you’re enabling. Of course, they STILL haven’t figured out that Unknown Device from Unknown Manufacturer isn’t a helpful message.

One of my recurring gripes (from before I started blogging, so I can’t really prove it) is that my Windows Server has a GUI. Seriously, I don’t want a GUI. I want my apps installed on my server, and the management interface installed on a workstation somewhere else. Or the way Linux handles it: X as a process in my own privilege space, launched just for me, with VNC or X forwarding from somewhere else. The problem of course is that almost every Windows application requires a GUI to install. Sure, MSIs offer silent installs, but so often these line-of-business apps don’t ship as a neat MSI or respect the silent option.
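For what it’s worth, when a vendor does ship a well-behaved MSI, the silent route is a one-liner; a hedged sketch in Python (the package path is hypothetical; /i installs and /qn suppresses the UI):

    import subprocess

    # Hypothetical package path; run elevated on the target server.
    result = subprocess.run(
        ['msiexec', '/i', r'C:\installers\lob-app.msi', '/qn'],
        check=False,
    )
    print('exit code:', result.returncode)  # 0 means the Windows Installer succeeded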

There are two things I’m getting at here:

First, even though the product is named “Windows”, and it’s grown out of a desktop graphical OS that pretty much reinvented the way we deal with computers (though much cred to Apple), I want a server. No people standing in front of it, so no pretty colours required. All the prettiness should be produced by the apps and displayed as IP packets. And on the same note, I see a bunch of sound drivers hanging around.

Well, Windows Server 2008 has the Core option, you tell me. It’s a good step in the right direction, but I was genuinely disappointed to see the command prompt surrounded by a window. MMC is still there. I had actually expected to see a text console only, no graphics. Quite simply, it’s a waste of resources. Any app, service or component worth its salt is manageable remotely, from simple DHCP up to complex SQL Server clusters. It’s also a danger: I recall at least one major outage at a previous company thanks to faulty graphics card drivers from <server vendor name censored>. And the GDI component, dating back to Windows 95, handles page and print rendering. Exploits in the 16/32-bit era and in the 32-bit era come to mind. That last one spans products over three release generations, all for a component that doesn’t belong there in the first place.

If I want graphics processing, print rendering and so on, then let me add it on later, the way I would add Ghostscript on Linux to make a PDF. That brings me to my second point, and the thrust of my argument: components that don’t belong.

The Sasser worm devastated computer estates around the world by exploiting code hooks in the LSASS.exe process, which handles security arbitration between requesting apps/users and security providers. Read the articles closely and you’ll notice that it is specifically a problem with code for dcpromo, the process that turns Windows Server into a Domain Controller. The applicable hotfixes patch the code on Windows Server.

On XP, the hotfix removes the code.

Just what was it doing there in the first place? I know the server and desktop products share a codebase, but this irks me. I’ve personally implemented (though not used) a hack to enable RAID on Windows XP, which officially doesn’t support it, since the RAID driver and all the GUI code are present but disabled. I suspect the same is true of the EFS code in all Home versions of Vista (since upgrades from Windows XP can read EFS-encrypted files just fine).

Windows Server 2008 requires you to specifically add features to your server before activating them, like Active Directory Domain Services (AD DS), and installations of SQL Server and Exchange (at least) check for updates to the installer before running, getting them closer to the model of Linux distributions – adding a feature from the online repository ALWAYS adds the latest version.

In the end, the development models of these two are very different, so I’m keen to see what further advances can be made on both sides. As always, security and functionality butt heads, somehow I end up with the headache…

Monday 7 December 2009

Virtualisation of basic services

So, virtualisation is everywhere. IT departments are running more and more of their workloads on virtual hardware, and in the data centre that’s a big thing. It’s an accepted fact that most servers in an organisation run at near idle all day long, and even core systems like directory and file servers barely break a sweat serving their users. For small and medium-sized entities (yes, they have datacentres, just not the raised-floor, air-conditioned ones the big guys have), all that idle, paid-for hardware is a problem.

I was asked a while ago why I enabled file compression on my laptop, and seriously considered it on file servers. A few years later another query was why I enabled full-disk encryption on the same piece of kit. Doesn’t it affect the responsiveness?

The simple answer is yes. And no.

CPUs, memory and disks today are so, so fast that there’s not much they can’t do for an individual user. When asked what laptop a person should buy, I recommend the pink one if it’s a girl asking, and jet-black for the guys. Seriously, laptops today can wipe the floor with a desktop from five years ago, and last I checked we’re still doing the same things as back then: browsing the web, writing emails, laughing at cats. If you’re a developer, gamer, or serious showoff, you probably have a dedicated desktop for the hardcore tasks, and the point remains: Moore’s law has seen to that.

Of course, anything you put in the path of data from the disk to CPU/memory is going to slow things down, but this comes back to the more important point: Do I notice?

When I’m running a compile job, chances are the disk’s head movements trying to find each source file and library are going to be the first bottleneck. Then, if it’s a biggie like mplayer, the CPU is going to be loaded for a while doing the compile. Encryption? Oh, I didn’t even notice it’s on.

This brings me to my point: with uber-powerful, multi-core servers as cheap as chips (e.g. HP ML110 G5, dual-core 2.8GHz Core 2, 1GB RAM, 250GB SATA for 350 ex tax, HP.nl website 2009-12-07), do I really need to treat these things like fragile porcelain ware?

So, given a department of 100 users in a remote office, where I need to provide local AD, file services, updates, VPN to head office, backups, what’s to stop me buying a cheap server, loading it up with RAM and disks and virtualising all functions in one (or a few), neat little grey cubes with a reputable brand name on the front?

Now here, I’m thinking of Linux as the hypervisor, but that’s my flavour of choice. Something as simple as AD is a good case: it’s by definition a replicated, resilient database, so a failure has little impact (clients just authenticate somewhere else), and it doesn’t do much during the day. It’s also advised to segregate roles on Windows servers, and for resilience perhaps have a second box if you’re paranoid. That’s one, maybe two little cubes that I can’t share for other functions, yet each is only going to consume a few gigabytes of disk space, and even less RAM. A whole cube, just for you? Share!

Linux virtualisation has come a long way, and I consider it solid, fast and, these days, wonderfully easy to administer remotely. KVM, which I use, runs guests as just another OS process, with no funky drivers or installations, and since it runs on a full (and standard) instance of Linux, all the management tools for networking, storage, logging and troubleshooting are available.

One simple example: since all traffic traverses the host’s virtual network switch, it can all be inspected by a packet sniffer as powerful as Wireshark. No drivers in Windows, no strange binaries. If everything’s virtualised on one box, ALL the traffic is visible for troubleshooting in one spot. Neat!
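A sketch of that idea using the third-party scapy package on the host (the bridge name br0 is an assumption about your setup, and sniffing needs root):

    # Requires root and scapy (pip install scapy).
    from scapy.all import sniff

    def show(pkt):
        # One-line summary per packet crossing the virtual switch.
        print(pkt.summary())

    # 'br0' is a guess at the bridge carrying the VM traffic; adjust to your setup.
    sniff(iface='br0', prn=show, count=20)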

Another example: consider three servers, each requiring 100GB of storage. That’s probably six disks (minimum) if you’re doing things right (RAID-1). On one hypervisor host, that’s four 100GB disks (RAID-5), or only two if you go the 500GB route (RAID-1, plus a spare I suppose). Now I’m also a fan of software RAID. My case above stands: there are other factors more likely to impact performance, not least the client’s ability to pull data fast enough from a file server.
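The back-of-an-envelope arithmetic, as a quick sketch with the numbers from the example above:

    def raid1_usable(disks, size_gb):
        # Mirrored pairs: half the raw capacity is usable.
        return disks // 2 * size_gb

    def raid5_usable(disks, size_gb):
        # One disk's worth of capacity goes to parity.
        return (disks - 1) * size_gb

    # Three standalone servers, each on a 2 x 100GB RAID-1 mirror:
    print(3 * raid1_usable(2, 100), 'GB usable from', 3 * 2, 'disks')  # 300 GB from 6 disks

    # One hypervisor host with 4 x 100GB disks in RAID-5:
    print(raid5_usable(4, 100), 'GB usable from 4 disks')              # 300 GB from 4 disks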

So now I’ve got one box with my local services running in VMs. Full-disk encryption? Well, Windows 2008 has BitLocker, with all the TPM and USB key requirements. How about logging in to the host remotely via SSH, one of the most bulletproof protocols for remote management, and enabling the encryption remotely, with a script? Not one byte in plaintext, not even the partition table. Break that, evil hardware thief!

Snapshotting? check
SNMP/performance monitoring and alerting? check
Hardware support? check
Virtualising legacy systems? check (especially MS-DOS on pesky 64-bit Windows)

Worth a think

The case for open-source virtualisation

I’ve been quite a keen follower of virtualisation since the early days of availability on commodity hardware. Obviously, big rigs like IBM, HP etc have been doing hardware partitioning since time immemorial, but I’m interested in the stuff that lets me run an OS concurrently with my main OS on a desktop.

To my mind, one of the big advantages of Windows (and to a lesser extent Linux) is the homogeneity of the OS on desktops and servers. That is, if I write an application, website, database etc on my Windows PC, compile and run it, it should deploy to my Windows Server without any changes. This is in stark contrast to the development model for older big-iron systems, where the development happened on the system itself, probably on a dedicated development partition, but quite certainly not on my Windows (OS/2, DOS etc) workstation without cross-compilation.

Nowadays, it’s taken for granted, but that’s always been in the back of my mind. With virtualisation, I can now deploy a test partition on a development server for Windows. It’s not just for development, but for testing too, and in my current path to get my Microsoft certifications up-to-date, it’s a godsend. I’ve got a fairly sizeable server that does almost all of my home tasks; file shares, email relay and filtering, VNC, photo sharing website, music streaming, proxying, VPN, and I’m working on VoIP. The thing is, it’s Linux.

I run Fedora. At one point I banished Microsoft from my home, just to see if it was possible. I wanted to accomplish as much of my home automation and services using open-source software as possible, so I put Fedora on my main server (it’s actually been there for ages), Fedora on my power laptop and Ubuntu on my lighter laptop. And hey-ho, it works!

Well, actually, my mileage varied. Ubuntu is just great, it works well with laptop hardware (especially the Intel graphics), wireless and sound. Fedora also just works, that is until I tried to virtualise on the laptop.

I had read about Xen some years back, and how it offered paravirtualisation on Linux. I tried to get that working, but once I figured out it interrupts communications with my Radeon chip (they claimed it wouldn’t) I dropped it. Enter KVM, stage right!

I’ve been a keen follower of AMD’s hardware virtualisation (Pacifica) since its inception. I’ve actually been a very big fan of AMD’s for some time, but that’s for another post. VMware and other players have been virtualising this hardware for some time, but the classic x86 architecture just doesn’t virtualise cleanly, meaning the host OS and hypervisor needed to do a lot more work than ideally required to keep up the illusion for the guest. VMware’s intellectual property in this regard is substantial, and for years they were the cut-and-dried leader in the field.

Qemu is a mature hardware emulator I’ve been using for years; it emulates all kinds of CPUs on multiple host OSs, but runs in user space (i.e. no kernel privileges, and a lot of context switching for privileged ops). With the kernel module (kqemu) loaded, things perform rather well indeed, but still noticeably slower than bare metal.

Hardware virtualisation (HVM) in Intel and AMD chips changed that: a lot of the grunt work, like intercepting privileged operations, can now be caught by the CPU and handed off to the hypervisor efficiently. In no time at all, the existing Qemu binaries were extended to make use of KVM, Kernel-based Virtual Machines. I’ve been a convert ever since.

By building on a well-established emulation platform like Qemu, with its excellent device support, KVM can run an impressive range of guest operating systems!

So why am I raving about this? Well, I often get asked why a Microsoft techie runs Linux. As I have previously stated, I tinker, and Linux offers me that chance. I get to play with RAID in granular detail. Layered on that is LVM, which virtualises storage. You don’t have to get that complex, but it’s the closest I can come to simulating a SAN in my own home, with resilient, abstracted hardware hidden from the VM.

I make and break bridges and virtual networks on the fly. I’ve got three gigabit Ethernet ports, and segregate them by function – to the point where ALL VM traffic is on a dedicated port, so that if a VM talks to the host it pops out through the physical switch and back in on the front interface.

Here’s the key thing: while all of this is probably available from other vendors, and some parts may even be free, I’m in control. Sure, the documentation can be sketchy, and it requires a lot more basic knowledge of networking, storage and hardware architectures than other solutions, but I’d hardly take my car to a garage to be serviced if I wanted to be a mechanic: I’d get my hands dirty. Break things, definitely, but that’s a great learning tool.

My power laptop used to break whenever I entered standby, and I mean trash the root file system (ext3, to boot, supposedly bullet-proof), so I filed a bug. As it turns out, it only happened when I was running HVM, so I applied a patch, published 30 days earlier, to my kernel. Voila!

Now this is a big thing! Without being a paying customer of Red Hat’s, I got a problem resolved quickly and comfortably. I must stress here that Fedora is BETA software, permanently, and I expect it to break, but frankly for my needs it’s just fine, and when I can get things like this resolved, even better.

Sure, it’s painful, but it’s all under my control. I’ve scripted Windows 2003 deployments with snapshots that boot a new instance, sysprepped and ready to go, in 7 minutes. IPTables lets me simulate complex networks, firewalls and even lossy links.

I remain firmly impressed!