Tuesday, 12 January 2010

Utility as a Learning Tool

IT, as with other high-skilled vocations, requires a constant cycle of learning and certification if you are to attain, retain and prove your skills. Each practitioner I know has specific competencies or a bias towards particular product sets, architectures and vendors, so naturally we keep up with the latest developments and new releases.

One of the recurring problems engineers have is getting to grips with the ins and outs of a new product. Supporting the infrastructure for large organisations requires a lot of time to explore the feature sets (especially as they contrast with the vendor’s stated features), their utility, recoverability, capacity and so on. This is quite entertaining in itself; incremental product releases tend to build on the feature sets of earlier versions, and if the vendor is any good they’ll provide good documentation and training for the upgrade path.

The one aspect I have problems with is a brand-new product set from a particular vendor. Headline products for databases, messaging and operating systems are very infrequently created from scratch, but the myriad supporting products and protocols are under constant evolution. Quite often they are aimed at improving a small part of a whole, and the learning path can be intriguing at best.

I enjoy getting to grips with new products, but one can only go so far without a goal. My biggest problem with PowerShell was always that it was a reinvention of what most administrators were doing just fine with other technologies. Granted it’s streamlined and feature-rich, but as a fairly hefty departure from Windows command-line scripts or even Windows Script Host, the clear need to adopt it wasn’t ever really there while the learning curve was rather steep. This is a big problem; I knew I needed to learn it, but without a problem to solve the effort required didn’t seem to match the reward.

I’m not the sort of person who will gladly sit through pages of manuals or RFCs to understand a new product or protocol. I’m much more hands-on; my personal systems include a myriad of products that I never imagined I’d come to rely on, most of which started out simply as attempts to understand how they work. Now that I do depend on them, I have bumped into each of the nasty bugs and side-effects they present, as well as discovering both features that are not in the headline literature and uses that even I hadn’t anticipated when I set out. If you build it, they will come.

And this is the focus of my argument. Pure learning, whether theoretical or practical, has no use in and of itself. Only when technologies are applied do they have value. Storage Area Networks (at least before iSCSI), systems monitoring, ERP applications and even the larger database configurations are beyond the need of the average technical user, and require hardware that would normally never exist outside of a raised-floor, fluorescent-lit data centre. Yet almost every technical enthusiast and support professional I know has some form of lab at home to explore these technologies and the products that offer them.

In my own explorations of technology, I have become decidedly indifferent to the specific products I am evaluating, since they come and go. I am vastly more interested in what’s going on under the hood, since these implementations are much more stable across product versions than the latest trend in user interfaces that mark the biggest visible change in product releases. Using open-source software I’ve been able to emulate and get to grips with almost all of the concepts used in large infrastructure installations, from SANs to firewalling to virtualisation to build and deployment to robust databases, and all in a single server. If my employer found it expedient to spring for a lab to learn about the proprietary equivalents, it would cost in the region of thousands to tens of thousands of dollars, and still I’d only learn more about how to perform specific tasks rather than understand the deeper concepts. Of course, YMMV.

So how do you keep up-to-date with current products, which come and go, while still learning skills you can use professionally? Well obviously the specifics of any implementation are important, and getting to know the interfaces, procedures and maintenance of the products is critical if you’re in the support role. But branching out into other vendors can bring a much deeper understanding of the underlying principles and methods than just installing the latest shiny package. If you are fortunate enough to have an employer that has a good lab, schedule a few hours a week for tinkering, and write up the results of your evaluations.

The modern workplace (especially in IT) is less about protecting your job by hiding what you know than in previous decades, and if you can demonstrate your ability to command a new approach, sharing your experiences can only do you good. I’ve found open IT departments and companies that constantly, and critically, evaluate themselves and the ecosystem they work in are definitely more productive and rewarding places to work.

Sunday, 3 January 2010

Getting IP Right in Windows: 5. NAT is not a Firewall

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should really be getting right! There are certainly more, but these stick out in my mind.

5. NAT is not a Firewall

Here’s the part where I put on my flame-resistant suit. I know this is divisive, so let it be known this part is entirely my opinion :)

NAT was devised as a mechanism for hosts on networks with incompatible routing structures (either overlapping network numbers or private RFC 1918 addresses seeking Internet connectivity) to have their addresses transformed into something more palatable. This happens every day in millions of home and corporate routers and firewalls, allowing millions more computers to consume Internet services without consuming the Internet’s most precious resource – global IP addresses.

Since these private networks use IP space that cannot be Internet routed, they are translated on the fly to, typically, one address which is what the destination sees as the source, while the router/firewall maintains a mapping of who asked for what from where, so that replies make it back to the requestor. If a packet arrives that has no apparent previous relationship to an internal host, it is dropped. In this way, NAT is an implied firewall, dropping unsolicited packets from the nasty Internet. Of course, if we need, say, HTTP or VoIP to be let in, we poke some holes and make exceptions.
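The connection-tracking behaviour described above can be sketched in a few lines of Python. This is purely an illustration of the concept – the class name, addresses and port numbers are all invented for the example, not taken from any real router:

```python
# Minimal sketch of a NAT mapping table: outbound flows are recorded so
# replies can be translated back; inbound packets with no prior
# relationship to an internal host are dropped.

class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.out = {}   # (src_ip, src_port, dst_ip, dst_port) -> public port
        self.back = {}  # public port -> (internal ip, internal port)

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Translate an outgoing packet, creating a mapping on first use."""
        key = (src_ip, src_port, dst_ip, dst_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = (src_ip, src_port)
            self.next_port += 1
        # The packet now appears to come from (public_ip, mapped port).
        return self.public_ip, self.out[key]

    def inbound(self, dst_port):
        """Return the internal host for a reply, or None: unsolicited -> drop."""
        return self.back.get(dst_port)

nat = NatTable("203.0.113.1")
pub = nat.outbound("192.168.1.10", 51515, "198.51.100.7", 80)
print(pub)                 # ('203.0.113.1', 40000)
print(nat.inbound(40000))  # ('192.168.1.10', 51515) -- reply comes back
print(nat.inbound(12345))  # None -- no prior relationship, dropped
```

The last line is the whole of NAT’s “implied firewall”: an unsolicited packet simply has nowhere to go.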

Precisely because this is an implicit form of security, it is dangerous. Security is all about paying attention: making sure we understand how a threat can enter a network, how the people are affected (or are risks themselves), what systems are vulnerable and how to defend them. Defense in Depth, an NSA-derived concept, is all about layering security at different points in the network to increase the overall robustness.

Yet so often, NAT is simply assumed to be a line of defense. True, unsolicited traffic is bounced, but this causes problems for traffic like FTP (unless the firewall has application-layer awareness) and VoIP, whose Session Initiation Protocol has a rough time with NAT. Why then is the security only played out one way?

A commonly portrayed threat is of a trojan application or other type of malware being installed on your computer, scanning for personal data like credit cards and bank statements then uploading them to the nefarious source. NAT, in assuming that your network is the safe place and the Internet bad, gladly allows the outbound traffic through without question, and bang goes your credit rating.

IPv6 makes the need for NAT moot, since the address space and allocation policy should allow everyone to hold their own huge chunk of the address space with Internet-valid addresses. I haven’t yet seen a convincing argument why NAT should live on in an IPv6 world.

While NAT does indeed provide a great amount of protection, blindly assuming that it makes you safer is missing the point. IP is a versatile protocol suite, and the fact that NAT is so readily implemented proves it, but without a little attention, you’re letting your router vendor dictate how your network is protected.

Recent versions of Windows include a host-based firewall, allowing each device to control what traffic is allowed to arrive at the network interfaces, and even what traffic is allowed out. Get to know the workings of the firewall and how to define the rules that are appropriate for your environment, including specific applications and how they communicate. Unfortunately, a lot of the protocols used on Windows tend to negotiate dynamic ports for communication, but since the firewall is also application-aware (specific executables are allowed to communicate instead of simply this or that port), it is a fairly easy task to secure your Windows hosts from a lot of the prevalent threats.

Enterprises know this and carefully craft the types of traffic that are allowed in and out of the network; with a little thought, your networks can be secure, responsive and available.

Previous: 4. Disable NetBIOS

Saturday, 2 January 2010

Getting IP Right in Windows: 4. Disable NetBIOS

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should really be getting right! There are certainly more, but these stick out in my mind.

4. Disable NetBIOS over TCP/IP (NBT)

The first network I ever configured around 1996 used the NetBIOS Extended User Interface (NetBEUI) protocol, and worked fantastically on a Windows 3.11 or 95 computer with 4MB RAM, happily fetching my files on my LAN and helping me (virtually) shoot my friends. Locating the file server (or peer) was accomplished using broadcasts, routing wasn’t an option and I had absolutely no need to talk to anything but other Windows devices, which was fine.

These days, I expect to be able to retrieve 4MB per second on my LAN, probably more; my computer regularly sends packets destined for servers thousands of miles away running who-knows-what, and modern network topologies would have baffled me back then. Microsoft has gone to great lengths to make sure every product of theirs, and the supporting services for applications, is fully transitioned to TCP/IP, and yet NetBIOS is still in there, broadcasting the names of my computer, domain and the servers back at the office to all and sundry, just in case.

Turn it off!

There is a minor security concern that these broadcasts advertise, to everyone on whatever LAN you’re plugged into, where you work, what version of Windows you’re running and so on, and there have even been mutterings of an exploit or two, but the threat is not significant.

NetBIOS advertises the hostname of a service, be it a file share, chat endpoint or workgroup, in a 16-byte field, with the last byte reserved for the service type (e.g. 00 for the Workstation service, 03 for the Messenger service, 20 for a File Server). From this, we’ve inherited the hideous 15-character limitation on hostnames and domains. Now I’m not advocating long hostnames as a rule, your naming system should be concise and accurate, but just as 8.3 filenames giving way to 255 characters in Windows 95 freed us from ever-more cryptic shorthand, this is a system that is long past its shelf date.
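The 16-byte record described above is easy to sketch in Python. The function and constant names here are my own invention for illustration; the suffix values are the well-known ones:

```python
# Sketch of a 16-byte NetBIOS name record: up to 15 characters of name
# (upper-cased and space-padded) plus one suffix byte for the service type.

def netbios_name(host: str, suffix: int) -> bytes:
    if len(host) > 15:
        raise ValueError("NetBIOS names are limited to 15 characters")
    return host.upper().ljust(15).encode("ascii") + bytes([suffix])

WORKSTATION = 0x00   # 00: Workstation service
FILE_SERVER = 0x20   # 20: File (Server) service

record = netbios_name("fileserv01", FILE_SERVER)
print(len(record))   # 16 -- the record is always exactly 16 bytes
print(record[-1])    # 32 -- i.e. the 0x20 file-server suffix byte
```

The hard 15-character cap falls straight out of the fixed-width field: there is simply nowhere to put a sixteenth character of name.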

The short hostnames are a bother, but the biggest evil of NetBIOS (specifically NetBIOS over TCP/IP, or NBT) is that it hides mistakes. If your DNS is functioning improperly, a NetBIOS Name Service (NBNS) broadcast or Windows Internet Name Service (WINS) query picks up the slack, either by asking everyone on the network in the hope that the right node will respond, or by forcing you to rely on the WINS service, which is steadily being deprecated by the folks at Microsoft.

Do yourself a favour, disable NetBIOS over TCP/IP (NBT) on every interface of systems in your lab and home from the word go. If you’re doing labs for training, make this part of the base install, or include it in your domain policy. Of course, for your company network run this through your testing process first. You may spend some time fixing the problems that crop up, but like me you’ll be quite surprised just how much you were depending on it in the first place.

Previous: 3. IPv6 is Coming
Next: 5. NAT is not a Firewall

Friday, 1 January 2010

Getting IP Right in Windows: 3. IPv6 is coming

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should really be getting right! There are certainly more, but these stick out in my mind.

3. IPv6 is coming

If you haven’t already started looking at IPv6, you should. Even though there are billions of valid IPv4 addresses, a lot are wasted by the way they’re carved up, so there won’t be enough to go around. The predictions of doom get revised by the week, but at the very least the protocols themselves are long overdue for a makeover, and you should get ready sooner rather than later.

IPv6 includes some considerable improvements, the most obvious and famous of which is the gargantuan address size, so big we have to dumb it down to images like addressing every grain of sand on every beach on the planet.

The big benefit here is that address spaces virtually as large as the entire IPv4 space can be assigned to single countries, and over-provisioning of the space is a key factor in deciding how to carve it up. Internet routers have a lot of work deciding which of the myriad paths is right for traffic, and by dividing the space into these huge units, the routing tables can become much, much smaller, allowing the Internet to continue its amazing rate of expansion.
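The scale of this carving-up is easy to play with using Python’s standard ipaddress module. The prefix lengths below (a /32 allocation split into /48 site prefixes) are illustrative values I’ve chosen for the example, not any registry’s actual allocation policy:

```python
# Hierarchical carving of IPv6 space: a single /32 allocation holds
# 2**96 addresses and splits cleanly into 65,536 /48 site prefixes.
import ipaddress

allocation = ipaddress.ip_network("2001:db8::/32")   # the documentation prefix
print(allocation.num_addresses == 2 ** 96)           # True

sites = allocation.subnets(new_prefix=48)
print(next(sites))    # 2001:db8::/48
print(next(sites))    # 2001:db8:1::/48
```

Because allocations nest on clean bit boundaries like this, a backbone router only needs one route for the whole /32, however many sites sit behind it.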

But the address space is only one of the improvements. Considerable work has been done to ensure IPv6 networks just work. Among these innovations are link-local addresses, stateless address autoconfiguration (a DHCP-like mechanism) and Router Solicitation. The task of configuring your devices has been moved from your centralised or distributed DHCP server to the devices that know your network the best: your routers.
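The textbook way a host builds its own link-local address (the modified EUI-64 method from RFC 4291) can be shown in a few lines: take the interface’s MAC address, flip the universal/local bit, insert ff:fe in the middle, and prepend fe80::. Note that modern Windows defaults to randomised interface identifiers instead, so treat this as the classic derivation rather than what you’ll always see on the wire:

```python
# Derive a link-local IPv6 address from a MAC address via modified EUI-64.
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                        # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe mid-MAC
    iface_id = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | iface_id)  # fe80::/64

print(link_local_from_mac("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```

No server is consulted at any point – the host mints a working on-link address entirely from information it already has, which is exactly the “just works” quality described above.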

IPv4 evolved from the first networks mostly when 256kbps was FAST! The protocols have been extended and augmented with things like Quality of Service, IPSec and all kinds of other solutions for secure (and plain) tunnelling. This has resulted in a confusing array of features and incompatibilities.

IPv6 includes a lot of these as standard (IPSec support is now mandatory), and improves on others. QoS is vitally important for letting your routers know that your VoIP conversation is much more important than downloading your iTunes purchase, and IPv6 handles these decisions much more intelligently and consistently. In IPv4, each layer of a packet (the IP header, the TCP/UDP segment and frequently the application data itself) carries its own checksum to detect errors; IPv6 assumes these problems will be detected elsewhere in the protocol stack and does away with the IP-layer checksum entirely, further increasing speed.

You should even be able to request addresses for your entire organisation that are all Internet-valid, doing away with private (RFC 1918) addressing (as I mentioned in my previous post here). How organisations deal with the change is still to be seen, but I sincerely hope NAT dies the death it deserves. More on this in my later article, NAT is not a Firewall.

Not all ISPs route or offer the protocol yet, nor do most Internet services, so don’t expect your Internet connection to be switched over any time soon. But versions of Windows from Vista and Server 2003 onwards (XP and 2000 have limited support) include IPv6 out of the box, running happily alongside the IPv4 stack, so you’re free to experiment and explore.

These are challenges you’ll be facing before long, so getting to grips now is well worth the effort.

Previous: 2. Subnets and Private IP space
Next: 4. Disable NetBIOS