Wednesday 24 November 2010

What is a Read Receipt?

I saw a posting on GMail's messaging forums today about the apparent lack of any facility to generate a Read Receipt. I felt compelled to reply, and I thought I'd post it here too, as it's not quite as obvious as it might first seem. Google is moving more and more into the enterprise, and this is a standard feature that is just plain missing, so naturally it gets noticed. To me, it's more of a social challenge than a technical one...

A Read Receipt is quite the misnomer: the best you can say is that the message was shown on a screen. Which screen, though? GMail is accessible through the webmail interface, POP3, IMAP, mobile devices (the native GMail client, Exchange Sync), the mobile web... what exactly are you trying to prove happened?

I've found Read Receipts to be near useless, while Delivery Receipts are generally poorly implemented and probably achieve the same thing: some systems will say that they've taken receipt of the message into the system, but what you actually want to know is that it has successfully arrived in the user's mailbox. Signing for a courier delivery is exactly that, but you'll never know if the recipient ever opened the package.

If you're so keen to know whether the person opened the mail, ask them to confirm receipt manually. Having a computer tell you that someone read something is at best trivial and at worst misleading. For a one-line e-mail, this facility may be useful and mostly true: the recipient is likely to have absorbed it. For a 1000-word essay, what you really want to know is not just that it was displayed on some screen (and reading that much on a mobile is pointless anyway), but rather that the end-user absorbed everything. Only the user can attest to that.

On principle, keep read receipts out of GMail (or any other provider, even Exchange!), and these problems go away. Just ask for a reply confirming that your message has been received, understood and/or acted upon. Much more useful.

Friday 24 September 2010

I'm very disappointed to report that it works

A friend of mine was recently reviewing a friend's CV, and noted he had an ethical hacking qualification. I found this rather amusing, since I know there are a gazillion unqualified, uncertified hackers (and simply curious people) whom I'd much, much rather have on some kind of radar. It reminds me of the DRM/DMCA debate: it's the ones not following the rules that tend to get the benefit, or at least know enough about the rules not to care about enforcement.

I have been having a series of discussions with a few security specialists in the last two weeks, and they've put a few seeds in my brain. I've reviewed a few articles about application vulnerabilities, and with more and more of the world moving into "the cloud" (btw, I hate that phrase) we're handing over more control to this nebulous entity. Google are definitely at the front, at least as far as end-user experience goes. Heck, I'm hosting this very blog on Google's servers, and neither know nor care where they are.

I also recently acquired an Android phone, and allowed Google's hooks into my life to sink just that bit deeper with integrated messaging, contacts, calendaring, apps I really don't need, Facebook on-the-go, Twitter... it's all cloud! I left my laptop at a friend's house recently, and realised (to my own shock) that frankly, I can live without it for a day or two, such is the functionality in this great new device.

So while all the focus is off in the cumulo-nimbus, I'm still dealing with daily life that's hosted and automated on some providers that are definitely well-defined. My bank is one of them, and as an extra I do some share dealing with their attached brokerage. Side note: three years ago I had spare cash and thought "what's safer than banks?".

Today, I placed an order to sell a few shares and received an order number. I recalled a conversation with one of these specialists about session identifiers, where we discussed collision avoidance and non-sequential allocation as two good markers for session tracking. On a whim, I took the URL generated to view the details of my transaction and incremented the trade identifier by one.

Lo and behold, I got the details of someone else's trade. One thousand shares of an oil company, concluded around the same time as mine. Alarmed, I did it again, this time decrementing (I'd hit an upper bound), and found an incomplete trade: an order, as yet unfulfilled, awaiting the conditions set by the initiator. Now granted, I couldn't see the identity of the trader (in either case), so perhaps on the surface not such a big deal. But if you know dealing, complete trades are not so significant because they are done and dusted, while incomplete trades show intention. Script this query for current and future IDs, and you could get a feel for investor sentiment that gives you an advantage.
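The fix is exactly the pair of markers from that earlier conversation: references that are non-sequential and collision-avoidant, plus a server-side check that the logged-in account actually owns the trade it is asking about. As a minimal sketch of how cheap unguessable references are (an illustration only, not how the brokerage does it):

    # either of these produces a reference that can't be walked by adding one
    uuidgen -r               # random (version 4) UUID: 122 bits of randomness
    openssl rand -hex 16     # 128 random bits as 32 hexadecimal characters

Randomness alone isn't security, of course; the server still has to refuse to show trade N to anyone but its owner.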

I've been using this particular trading system for years, and regardless of the losses I've made (seriously, I have no aptitude for this) I thought of the security measures as fairly robust: SSL encryption, separate login and dealing passwords that are never revealed in full, limits on trading volume by account type and history, approved browser versions only. How easily we are placated.

Handing over so much of our personal info to the cloud, at least the free part of it, scares me, though I'm conscious of the fact that free products are, in the sage words of my father, worth what you paid for them. Paid services may not fare much better; by abstracting services into this fog, we run the risk of losing touch with how services are delivered, how we control them, and what we stand to lose if it all goes wrong.

But most of all, they're still built and run to the same rules as traditional systems, no matter how abstractly they're presented. The same DBMSs, the same web servers and runtimes, the same developers and critically the same developer mentalities.

A sobering lesson indeed.

Oh, and yes I raised this with the brokerage concerned. Does that get me the ethical badge?

Friday 10 September 2010

Security by Default in the Defined Domain

Some simple security concepts are tougher for engineers to grasp than they should be. A pervasive view I've found is that firewalls, security policies, LAN partitioning and an excess of routing just get in the way of making systems work.

Now certainly, large enterprises, and especially security-sensitive ones such as financial services, R&D and law enforcement that require secure access to and transmission of data, demand a higher level of attention to detail when designing and operating their systems. Most engineers cut their teeth building systems in the privacy of their homes or labs, neither of which can match the scale and complexity of real-world systems spanning continents, regulatory domains and, most importantly, untrusted links. When learning and experimenting, the engineer tends to be in control of everything.

Secure authentication is enabled by default in Windows, with Kerberos providing one of the most resilient systems available, and AD integration makes expansion and administration a breeze. But most of the remaining protocols don't pay it nearly as much attention, starting with the most pervasive - CIFS (Windows File Sharing). Even RDP starts off with strong authentication and then, unless you enforce TLS, drops to much weaker protection for the stream of keyboard input and screen updates.

I'm a big fan of IPSec, since it allows for transparent encryption and authentication without having to change much of anything in your applications or network layout. The problem comes in getting all your nodes to play nice, since it does require some configuration. Windows domain policies and domain membership vastly simplify this - if your system is a domain member, enabling opportunistic IPSec encryption is a breeze.
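In practice you'd push this out through Group Policy, but the single-host equivalent gives a flavour of how little is involved. A rough sketch for Vista/2008 and later (the rule name is made up, and the default authentication uses the machine's Kerberos credentials, which is precisely why domain membership makes this easy):

    netsh advfirewall consec add rule name="Opportunistic IPSec" endpoint1=any endpoint2=any action=requestinrequestout

The request/request pairing means peers that can't do IPSec still talk in the clear, which is what makes it opportunistic rather than mandatory.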

This is the "Defined Domain", a set of systems over which you have complete control. A visitor to your network would not have these policies defined, so their device would not be able to participate, but placing them in a guest LAN and tunneling the connection at the router over a secure connection in Tunnel mode can help solve that problem, but it can quickly spiral out from there - how do you get simpler devices and protocols like SNMP and even PING to cooperate?

The Defined Domain needs to incorporate not just the systems you're protecting, but also how and why. Secure protocols aren't that hard to come by these days (e.g. SNMPv3 incorporates some measure of encryption, SSL is available in all major web platforms), but they still require some configuration and attention. Applications need to be designed with at least some awareness of the security domain if they are to be trusted, and administrators and designers most of all need to keep this in their heads as they go about their work.

The biggest threat to any security domain is, and will always be, the human factor. Packaging security up so that it is easy to use, and often transparent, can leave the average engineer oblivious to the considerations required to make systems truly secure. Yet again, ease induces laziness, and the worst kind of security is the kind you can't (or won't) verify. It took about two hours of use before a colleague of mine noticed that his newly-secured RDP session (certificate-based authentication) placed a new padlock symbol on the control bar of his Terminal Services Client. Until then, the idea that the connection was any more secure was unverifiable to him, and ultimately irrelevant.

Attention is required, and a lot of education. No one security solution is a panacea (everyone kept screaming certificates at me as a one-size-fits-all solution - no, no, no), and this will always be the problem.

At least in a defined domain, it's more manageable, more approachable. And then along come the users...

Wednesday 18 August 2010

Are YOUR hard drives noisy enough?

A conversation with a colleague presented an interesting question: Which hard drives are the quietest?

Now there are excellent resources online like Silent PC Review and QuietPC for finding the right components to build Home Theatre PCs. But this got me thinking, are my hard drives too quiet?

A lot of parameters are accessible on the firmware of modern hard drives, including readahead optimisations, cache policies, as well as monitoring options for errors, failures, temperature and even how many times the drive has been switched on. Linux distributions provide the smartctl utility for retrieving the monitoring variables, and the hdparm tool for setting all kinds of parameters, one of which is quite interesting...

The "-M" option sets the Automatic Acoustic Management level, which in most disks is one of OFF, QUIET and FAST, corresponding to integer values of 0, 128 and 254 respectively. Now the system I'm typing this on is my laptop, and I don't want it clunking away while I work (well, apart from the keypresses required to type this article). So, I set it to QUIET (128).

But I have a server with four disks, two 750GB and two 1.5TB, running my server applications (file shares, telephony, media streaming etc) and, crucially, my virtualised lab, which tends to be disk-heavy, especially at high concurrency when each OS instance thinks it has exclusive control over its volumes and optimises access accordingly. This server used to be located in my office alongside my desk, and yes, it got a bit clattery. Now it's in another room tucked under a cupboard, very headless. What do I care about noise?

Now, I could do a big benchmarking experiment, but this should give a reasonable first glance. Yes, the system is doing other things, but since the CPU sits in a low-power state 97% of the time, the load isn't that high anyway and is probably negligible to the result.

Doing a simple read of one gigabyte from the RAID-5 array, at different offsets to remove cache interference, shows a remarkable difference. Reading a 1GB data segment from the logical drive (under the filesystem) increases throughput from 147MB/s to 220MB/s when the acoustic mode is set to FAST.
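The read test itself is nothing fancy - something along these lines, assuming the array appears as /dev/md0 (adjust the device name, and the skip offsets, which are only there to defeat caching between runs):

    # 1GB sequential read, starting 8GB into the device
    sudo dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=8192
    # toggle the acoustic mode, then repeat at a different offset
    sudo dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=32768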

On the filesystem itself (ext4, defaults), extracting the latest Linux kernel source from a tar.bz2 file found on kernel.org more than halves the duration, from 3:03 to 1:30. Reading the resulting directory tree with `ls -lR` improves from 10s to 5s.
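Timed with nothing more sophisticated than the shell's built-in timer (the tarball name is whichever release you happen to grab):

    time tar xjf linux-2.6.35.tar.bz2        # extract: ~3:03 on QUIET vs ~1:30 on FAST
    time ls -lR linux-2.6.35 > /dev/null     # walk the tree: ~10s on QUIET vs ~5s on FAST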

In the end, it probably makes little difference to my day-to-day tasks, but optimisation is central to any real techie's heart.

How to set or measure these features in Windows? No idea.

Friday 7 May 2010

Multihomed DNS and how Windows makes you lazy

I was called in to solve an interesting problem today. There's a good principle in effective IT security of segregating different services into LANs appropriate to their function. If a server provides more than one function, then more than one NIC (or perhaps a VLAN interface) is required, each with a unique IP address.

Windows is a great tool, a platform of continual development (and yes, that means it gets better, so don't think I've always found it to be great) over the last two decades that now runs a fair chunk of global business, and in some demanding environments. One of the simple beauties of the platform is the unified codebase, libraries and APIs from the smallest XP Embedded right up to Windows Server Datacentre Edition: The machine I'm developing on is almost identical, apart from scale, to the machine I'm likely to deploy on.

Yes, there are other differences, but I find most of them to be paid features like clustering and more speed (I can't do it captain). The hardware abstraction layer and other consistencies like the IP stack, filesystems and memory management are wonderful tools for developers. Unfortunately, so many admins cut their teeth on Windows desktop editions, or at least smaller servers under their absolute control, that they struggle to make the transition to enterprise administration.

My previous rant about NetBIOS is a case in point. With all this abstraction, details like network interfaces and network service location are so well hidden, they're essentially invisible. Ever try to catch the Invisible Man to ask him what he's doing?

The problem I had to solve today was around multihomed servers. Windows IT admins tend to be lazy, and NetBIOS broadcasts are only one example of how we rely on the wizardry of the OS to figure out what we're trying to do and make it happen. Dynamic DNS registration removes some of the tedium and mistakes from the process of getting systems deployed, but blindly assuming it knows what you want is just wrong.

The convergence of an Active Directory domain and the DNS namespace is a nifty feat, but in multihomed systems it's a nightmare. If all interfaces are routable and reachable, then this is slightly moot, but put up a firewall or routing restriction in the way and intermittent problems (the worst kind) crop up, and troubleshooting without a solid foundation in networking is tough. DHCP, DDNS, NetBIOS, even APIPA, all seek to hide the complexity from Windows admins, and they end up woefully underskilled in the cornerstone that makes their network tick.

The problem in this instance is that the FQDN of the server is composed of the hostname and the AD DNS name. No problem for a typical, single-NIC server. Unfortunately, this is an abstraction when it comes to multiple NICs: just how is DNS supposed to know what your topology is when giving you an answer?

Trying to convince a religiously Microsoft admin to use a subzone to specify the interface is met with something approaching heresy. Do a traceroute (or tracert for the Windows guys) to any Internet address and you'll see the FQDNs of routers along the way, with the hostname portion wildly different from hop to hop, often including groups of digits. Most of these are Internet routers, and the DNS entries correspond to the interface rather than the router itself.

Of course, Windows has a mild cow if you try to refer to it by anything other than the system name as the first part of the FQDN, and it always expects all interfaces to be present under the machine's DNS suffix. The best solution...

Change the way you think about finding servers. When you're connecting, you're probably interested in a particular interface anyway. Some services may not even be listening on particular interfaces. Getting your brain tuned to how your network is built, using that to figure out how your systems are connected, and habitually spelling out exactly which way you want to connect by an explicit FQDN can only do good.
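A sketch of what that looks like in practice, with invented zone, interface and address names: stop the secondary NICs registering themselves, then publish each interface explicitly under a subzone that says what it's for:

    rem on the server: point the backup NIC at a DNS server but stop it registering itself
    netsh interface ip set dns name="Backup LAN" source=static addr=10.0.0.53 register=none

    rem on the DNS server: one record per interface, named for its role
    dnscmd /RecordAdd corp.example.com fs01.prod   A 10.0.1.10
    dnscmd /RecordAdd corp.example.com fs01.backup A 10.0.2.10

Connections then go to fs01.prod.corp.example.com or fs01.backup.corp.example.com, and the name itself tells you which path the traffic should take.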

Of course, some applications take it as read that a server is reachable by the short FQDN. Sometimes system admins can be even more hardcoded. Both are very, very difficult to change.

Tuesday 12 January 2010

Utility as a Learning Tool

IT, as with other high-skilled vocations, requires a constant cycle of learning and certification if you are to attain, retain and prove your skills. Each practitioner I know has specific competencies or bias towards particular product sets, architectures and vendors, so naturally we keep up with the latest developments and new releases.

One of the recurring problems engineers have is getting to grips with the ins and outs of a new product. Supporting the infrastructure of large organisations requires a lot of time to explore the feature sets (especially as they contrast with the vendor’s stated features), their utility, recoverability, capacity and so on. This is quite entertaining in itself; incremental product releases generally build on the feature sets of earlier versions, and if the vendor is any good they’ll provide good documentation and training for the upgrade path.

The one aspect I have problems with is a brand new product set from a particular vendor. Headline products servicing databases, messaging and operating systems are very infrequently created from scratch, but the myriad supporting products and protocols are under constant evolution. Quite often they are aimed at improving a small part of a whole, and the learning path can be intriguing at best.

I enjoy getting to grips with new products, but one can only go so far without a goal. My biggest problem with PowerShell was always that it was a reinvention of what most administrators were doing just fine with other technologies. Granted it’s streamlined and feature-rich, but as a fairly hefty departure from Windows command-line scripts or even Windows Scripting Host, the clear need to adopt it wasn’t ever really there while the learning curve was rather steep. This is a big problem; I knew I needed to learn it, but without a problem to solve the effort required doesn’t seem to match the reward.

I’m not the sort of person who will gladly sit through pages of manuals or RFCs to understand a new product or protocol. I’m much more hands-on; my personal systems include a myriad of products that I never imagined I’d come to rely on, and most of them started out as nothing more than attempts to understand how something works. Now that I do depend on them, I have bumped into each of the nasty bugs and side-effects they present, as well as discovering both features that are not in the headline literature and uses that even I hadn’t anticipated when I set out. If you build it, they will come.

And this is the focus of my argument. Pure learning, whether theoretical or practical, has no use in and of itself. Only when technologies are applied do they have value. Storage Area Networks (at least before iSCSI), systems monitoring, ERP applications and even the larger database configurations are beyond the needs of the average technical user, and require hardware that really belongs in a raised-floor, fluorescent-lit data centre. Yet almost every technical enthusiast and support professional I know has some form of lab at home to explore these technologies and the products that offer them.

In my own explorations of technology, I have become decidedly indifferent to the specific products I am evaluating, since they come and go. I am vastly more interested in what’s going on under the hood, since these implementations are much more stable across product versions than the latest trend in user interfaces that mark the biggest visible change in product releases. Using open-source software I’ve been able to emulate and get to grips with almost all of the concepts used in large infrastructure installations, from SANs to firewalling to virtualisation to build and deployment to robust databases, and all in a single server. If my employer found it expedient to spring for a lab to learn about the proprietary equivalents, it would cost in the region of thousands to tens of thousands of dollars, and still I’d only learn more about how to perform specific tasks rather than understand the deeper concepts. Of course, YMMV.

So how do you keep up to date with current products, which come and go, while still learning skills you can use professionally? Well, obviously the specifics of any implementation are important, and getting to know the interfaces, procedures and maintenance of the products is critical if you’re in a support role. But branching out into other vendors can bring a much deeper understanding of the underlying principles and methods than just installing the latest shiny package. If you are fortunate enough to have an employer with a good lab, schedule a few hours a week for tinkering, and write up the results of your evaluations.

The modern workplace (especially in IT) is less about protecting your job by hiding what you know than in previous decades, and if you can demonstrate your ability to command a new approach, sharing your experiences can only do you good. I’ve found open IT departments and companies that constantly, and critically, evaluate themselves and the ecosystem they work in are definitely more productive and rewarding places to work.

Sunday 3 January 2010

Getting IP Right in Windows: 5. NAT is not a Firewall

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should be really getting right! There are certainly more, but these stick out in my mind.

5. NAT is not a Firewall

Here’s the part where I put on my flame-resistant suit. I know this is divisive, so let it be known this part is entirely my opinion :)

NAT was devised as a mechanism for hosts on networks with incompatible routing structures (either overlapping network numbers or private RFC 1918 addresses seeking Internet connectivity) to have their addresses transformed into something more palatable. This happens every day in millions of home and corporate routers and firewalls, allowing millions more computers to consume Internet services without consuming the Internet’s most precious resource – global IP addresses.

Since these private networks use IP space that cannot be Internet routed, they are translated on the fly to, typically, one address which is what the destination sees as the source, while the router/firewall maintains a mapping of who asked for what from where, so that replies make it back to the requestor. If a packet arrives that has no apparent previous relationship to an internal host, it is dropped. In this way, NAT is an implied firewall, dropping unsolicited packets from the nasty Internet. Of course, if we need, say, HTTP or VoIP to be let in, we poke some holes and make exceptions.
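On a typical Linux-based home router the whole arrangement boils down to a couple of rules like these (interface names and addresses are illustrative):

    # rewrite outbound LAN traffic so it appears to come from the router's public address
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # "poke a hole": forward inbound web traffic to an internal server
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80

The implied firewall is nothing more than the connection-tracking table those rules rely on; neither of them expresses any policy about what may leave the network.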

Precisely because this is an implicit form of security, it is dangerous. Security is all about paying attention: making sure we understand how a threat can enter a network, how the people are affected (or are themselves risks), which systems are vulnerable and how to defend them, and so on. Defense in Depth, an NSA-derived concept, is all about layering security at different points in the network to increase the overall robustness.

Yet so often, NAT is simply assumed to be a line of defense. True, unsolicited traffic is bounced, but this causes problems for traffic like FTP (unless the firewall has application-layer awareness) and VoIP, whose Session Initiation Protocol has a rough time with NAT. Why then is the security only played out one way?

A commonly portrayed threat is of a trojan application or other type of malware being installed on your computer, scanning for personal data like credit cards and bank statements then uploading them to the nefarious source. NAT, in assuming that your network is the safe place and the Internet bad, gladly allows the outbound traffic through without question, and bang goes your credit rating.
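Nothing in NAT itself stops that upload; an explicit outbound policy does. A sketch (addresses made up) of only letting the LAN originate the handful of protocols you actually use:

    # default-deny for traffic crossing the router...
    iptables -P FORWARD DROP
    # ...allow replies to existing connections, then outbound web and DNS only
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -s 192.168.1.0/24 -o eth0 -p tcp -m multiport --dports 80,443 -j ACCEPT
    iptables -A FORWARD -s 192.168.1.0/24 -o eth0 -p udp --dport 53 -j ACCEPT

That is a deliberate decision about what your network is allowed to do, which is exactly what NAT on its own never gives you.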

IPv6 makes the need for NAT moot, since the address space and allocation policy should allow everyone to hold their own huge chunk of the address space with Internet-valid addresses. I haven’t yet seen a convincing argument why NAT should live on in an IPv6 world.

While NAT does indeed provide a good measure of protection, blindly assuming that it makes you safer is missing the point. IP is a versatile protocol suite, and the fact that NAT is so readily implemented proves it, but without a little attention you’re letting your router vendor dictate how your network is protected.

Recent versions of Windows include a host-based firewall, allowing each device to control what traffic is allowed to arrive at the network interfaces, and even what traffic is allowed out. Get to know the workings of the firewall and how to define the rules that are appropriate for your environment, including specific applications and how they communicate. Unfortunately, a lot of the protocols used on Windows tend to negotiate dynamic ports for communication, but since the firewall is also application-aware (specific executables are allowed to communicate instead of simply this or that port), it is a fairly easy task to secure your Windows hosts from a lot of the prevalent threats.
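As a flavour of the application-aware rules (the program paths and rule names here are invented):

    rem allow a line-of-business client to receive connections on whatever port it negotiates
    netsh advfirewall firewall add rule name="LOB client (in)" dir=in action=allow program="C:\Apps\LobClient\client.exe" enable=yes

    rem block an application you don't trust from talking out at all
    netsh advfirewall firewall add rule name="Media agent (out)" dir=out action=block program="C:\Apps\MediaAgent\agent.exe" enable=yes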

Enterprises know this and carefully craft the types of traffic that are allowed in and out of the network; with a little thought, your networks can be secure, responsive and available too.

Previous: 4. Disable NetBIOS

Saturday 2 January 2010

Getting IP Right in Windows: 4. Disable NetBIOS

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should be really getting right! There are certainly more, but these stick out in my mind.

4. Disable NetBIOS over TCP/IP (NBT)

The first network I ever configured around 1996 used the NetBIOS Extended User Interface (NetBEUI) protocol, and worked fantastically on a Windows 3.11 or 95 computer with 4MB RAM, happily fetching my files on my LAN and helping me (virtually) shoot my friends. Locating the file server (or peer) was accomplished using broadcasts, routing wasn’t an option and I had absolutely no need to talk to anything but other Windows devices, which was fine.

These days, I expect to be able to retrieve 4MB per second on my LAN, probably more, my computer regularly sends packets destined for a server thousands of miles away running who-knows-what, and modern network topologies would have baffled me back then. Microsoft has gone a long way to make sure every product of theirs, and supporting services for applications, are fully transitioned to TCP/IP, and yet NetBIOS is still in there, broadcasting the names of my computer, domain and the servers back at the office to all and sundry, just in case.

Turn it off!

There is a minor security concern that these broadcasts advertise to everyone on whatever LAN you’re plugged into where you work, what version of Windows you’re running and so on, and there have even been mutterings of an exploit or two, but the threat is not significant.

NetBIOS advertises the hostname of a service, be it a file share, chat endpoint or workgroup, in a 16-byte field, with the last byte reserved for the service type (e.g. 00 for Workstation, 03 for the Messenger service, 20 for a File Server etc). From this, we’ve inherited the hideous 15-character limitation on hostnames and domains. Now, I’m not advocating long hostnames as a rule (your naming system should be concise and accurate), but just as 8.3 filenames giving way to 255 characters in Windows 95 freed us from ever-more cryptic shorthand, this is a system long past its shelf date.
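You can see those suffixes on any Windows box, which also makes a handy check later on that you really have switched it all off:

    rem names this machine is advertising, with their type suffixes
    nbtstat -n

    rem the name table of a remote machine, queried by IP address
    nbtstat -A 192.168.1.20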

The short hostnames are a bother, but the biggest evil of NetBIOS (specifically NetBIOS over TCP/IP, or NBT) is that it hides mistakes. If your DNS is not functioning properly, a NetBIOS Name Service (NBNS) broadcast or Windows Internet Name Service (WINS) query picks up the slack, either by asking everyone on the network in the hope that the right node will respond, or by forcing you to rely on the WINS service, which is steadily being obsoleted by the folks at Microsoft.

Do yourself a favour, disable NetBIOS over TCP/IP (NBT) on every interface of systems in your lab and home from the word go. If you’re doing labs for training, make this part of the base install, or include it in your domain policy. Of course, for your company network run this through your testing process first. You may spend some time fixing the problems that crop up, but like me you’ll be quite surprised just how much you were depending on it in the first place.
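For labs and home machines, a quick way of doing it across every IP-enabled interface at once is through WMI (the value 2 means "disable NetBIOS over TCP/IP"; in a domain you’d push the equivalent through policy or DHCP options):

    wmic nicconfig where "IPEnabled=TRUE" call SetTcpipNetbios 2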

Previous: 3. IPv6 is Coming
Next: 5. NAT is not a Firewall

Friday 1 January 2010

Getting IP Right in Windows: 3. IPv6 is coming

Networking in Windows is deceptively easy. The level of development Microsoft has achieved to make it so is quite considerable, and I contrast it here with the amount of tweaking required to get Unix services off the ground.

That said, a well-implemented IP structure is the cornerstone of any enterprise (or even serious home) office deployment. I’ve composed a series of five articles on topics you should be really getting right! There are certainly more, but these stick out in my mind.

3. IPv6 is coming

If you haven’t already started looking at IPv6, you should. Even though there are billions of valid IPv4 addresses, a lot are wasted by the way they’re carved up, so there won’t be enough to go around. The predictions of doom get revised by the week, but at the very least the protocols themselves are long overdue for a makeover, and you should get ready sooner rather than later.

IPv6 includes some considerable improvements, the most obvious and famous being the gargantuan address size, so big that we have to dumb it down to images like addressing every grain of sand on every beach on the planet.

The big benefit here is that address spaces virtually as large as the entire IPv4 space can be assigned to single countries, and over-provisioning of the space is a key factor in deciding how to carve it up. Internet routers have a lot of work to do deciding which of the myriad paths is right for a given packet, and by dividing the space into these huge, contiguous units the routing tables can become much, much smaller, allowing the Internet to continue its amazing rate of expansion.

But the address space is only one of the improvements. Considerable work has been done to ensure IPv6 networks just work. One of these innovations is the combination of link-local addresses, stateless address autoconfiguration (think of it as a lightweight, built-in DHCP) and Router Solicitation. The task of configuring your devices has been moved from your centralised or distributed DHCP server to the devices that know your network best: your routers.
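You can watch this happening on any Vista or Server 2008 box without installing a thing; the link-local (fe80::...) addresses and any router-advertised prefixes simply appear:

    netsh interface ipv6 show addresses
    netsh interface ipv6 show route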

IPv4 evolved from the first networks mostly when 256kbps was FAST! The protocols have been extended and augmented with things like Quality of Service, IPSec and all kinds of other solutions for secure (and plain) tunnelling. This has resulted in a confusing array of features and incompatibilities.

IPv6 includes a lot of these as standard (IPSec support is now mandatory), and improves on others. QoS is vitally important for letting your routers know that your VoIP conversation is much more important than downloading your iTunes purchase, and IPv6 handles these markings much more intelligently and consistently. In IPv4, almost every layer adds its own checksum to detect errors (the IP header, TCP/UDP and frequently the application itself), so IPv6 assumes corruption will be caught elsewhere in the stack and does away with the IP-layer checksum, saving every router a little work.

You should even be able to request addresses for your entire organisation that are all Internet-valid, doing away with RFC 1918-style private addressing (as I mentioned in my previous post here). How organisations deal with the change is still to be seen, but I sincerely hope NAT dies the death it deserves. More on this in my later article, NAT is not a Firewall.

Not all ISPs route or offer the protocol yet, nor do most Internet services, so don’t expect your Internet connection to be switched over any time soon. But with versions of Windows from Vista and Server 2003 onwards (XP/2000 have limited support) now including IPv6 out of the box, running happily alongside the IPv4 stack, you’re free to experiment and explore.

These are challenges you’ll be facing before long, so getting to grips now is well worth the effort.

Previous: 2. Subnets and Private IP space
Next: 4. Disable NetBIOS