Thursday, 19 July 2012

Do the ends still matter?

One of the biggest conflicts of interest in my role as an infrastructure architect (I'm responsible for servers, applications, security, storage and so on) is that I have a heavily vested interest in the services they provide. This may sound like a no-brainer, but often I need to find ways to get my traffic to users at the expense of someone else's. I want preferential treatment when my traffic conflicts with other traffic.
Of course, it's not quite as simple as that. Pragmatically, I realise I am competing with others for limited resources, whether it's rack space, hypervisor RAM or network bandwidth. One of the lesser-known principles underlying the Internet ethos is the End-to-end Principle. Put simply, it's this:
Complexity in any network should be implemented in the end points - the network stacks of communicating nodes and in applications - and not in the network itself. Since standards change, any benefit of implementing too much intelligence in the network is quickly undermined by the need to continually match those changes in the end-points, as well as by legacy issues.
Essentially, the network should be as dumb as possible. Packet comes in, packet goes out, wait for next packet. This is reasonable in principle, but different traffic flows require different treatment - VoIP and video streaming protocols prefer low latency and are almost always jitter-sensitive, while file transfers can tolerate enormous latencies as long as they are accompanied by high bandwidth. If both types occupy the same link, the risk is that the insensitive consumption of one protocol impacts the requirements of another - and so QoS arrives.
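The endpoint's side of this bargain is that an application can at least declare what treatment it would like, by marking its own packets. A minimal Python sketch (the EF marking is the conventional value for voice traffic; whether any router along the path actually honours it is entirely up to the network, not this code):

```python
import socket

# A UDP socket such as a VoIP application might use.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Mark outgoing packets with DSCP "Expedited Forwarding" (EF = 46),
# shifted into the high six bits of the old IP TOS byte: 46 << 2 = 0xB8.
# Routers are free to honour, re-mark, or ignore this marking.
EF_TOS = 46 << 2
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Confirm the kernel accepted the marking (prints 184, i.e. 0xB8, on Linux).
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
```

This keeps the intelligence in the endpoint: the application states its preference once, and the network's only added job is to read six bits of the header.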
This is already a necessary evil (though some may object to my use of that label in this context), since we now need to build knowledge of services into the network layer. Large IP transport installations (Internet backbones) handling gigabits of traffic per second literally don't have the time to implement protocols like this, as the processing adds both cost and latency. This is different from implementing separate virtual circuits for different traffic types, which form logical links and are a very common practice with ISPs in the final mile. As far as intelligence on network devices goes, this is very low - again, packets come in, figure out where packets go out.
A very interesting turn is the development of new forms of network acceleration. Routers have long been capable of doing in-line compression of data to reduce consumption of a specific link, but this is point-to-point. If an application protocol can truly benefit from compression, it really should be done at the application level (basically above Layer 3) so routers and switches can simply shuffle packets. A possible side-effect of link compression is that it masks real versus usable bandwidth from applications (some portions of a stream may be highly compressible, others not), hindering the flow-control algorithms built into TCP.
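That variability is easy to demonstrate. Here's a small sketch (using zlib as a stand-in for whatever a link compressor actually runs) showing that one stream shrinks dramatically while another doesn't shrink at all - exactly the disparity that confuses a sender estimating available bandwidth from delivery rates:

```python
import os
import zlib

# Highly repetitive payload, like HTTP boilerplate or a log stream.
text = b"GET /index.html HTTP/1.1\r\n" * 1000

# Incompressible payload, like already-encrypted or already-zipped data.
noise = os.urandom(len(text))

# The repetitive stream collapses to a tiny fraction of its size...
print(len(text), "->", len(zlib.compress(text)))

# ...while the random stream actually grows slightly (framing overhead).
print(len(noise), "->", len(zlib.compress(noise)))
```

Two flows sharing a compressed link can therefore see wildly different effective bandwidth, even though the raw link rate is identical for both.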
TCP sessions can be rather chatty, and some applications implement redundant techniques inside their own specifications, so some network acceleration can initially make sense. Essentially, new sessions follow a well-known pattern of window sizes and other parameters, so network accelerators intercept packets and simulate the repetitive parts at each end of a link, reducing session setup time. This sounds simple, but now we enter dangerous waters. Next comes protocol caching: I request a file from a file server across the WAN and the contents are cached on my local acceleration appliance, so that the user next to me gets a cached copy when her request is made. Again, sounds simple, but to prevent interception and modification the protocol implements signing, so regenerated content requires a regenerated signature. SSL acceleration can similarly be implemented using reverse proxies that hold a copy of the service's private key, extracting the plaintext to look for compression and caching opportunities. I've been involved in the design and deployment of many reverse proxy and SSL acceleration projects, but these were explicitly part of the service.
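The signing problem is the crux. A toy sketch using HMAC (standing in for whatever signing scheme a real protocol uses - this is an illustration, not any particular protocol's mechanism) shows why a cache can replay stored bytes but cannot regenerate them without holding the key:

```python
import hashlib
import hmac

KEY = b"server-side secret"  # hypothetical signing key held by the server

def sign(payload: bytes) -> bytes:
    """Signature the server attaches to a response."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, sig: bytes) -> bool:
    """Check a client performs on receipt."""
    return hmac.compare_digest(sign(payload), sig)

original = b"file contents from the server"
sig = sign(original)

# Replaying the cached payload and its cached signature verifies fine...
assert verify(original, sig)

# ...but a reconstituted or modified response fails verification, because
# only a holder of KEY can produce a signature the client will accept.
assert not verify(b"reconstituted response", sig)
```

So any accelerator that wants to serve regenerated content must itself be trusted with the key material - which is precisely the escalation described next.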
To accelerate generic payloads, the appliance needs to get heavily involved in the infrastructure, hosting all the SSL keys and masquerading as endpoints. This is where it gets very complex and risky: not only in the dissemination of privileged access (private keys and domain credentials being the most highly-prized items in any network), but also in the continual catch-up game of implementing these techniques on newer protocols as they are developed. There are also more intrusive techniques, such as automatic downscaling of images when using mobile data to browse the web, that are subtle but insidious.
Net Neutrality is the overall drive in this direction and correlates with the End-to-end Principle, treating network devices as simplistic and all traffic as equal. Smarter protocols such as (distributed) BranchCache and BitTorrent (yes, BitTorrent is a case for Net Neutrality) that reduce redundancy over constrained links, and better content intelligence (I explicitly convert all images in my documents to 8-bit PNGs before embedding), are far better strategies. Content distribution networks are active participants, used by some of the largest providers (both content generators and ISPs) to reduce long-haul bandwidth and improve user responsiveness. HTTP compression is a rigorously defined standard but is very sparsely used, even on static pages where on-the-fly zipping is unnecessary. I could go on...
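For static pages there's no need to zip per-request at all: compress once at deploy time and serve the stored bytes with the standard HTTP/1.1 negotiation headers. A minimal sketch (the page content is invented for illustration; the headers are the standard ones a server attaches when the client sent `Accept-Encoding: gzip`):

```python
import gzip

# A static page, compressed once at deploy time rather than per-request.
page = b"<html><body>" + b"<p>static boilerplate</p>" * 500 + b"</body></html>"
compressed = gzip.compress(page)

# Standard response headers for content-coding negotiation.
headers = {
    "Content-Encoding": "gzip",
    "Content-Length": str(len(compressed)),
    "Vary": "Accept-Encoding",
}

print(f"{len(page)} bytes -> {len(compressed)} bytes on the wire")

# The client reverses it transparently; the application above never knows.
assert gzip.decompress(compressed) == page
```

All the intelligence sits in the two endpoints; the network in between just carries fewer bytes.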
When looking into how best to transport content, I prefer to let the network do what it does best and work my requirements into it, rather than engineer a network to suit my needs, unless absolutely necessary. Where my application is burdensome, I would rather engineer the application than throw bandwidth at the problem.
Technology marches on, and bandwidth - while logically finite - seems to be keeping up rather well.

Monday, 16 July 2012

When is security "Over the Line"?

I've been trying out various security apps for Android, including location and theft tracking, secure e-mail and other device control techniques. I came across Google Apps Device Policy in the Play Store (among others). There's a handy little device management panel. I found the features a bit sparse - I could only see options to remotely wipe the device, locate it and make it ring nice and loud (I have a fairly discreet ringtone), though even this came in handy this very morning (housewarming, cocktails, phone-based DJ system, details are fuzzy towards the end of the night, great party).
A bit more digging and I found the full-blown control panel in my Apps admin page, and there all the meat is laid out - requiring the device management policies be enabled before e-mail can be subscribed to, forcing password policies and the like.
I did notice one comment that stood out in the review page on Play Store entitled "Over the Line". It's a bit of a rant, with the poster complaining about the power the device policy gives his IT department. Some education is certainly required here.
We're living in a very close approximation of Arthur C. Clarke's World of Tomorrow, but even he would probably have been surprised by the sheer power and connectedness of the devices we carry in our pockets and purses. The old adage of our phones outpowering the lunar module is somewhat behind us; they're performing megabytes of 256-bit encryption per second and rendering 3D worlds at a rate that would make older Silicon Graphics workstations blink. Yet we treat them with no more concern than a wristwatch, merely a tool for consuming information and perhaps flinging a bird from a catapult.
E-mail is sensitive. I still don't think we've fully understood the ramifications of transferring our brick-and-mortar existence, fronted by a postbox, to simply occupying a few square millimeters of hard drive space. Pinging an e-mail address to verify my identity when recovering a password (on services as sensitive as PayPal, no less) has deep flaws, most based on the assumption that users understand account security and host the address at a secure provider. Corporate e-mail is generally even more precious, and teams of people in IT departments around the world spend their entire day protecting digital assets from the wide world of internets.
So how do casual devices interact with secure systems? A lot of banks offer pared-down versions of their banking websites as mobile apps with little risk of becoming attack vectors - sensitive information is not stored on the device (and my previous post describes how easily and frequently that device's security can be engineered for flimsiness), and potential harm is contained through practices such as daily transaction limits.
E-mail clients tend to be more of a problem: contents are stored for long periods of time, users expect to retrieve arbitrary content outside of the retention window, new messages can be created and sent without explicit authentication, and so on. This is something of a nightmare scenario for organisations, large and small.
There are two outcomes from this. First, the powers provided by these policies are only invoked as necessary. I can imagine a very real damages claim should the department decide to wipe your device without notifying you up-front, regardless of the reason. Yes, there is a lot of power there, but it only really makes sense if the company is facing imminent loss, either through the user misplacing the device or having it stolen, or worse yet if the user has malicious intent. The warning screen is there to explicitly inform you of what you're signing up for and to gain your acquiescence. I remember no such introduction when I was handed my first Blackberry, but that didn't grant the IT administrators any less power to remotely wipe it (including my own data that happened to be stored on it) when necessary.
The second, though, is that the device becomes a very real part of that network. Companies have policies for all kinds of things, the sense of which is not immediately apparent - sign-in at the front desk is ostensibly to ensure you are accounted for in the event of a fire evacuation, with security only secondary, something I didn't realise for some years after I first started working. A friend of mine with a very corporate job had a clause in her contract that she was permitted to dye her hair, but only within the range of natural hair colours. I can certainly imagine the reputational harm if she arrived at a customer properly suited and booted as required, but sporting electric-blue locks. While there is no accounting for taste in corporate contracts, I would imagine this seems entirely reasonable to the vast majority of people. I suspect those who find such a policy repulsive probably wouldn't want to work there for other reasons.
A compromised device closely coupled with a corporate e-mail system can cause very real harm. Address book caching, calendar searches and assuming the identity of the original owner are powerful tools for an intruder. There is an alternative here, though.
While a company can require you to wear their uniform for a job, they're far less likely to require you to sew a logo onto your own clothes. You can choose to, but I doubt they would be happy if you placed it somewhere inappropriate, as it would cause them reputational harm. E-mail is similarly interesting.
If you truly require mobile e-mail access, perhaps your company should be paying for that service instead of free-riding on your mobile contract and making you nervous about your device's integrity? I know my Android device allows me to remove the enforcement agent myself, which very quickly removes the corporate e-mail from the phone as a consequence. You're in control, which fits, since you're paying for it.
And if you're the only one who thinks you need it, why do you get to decide how much risk the company should accept? I know my organisation has absolutely no policy on remote wipe, even though I accepted exactly that power when hooking my device to the corporate e-mail system - but it still jumped out at me.
With great power comes great responsibility.
It's not a wristwatch.

Friday, 13 July 2012

Locking myself out

I've become quite adept with Android ROMs of late. In the last month I've re-flashed my HTC Legend at least eight times, one of them a desperate measure to get back in after idiotic inattentiveness.
The after-market firmware scene is amazingly vibrant, and I'm typing this post on Cyanogenmod's CM7.2, which brings Gingerbread (Android 2.3) to my two-year old device, something the manufacturer has no intentions of doing. Just as in my previous post about digital legacy, the open-source community is doing what vendors simply won't, and surprisingly well. I'm so pleased I've even offered to upgrade a (decidedly untechnical) friend's Legend.
Doing this requires bypassing the vendor's OS locks - "rooting" - and optionally the firmware locks. By default, open ROMs leave these locks off to preserve your newfound freedom. There's a problem there.
Yes, I want to continue hacking away, but I'm not comfortable leaving root accessible on my friend's device. A diagnostic mechanism called USB Debugging doesn't reset itself on reboot, rendering the screen lock rather useless. The Cyanogenmod authors have stated root access will be off by default from version 9 (Android 4.0) onwards, but the ROM I tried on my device (so very close to useable) had no such measure.
Should a rooted and reflashed device be misplaced or stolen, the new owner could quite easily get in. Remote wipe and location apps can be disabled (Link2SD makes this trivial) and data compromised. As a network administrator I wouldn't let one of these devices anywhere near my messaging system, since the policy enforcement engines rely on users being unprivileged in order to function.
I could of course re-apply the vendor recovery software to prevent the OS image being altered, and re-enable the firmware controls to stop that being undermined, but this is all moot once the OS can do as it pleases.
Open-source software thrives on freedom, and I love that attitude, but ensuring a well-controlled network often works against that freedom when viewed from certain angles. So, which path?
Security systems relying on obscurity of design have repeatedly been subverted, but how do you convince open-sourcers to build effective blockouts into their project?

Perhaps by helping them realise the real loss their moms and kids might experience when their unrelentingly open device is abused?

Tuesday, 3 July 2012

Blizzard, a Few Words on API Implementation

Blizzard is defending their action of banning users for using Wine to run copies of their hugely popular sword-and-sorcery game Diablo III. In Blizzard’s words:
Account Action: Account Closure
Offense: Unapproved Third Party Software
A third party program is any file or program that is used in addition to the game to gain an unfair advantage. These programs may increase movement speed or teleport heroes from one place to another beyond what is allowed by game design. It also includes any programs that obtain information from the game that is not normally available to the regular player or that transmit or modify any of the game files.
Now I happen to know a fair bit about the internals of Linux and Windows, and can say one thing for certain: Wine is not the problem.
As mentioned on Ubuntu Vibes, the Wine developers are keen to distance themselves from the issue, and in fairness to them it is not their fight. But Blizzard definitely has some explaining to do.
I have been very much on the side of those crying foul at Blizzard's method of providing the game state, essentially running a single-player game on their servers. While Diablo II saved locally and ran game logic entirely independently of servers, a single-player game in Diablo III is essentially a single-user multiplayer instance – all state is held on their servers. This has advantages, like quickly and more reliably allowing me to invite internet users to join my game, but that’s about the only benefit I can see.
It would appear that Blizzard are trying to clarify that they don’t ban based on OS (or API implementation), and it’s a subtle point. Whether cheating is actually happening in this particular community as they allege is not quite evident, but since Blizzard don’t reveal the exploit a user is supposed to have used, there is no recourse for refund or rectification. I’ll leave that point alone, save to say that I would prefer other users didn't cheat, but that it’s irrelevant if I’m in a single-player game. Unfortunately, the way the game is designed, there is no such thing.
There’s been ranting about other problems, but the one I’m most interested in is legacy. The reason almost any piece of software has a minimum operating system specification is to guarantee that the APIs it needs will be present. Wine is an undertaking to replicate the Windows APIs for use on other platforms, most prominently Linux. Backwards compatibility is the bane of Microsoft’s existence, but there’s really no option there.
The big issue here is the digital legacy. I used to play with my father’s toys as a kid, and he with mine – that is, until I started gaming. I have a partner who went barking mad as the release of Diablo III approached, simply because she was excited to relive the enormous amount of time she wasted in college playing Diablo II. The problem is even graver than if Steam went dark for good, where I simply couldn’t install my games. If Blizzard pulls the plug, I can’t launch games I’ve already installed, ever! And frankly, twenty years from now I just might like a shot of nostalgia.
But the hardware and operating systems of the future might not support it, you say? I’d say you’re dead wrong! Steam released the classic X-COM games packaged to run in DOSBox, showing that there are solutions for running truly ancient games on modern platforms – open-source solutions, no less.
So just what is Blizzard’s problem? The fact that Wine presents a functional implementation of the Win32 API is of little relevance. There’s no clear statement that non-Microsoft/Apple platforms are enough to get you kicked; far from it – they leave it up to users to make best efforts. Blizzard may very well claim that they only support native implementations, but WOW64 is itself really an implementation of the 32-bit Win32 APIs on an underlying system that is in fact 64-bit.
So, we have both closed-source and open-source solutions to the problem of running legacy code, both apparently providing a reasonably good experience (in fairness, I think the Windows implementation is excellent; I have no idea of the quality under Wine). Twenty years from now I have a fairly good feeling there will be legacy emulators (Windows 7-in-DOSBox, Wine for Linux 7, or something as-yet undreamed of), but I am almost certain Blizzard won’t be keeping up a service for Diablo III – even if I just want to show my kids how their dad used to kill demons while falling ever more in love with their mom.