Thursday, 19 July 2012

Do the ends still matter?

One of the biggest conflicts of interest in my role as an infrastructure architect (I'm responsible for servers, applications, security, storage and so on) is that I have a heavily vested interest in the services that infrastructure provides. This may sound like a no-brainer, but often I need to find ways to get my traffic to users at the expense of someone else's; I want preferential treatment when it conflicts with other traffic.
Of course, it's not quite as simple as that. Pragmatically I realise I am competing with others for limited resources, whether it's rack space, hypervisor RAM or network bandwidth. One of the lesser-known principles underlying the Internet ethos is the End-to-end Principle. Put simply, it's this:
Complexity in any network should be implemented in the end points - the network stacks of communicating nodes and their applications - and not in the network itself. Since standards change, any benefit of building too much intelligence into the network is quickly undermined by the need to continually match those changes in the end points, as well as by legacy issues.
Essentially, the network should be as dumb as possible: packet comes in, packet goes out, wait for the next packet. This is reasonable in principle, but different traffic flows require different treatment - VoIP and video streaming protocols prefer low latency and are almost always jitter-sensitive, while file transfers can tolerate enormous latencies as long as they are accompanied by high bandwidth. If both types occupy the same link, the risk is that the insensitive consumption of one protocol impacts the requirements of the other, and so QoS arrives.
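An application can at least declare which treatment it would like: a minimal sketch, assuming a Linux-like host, of marking a UDP socket with the Expedited Forwarding DSCP class (the one typically used for VoIP) so that any QoS-aware routers along the path can prioritise it. Whether the network actually honours the mark is entirely up to the operators of those routers.

```python
import socket

# DSCP 46 is Expedited Forwarding, the class commonly used for VoIP.
# DSCP occupies the top six bits of the IP ToS byte, hence the shift.
EF_DSCP = 46
TOS_VALUE = EF_DSCP << 2  # 184, i.e. 0xB8 on the wire

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel to stamp every outgoing packet with this ToS value.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

print(f"ToS byte set to {TOS_VALUE:#04x}")
sock.close()
```

The end point states its preference in the packet header; the decision to act on it stays in the network, which keeps the per-packet logic on the routers trivial.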
This is already a necessary evil (though some may object to my use of that label in this context) since we now need to build knowledge of services into the network layer. Large IP transport installations - Internet backbones handling gigabits of traffic per second - literally don't have the time to classify traffic this way, as the extra processing adds both cost and latency. QoS is different from provisioning separate virtual circuits for different traffic types, which forms logical links and is a very common practice with ISPs in the final mile. As intelligence on network devices goes, that is very low: again, packets come in, figure out where packets go out.
A very interesting turn is in the development of new forms of network acceleration. Routers have long been capable of in-line compression of data to reduce consumption of a specific link, but this is point-to-point. If an application protocol can truly benefit from compression, it really should be done at the application protocol level (basically above Layer 3) so routers and switches can simply shuffle packets. A possible side-effect of link compression is that it masks real versus usable bandwidth from applications (some portions of a stream may be highly compressible, others not), hindering the flow-control algorithms built into TCP.
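The masking effect is easy to demonstrate. A hypothetical sketch: push a repetitive payload and a random (encrypted-looking) payload of equal size through zlib and compare the ratios - a compressing link would carry the first many times faster than the second, so the "bandwidth" an application observes depends entirely on what it happens to be sending.

```python
import os
import zlib

# A repetitive, text-like payload versus dense (e.g. encrypted or
# already-compressed) data of the same length.
text_payload = b"GET /index.html HTTP/1.1\r\n" * 100
random_payload = os.urandom(len(text_payload))

text_ratio = len(zlib.compress(text_payload)) / len(text_payload)
random_ratio = len(zlib.compress(random_payload)) / len(random_payload)

print(f"repetitive payload compresses to {text_ratio:.0%} of original size")
print(f"random payload compresses to {random_ratio:.0%} of original size")
```

On a compressing link the first stream would appear to enjoy vastly more bandwidth than the second - exactly the kind of moving target TCP's flow control was never designed to chase.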
TCP sessions can be rather chatty, and some applications implement redundant techniques inside their own specifications, so some network acceleration can initially make sense. Essentially, new sessions follow a well-known pattern of window sizes and other parameters, so network accelerators intercept packets and simulate the repetitive parts at each end of a link, reducing session setup time. This sounds simple, but now we enter dangerous waters. Next comes protocol caching: I request a file from a file server across the WAN and the contents are cached on my local acceleration appliance, so that the user next to me gets a cached copy when her request is made. Again, it sounds simple, but to prevent interception and modification the protocol implements signing, so re-served content requires a re-generated signature. SSL acceleration can similarly be implemented using reverse proxies holding a copy of the private key for the service, to extract the plaintext and look for compression and caching opportunities. I've been involved in the design and deployment of many reverse proxy and SSL acceleration projects, but these were explicitly part of the service.
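The signing problem is worth making concrete. A hypothetical sketch (the key names and the HMAC-SHA256 stand-in for a real protocol's signing scheme are mine, purely for illustration): the file server signs each response with a key the acceleration appliance does not hold, so a locally cached copy fails verification unless the appliance is also trusted with that key - which is precisely the dissemination of privileged access the next paragraph worries about.

```python
import hashlib
import hmac

SERVER_KEY = b"server-secret"       # known only to the real file server
APPLIANCE_KEY = b"appliance-guess"  # the accelerator holds no valid key

payload = b"contents of a file fetched across the WAN"

def sign(key: bytes, data: bytes) -> bytes:
    """Sign a payload with HMAC-SHA256, standing in for protocol signing."""
    return hmac.new(key, data, hashlib.sha256).digest()

genuine = sign(SERVER_KEY, payload)
regenerated = sign(APPLIANCE_KEY, payload)

# The client verifies against the server's key: the re-signed cached
# copy is rejected, so the appliance ends up needing the real key.
print("genuine signature verifies:",
      hmac.compare_digest(genuine, sign(SERVER_KEY, payload)))
print("appliance's re-signed copy verifies:",
      hmac.compare_digest(regenerated, genuine))
```

The security property doing its job against an attacker is exactly what blocks the well-meaning middlebox; the only way past it is to hand the middlebox the keys.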
To accelerate generic payloads the appliance needs to get heavily involved in the infrastructure, hosting all SSL keys and masquerading as endpoints. This is where it gets very complex and risky, not only in the dissemination of privileged access (private keys and domain credentials being the most highly-prized items in any network), but also in the continual catch-up game of implementing these techniques on newer protocols as they are developed. There are also more intrusive techniques, such as automatic downscaling of images when browsing the web over mobile data, that are subtle but insidious.
Net Neutrality is the overall drive in this direction and correlates with the End-to-end Principle: treat network devices as simple and all traffic as equal. Smarter protocols such as (distributed) BranchCache and BitTorrent (yes, BitTorrent is a case for Net Neutrality) that reduce redundancy over constrained links, and better content intelligence (I explicitly convert all images in my documents to 8-bit PNGs before embedding), are far better strategies. Content distribution networks are an active participant, used by some of the largest providers (both content generators and ISPs) to reduce long-haul bandwidth and improve user responsiveness. HTTP compression is a rigorously defined standard but is very sparsely used, even on static pages where on-the-fly zipping is unnecessary. I could go on...
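HTTP compression is the clearest example of the end-to-end alternative: a minimal sketch of pre-compressing a static page once at the server, so any client advertising Accept-Encoding: gzip gets the small copy with no middlebox involved. (The page contents here are invented for illustration.)

```python
import gzip

# A static HTML page with the repetitive markup typical of real pages.
html = b"<html><body>" + b"<p>Hello, world.</p>" * 200 + b"</body></html>"

# Compress once at publish time; serve the .gz copy to capable clients
# with "Content-Encoding: gzip". No in-network appliance required.
compressed = gzip.compress(html)
saving = 1 - len(compressed) / len(html)

print(f"{len(html)} bytes -> {len(compressed)} bytes ({saving:.0%} saved)")
```

The compression happens entirely at the end points, negotiated through standard headers, and the network in between just shuffles packets - which is the whole point.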
When looking into how best to transport content, I prefer to let the network do what it does best and work my requirements into it, rather than engineer a network to suit my needs unless absolutely necessary. Where my application is burdensome, I would rather engineer the application than throw bandwidth at the problem.
Technology marches on, and bandwidth - while logically finite - seems to be keeping up rather well.
