Friday, 24 September 2010

I'm very disappointed to report that it works

A friend of mine was recently reviewing another friend's CV, and noted he had an ethical hacking qualification. I found this rather amusing, since I know there are a gazillion unqualified, uncertified hackers (and simply curious people) whom I'd much, much rather have on some kind of radar. It reminds me of the DRM/DMCA debate: it's the ones not following the rules who tend to get the benefit, or who at least know enough about the rules not to care about enforcement.

I have been having a series of discussions with a few security specialists in the last two weeks, and they've put a few seeds in my brain. I've reviewed a few articles about application vulnerabilities, and with more and more of the world moving into "the cloud" (btw, I hate that phrase) we're handing over more control to this nebulous entity. Google are definitely at the front, at least as far as end-user experience goes. Heck, I'm hosting this very blog on Google's servers, and neither know nor care where they are.

I also recently acquired an Android phone, and allowed Google's hooks into my life to sink just that bit deeper with integrated messaging, contacts, calendaring, apps I really don't need, Facebook on-the-go, Twitter... it's all cloud! I left my laptop at a friend's house recently, and realised (to my own shock) that frankly, I can live without it for a day or two, such is the functionality in this great new device.

So while all the focus is off in the cumulo-nimbus, I'm still dealing with daily life that's hosted and automated by some providers that are very well defined indeed. My bank is one of them, and as an extra I do some share dealing through their attached brokerage. Side note: three years ago I had spare cash and thought, "what's safer than banks?".

Today, I placed an order to sell a few shares and received an order number. I recalled a conversation with one of these specialists about session identifiers, in which we discussed collision avoidance and non-sequentialness as two good markers for session tracking. On a whim, I took the URL generated to view the transaction details and incremented the trade identifier by one.

Lo and behold, I got the details of someone else's trade: one thousand shares of an oil company, concluded around the same time as mine. Alarmed, I did it again, this time decrementing (I'd hit an upper bound), and found an incomplete trade. This is an order, as yet unfulfilled, awaiting the conditions set by the initiator. Now granted, I couldn't see the identity of the trader in either case, so on the surface perhaps not such a big deal. But if you know dealing, complete trades are not so significant, as they are done and dusted, while incomplete trades show intention. Script this query for current and future IDs, and you could get a feel for investor sentiment that gives you an advantage.
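For illustration (my own sketch, not the brokerage's code; the "ORD" prefix and field widths are invented), here's the difference between the guessable scheme I stumbled on and one built on the two markers mentioned above, non-sequentialness and collision avoidance:

```python
import secrets

def sequential_order_id(counter):
    # The guessable scheme: knowing one ID reveals its neighbours,
    # so an attacker can simply walk the order book.
    return f"ORD{counter:08d}"

def opaque_order_id():
    # Non-sequential, collision-resistant reference drawn from a
    # cryptographically secure source (128 bits of randomness).
    return "ORD" + secrets.token_hex(16)

# Sequential IDs are trivially enumerable:
assert sequential_order_id(1042) == "ORD00001042"
assert sequential_order_id(1043) == "ORD00001043"

# Opaque IDs share no predictable relationship with one another:
a, b = opaque_order_id(), opaque_order_id()
assert a != b and len(a) == 35
```

Of course, unguessable identifiers are only defence in depth; the real fix is an authorisation check that ties each order to the logged-in session before rendering it.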

I've been using this particular trading system for years, and regardless of the losses I've made (seriously, I have no aptitude for this) thought of the security measures as fairly robust: SSL encryption, separate login and dealing passwords, neither ever revealed in full, limits on trading volume by account type and history, approved browser versions only. How easily we are placated.

Handing over so much of our personal info to the cloud, the free tier at least, scares me, though I'm conscious that free products are, in the sage words of my father, worth what you paid for them. Paid services may not fare much better: by abstracting services into this fog, we run the risk of losing touch with how they are delivered, how we control them, and what we stand to lose if it all goes wrong.

But most of all, they're still built and run to the same rules as traditional systems, no matter how abstractly they're presented: the same DBMSs, the same web servers and runtimes, the same developers and, critically, the same developer mentalities.

A sobering lesson indeed.

Oh, and yes I raised this with the brokerage concerned. Does that get me the ethical badge?

Friday, 10 September 2010

Security by Default in the Defined Domain

Some simple security concepts are tougher for engineers to grasp than they should be. A pervasive view I've encountered is that firewalls, security policies, LAN partitioning and an excess of routing just get in the way of making systems work.

Now certainly, large enterprises, and especially security-sensitive ones such as financial services, R&D and law enforcement that require secure access to and transmission of data, demand a higher level of attention to detail when designing and operating their systems. Most engineers cut their teeth building systems in the privacy of their homes or labs, neither of which can match the scale and complexity of real-world systems spanning continents, regulatory domains and, most importantly, untrusted links. When learning and experimenting, the engineer tends to be in control of everything.

Secure authentication is enabled by default in Windows, with Kerberos providing one of the most resilient systems available, and AD integration makes expansion and administration a breeze. But most of the remaining protocols don't pay nearly as much attention to security, starting with the most pervasive - CIFS (Windows File Sharing). Even RDP starts off with secure authentication, then falls back to a poorly protected stream for keyboard inputs and screen updates.

I'm a big fan of IPSec, since it allows for transparent encryption and authentication without having to change pretty much anything in your applications or network layout. The problem comes in getting all your nodes to play nice, since it does require some configuration. Windows domain policies and domain membership vastly simplify this: if your system is a domain member, enabling opportunistic IPSec encryption is a breeze.
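As a rough sketch (the exact netsh flags here are from memory, so treat them as an assumption and prefer pushing the equivalent via domain Group Policy), a per-machine connection security rule that requests, but doesn't require, IPsec looks something like this on Vista/2008-era Windows:

```shell
rem Hypothetical sketch: request (but don't require) IPsec for all
rem traffic, authenticating machines with Kerberos via domain membership.
netsh advfirewall consec add rule name="Opportunistic IPsec" ^
    endpoint1=any endpoint2=any ^
    action=requestinrequestout auth1=computerkerb
```

Request mode is what makes the deployment opportunistic: hosts that can't negotiate IPsec simply fall back to cleartext rather than losing connectivity.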

This is the "Defined Domain": a set of systems over which you have complete control. A visitor to your network would not have these policies defined, so their device would not be able to participate. Placing them in a guest LAN and tunnelling the connection at the router over a secure link in Tunnel mode can help solve that problem, but it can quickly spiral out from there: how do you get simpler devices and protocols like SNMP, or even PING, to cooperate?

The Defined Domain needs to incorporate not just the systems you're protecting, but also how and why. Secure protocols aren't that hard to come by these days (e.g. SNMPv3 incorporates some measure of encryption, and SSL is available in all major web platforms), but they still require some configuration and attention. Applications need to be designed with at least some awareness of the security domain if they are to be trusted, and administrators and designers most of all need to keep this in mind when going about their work.

The biggest threat to any security domain is, and will always be, the human factor. By wrapping security measures in an easy-to-use, often transparent package, we can leave the average engineer oblivious to the considerations required to make systems truly secure. Yet again, ease induces laziness, and the worst kind of security is the kind you can't (or won't) verify. It took about two hours of use before a colleague of mine noticed that his newly-secured RDP session (certificate-based authentication) placed a new padlock symbol on the control bar of his Terminal Services Client. Until then, the idea that the connection was relatively more secure was fairly unverifiable, and eventually irrelevant.

Attention is required, and a lot of education. No one security solution is a panacea (everyone kept screaming certificates at me as a one-size-fits-all solution; no, no, no), and this will always be the problem.

At least in a defined domain, it's more manageable, more approachable. And then along come the users...