Password Haystacks

In recent months the “dead horse” of password-based authentication got some new life in the form of so-called ‘password haystacks’. The approach, introduced by well-known security expert (and one of my favorite gurus) Steve Gibson, relies on knowledge of the logic used by password brute-force attackers. In essence the attackers – after trying a list of well-known passwords (“password”, “123456”, “cat”, etc.), their variations (“pa$$w0rd”) and finally a plain dictionary – switch to ‘pure guessing’, where arbitrary combinations of alphanumeric characters and some special signs are generated and tried methodically until the password is found. Hence the “brute force” nature of the attack. So far the best prescription for passwords was to make them both random and very long – advice routinely ignored by the user community, as it made such passwords extremely hard for humans to remember. What Steve pointed out is that passwords of similarly high “strength” (i.e. resistance to guessing) can be created by artificially increasing their length (each added character increases the time needed to crack them exponentially) and the space of characters used in them (the greater the variety of lower-case, upper-case, numeric and special characters, the more combinations are possible – again drastically increasing the cracking time) by, say, prepending or appending some easy-to-remember “padding” to them. For example, ‘000Johny000’ is vastly harder to brute force than ‘johny’ – yet the two require comparable effort for humans to remember. Makes perfect sense – you come up with your own secret “padding” pattern, and use it to enhance your simple but consequently easy-to-guess passwords. Once enhanced, such passwords are both easy to remember and hard to crack (get a more detailed explanation from the source here). Sounds like a perfect solution, huh?
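
The exponential effect of padding is easy to see in numbers. Here is a minimal sketch that estimates the brute-force search space the way haystack-style calculators do: count the character classes a password draws from, then sum the combinations an attacker must try for every length up to the password’s own (the function name and class sizes are my own illustrative choices):

```python
def search_space(password: str) -> int:
    """Estimate the brute-force 'haystack' size of a password:
    alphabet size (from the character classes present) raised to
    every length up to the password's, summed."""
    alphabet = 0
    if any(c.islower() for c in password):
        alphabet += 26   # lower-case letters
    if any(c.isupper() for c in password):
        alphabet += 26   # upper-case letters
    if any(c.isdigit() for c in password):
        alphabet += 10   # digits
    if any(not c.isalnum() for c in password):
        alphabet += 33   # printable ASCII symbols
    # An exhaustive attacker must also try all shorter strings first.
    return sum(alphabet ** n for n in range(1, len(password) + 1))

print(search_space("johny"))        # 5 lower-case chars
print(search_space("000Johny000"))  # 11 chars, three character classes
```

Doubling the length and adding a digit class turns millions of combinations into tens of quintillions – which is exactly why the trivial-looking padding works.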

Up to a point. The “haystack” approach, while it certainly adds to password-based security, is hardly the end of the game. Like anything else in security, password attacks are a never-ending cat-and-mouse game between the ‘locks’ and the ‘keys’. Thus it’s only a matter of time till fraudsters update their password-guessing algorithms/tools to check ‘popular padding’ patterns first before switching to ‘pure brute forcing’. Not to mention the possibility of ‘leaking’ your password some other way (e.g. through a phishing site), thus revealing the “secret sauce” of all your strong passwords – the “padding pattern” – to the attackers.

At the end of the day, as often mentioned in the past, passwords as a viable protection mechanism are pretty much dead (mostly). Indeed, other approaches like multi-factor authentication have no real alternatives, no matter what clever ways we come up with to make our passwords less guessable.

Automated spear phishing – a perfect storm?

Back in January, one of my 2011 predictions for the “cyber fraud story of the year” was more targeted yet massive phishing attacks. The two biggest news trends in cyber security seem to indicate that this threat may actually become real in 2011:

  1. Highly effective attacks targeting what one would expect to be the most impenetrable organizations – those whose bread and butter is cyber security, such as RSA and Oak Ridge National Lab. The term frequently used to describe these attacks is “Advanced Persistent Threat” – but in reality what hides behind it is a successful spear phishing attack.
  2. Repeated exposure of massive amounts of user personal data – names, emails, addresses, and in some cases even dates of birth, credit card numbers (!) and SSNs (!!). A couple of breaches in recent months alone expose the scale of the problem.

Spear phishing attacks have always been considered a highly targeted version of a cyber attack, tailored to the potential victim’s profile (hence the term – phishing with a ‘spear’ rather than a ‘wide net’). The RSA and Oak Ridge National Lab breaches are yet another confirmation of the effectiveness of such attacks. Typical targets of spear phishing are senior executives (sometimes spear phishing is referred to as ‘whaling’ for that very reason) or companies which represent a hefty prize to the fraudster community.

Could the usually hand-crafted spear phishing attacks be automated and put on a massive scale? I don’t see why they couldn’t (most probably, to some extent, they already are). As common knowledge in the industry goes, the simple addition of the victim’s name in a phishing email’s opening line drastically increases the probability of the end user trusting the message (and then clicking the link). Add to it knowledge of the companies the victim has an established relationship with, the phone number (BTW, has anybody thought of automated phone attacks?), the address – and the attack can be personalized to a degree that leaves an ‘average Joe’ no chance of distinguishing it from email communication coming from the real business.

To be sure, exposure of user data is in itself a very dangerous phenomenon. In addition to “old-fashioned” identity theft, stolen user data can be applied in other types of attacks – such as password guessing (your name is John and you were born in 1970? The chances that you use one of ‘john1970’, ‘Johny70’, ‘JOHN70’, etc. are infinitely higher than those of random gibberish). However, marrying phishing attacks with intimate knowledge of the victim’s data may prove to have the most severe and widespread impact.
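
To see why leaked personal data guts password security, consider how few guesses the obvious name/birth-year combinations actually take. A toy sketch (function and form choices are my own illustration, not any particular cracking tool):

```python
from itertools import product

def personal_candidates(name: str, birth_year: int) -> list[str]:
    """Enumerate the obvious name + birth-year password candidates an
    attacker armed with leaked personal data would try first."""
    year = str(birth_year)
    name_forms = {name.lower(), name.capitalize(), name.upper()}
    year_forms = {year, year[-2:]}  # '1970' and '70'
    candidates = set()
    for n, y in product(name_forms, year_forms):
        candidates.add(n + y)  # e.g. 'john1970'
        candidates.add(y + n)  # e.g. '70JOHN'
    return sorted(candidates)

print(personal_candidates("john", 1970))
```

A dozen guesses instead of billions of random strings – that is the entire advantage the leaked data hands to the attacker.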

What will happen when spear phishing goes massive? Hopefully, it’ll speed up the adoption of well-known counter-measures. For businesses – discipline in storing user data and adoption of 2FA. For end users – the practice of using different passwords across different sites (reusing one should feel as weird as using the same key to unlock your house, car and office), not clicking on links in emails (as weird as opening your door to a stranger) and keeping personal data away from the rest of the World.

The best cyber security practices are…

…the ones which don’t expect any action or assume any expertise from the end user. Naturally.

I did try to make a case for ‘no substitution for user education’ several years ago. However, with the explosive penetration of the Internet – now as ubiquitous and essential a service as the phone or even water and electricity – the prospect of a security-savvy user base, capable of understanding the difference between HTTP and HTTPS, or between paypal.com and paypal.abc.com, clearly keeps getting further away. Indeed, the answer to the growing cyber fraud threat cannot rely solely on the average netizen’s ability to detect and fight back the ever more sophisticated attacks from the bad guys. Continuing the analogy with physical security, it’s equivalent to saying “let’s assume all good guys have a gun and know how and when to use it to defend themselves”. This strategy might have worked in the Wild West (if it did), but it has poor chances in the 21st century’s Cyber World (sorry, NRA).

Not surprisingly, the industry slowly but surely moves towards, let’s call it, “built-in security”. The shift in mindset could be characterized by security considerations becoming more of a driver and less of an afterthought.

For example, it’s well known that many users chronically fail to patch their computers – operating systems and applications (browsers, PDF readers, Java VM, etc.). That leaves them wide open to ‘exploits in the wild’ – inevitably resulting in data being stolen and machines being infected and ‘enlisted’ into a botnet. To address this situation, more companies are switching to ‘stealth update’ mode. For instance, unlike its competitors, Google’s Chrome chooses not to ask the user to initiate an update – it updates silently, without users even knowing it. Windows 7 seems to adopt the same approach – by default, users are not asked to perform any action to have their operating system patched.

The same rule applies to other security measures. Facebook recently introduced a nice feature enabling the switch of its traffic to HTTPS. Alas, the option is off by default, and the 600 mln users are expected to go to their account settings and turn it on manually (most probably Facebook was afraid of the cost of a wholesale move to HTTPS). Again, Google shines here. Not only did it move its entire Gmail service to HTTPS well before Facebook, it also made the change universal and on by default – no user action was expected. I bet the vast majority of Gmail users didn’t even notice the change. Another, less known, example is the recently introduced Strict Transport Security, which allows web servers to refuse non-secure (or even suspicious) connections in order to prevent man-in-the-middle attacks. Again, “average” users need not even know the mechanism exists.
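
For the curious, Strict Transport Security boils down to a single response header a site sends over HTTPS; a compliant browser will then refuse plain-HTTP connections to that site for the stated period. A typical value looks like this:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Here max-age is in seconds (one year in this example), and includeSubDomains extends the policy to all subdomains – all without the user ever seeing anything.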

These trends are bound to gain momentum. I imagine more and more companies will switch to HTTPS in the near future, and patching will not require user confirmation by default (perhaps leaving an “ask me first before updating” option – off by default – for tech-savvy, or picky, users). More services will move away from simple password-based authentication. Microsoft Security Essentials will become an integral part of the Windows OS (if anti-trust regulators allow it). Applications will become increasingly sandboxed. And so on…

This is not to say that one day you will be able to survive in the Cyber World without some basic knowledge and prudence – just as you need some common sense in everyday life, from how to cross the street to avoiding dangerous neighborhoods. However, that knowledge should be kept to a minimum, be intuitive, be transparent and belong to the public domain and even the school (kindergarten?) curriculum. In the end, the rules should be simple enough that – unless you are striving for the Darwin Award – by following them you are not risking your (cyber) well-being. The rest should be taken care of by smart technology. Ideally.

Cyber security trends for 2011

Well, it’s that time of the year again. Scores of well-known gurus and security companies, as well as some simple mortals, come out with their predictions on how cyber fraud will evolve in the coming 12 months. Sometimes these “prognoses” are limited to attaching “security threat” or “attack vector” to general emerging technologies – e.g. “more fraud on smart devices”, “cloud security threats”, etc. Such predictions are based on the common principle that any new functionality is a potential security threat, and that fraud attempts are proportional to its popularity. Naturally, like any generalization, this approach has its limits… indeed, if a new functionality proves to have a higher bar for penetration than the existing ones, the fraudsters will happily stick to the old known methods without complicating their lives.

Having said that, I couldn’t resist the temptation myself – and came up with some prognoses of my own:

  • Trojans will become more mature and deadly. User machines are becoming both the Holy Grail and the Weakest Link in the defense against cyber criminals. With the client machine compromised, most server-side anti-fraud technologies are useless – in some cases even 2FA may be circumvented (naturally, the same is true for client-side attacks like XSS or XSRF). There’s little hope that a remedy is within reach – the fraudsters’ shift of attention from relatively hardened OSes to the application layer (browser plugins, but also stand-alone applications like PDF readers) will continue in 2011, resulting in a race the good guys may not be able to win.
  • Phishing – i.e. tricking netizens into revealing their passwords, PII, SSNs and other information – is going to get more severe, taking spear attacks to mass production. Indeed, given the volume and availability of leaked personal information (enough to mention the alleged 100 mln Facebook accounts put on torrent), it’s only a matter of time before massive old-style phishing attacks (with their low success rate of around 0.1–0.3%) become more personal and targeted and thus much more effective (the success rate may jump to 1–3%).
  • Information Security – how long will it take governments and corporations to move to closed environments – machines with no burnable DVD drives or USB ports, hard drives living in clouds, and isolated access to the public net (not even mentioning banning our smartphones at the workplace – since we could still take a picture of the screen and email it right away)? My take – forever. So WikiLeaks will continue making headlines, and more copycats will proliferate in 2011.
  • IPv6 – most probably 2011 will be the first year where IPv6 starts to be used in the wild (as IPv4 free space will finally be depleted). Taking into account the general procrastination of big businesses (for whom security is an afterthought until it bites them in the a*s), they are going to be less prepared (to put it mildly) for the transition to IPv6 than the fraudster community. Now imagine all the IP filters, IP geolocation and other techniques which became mainstream – all the infrastructure tuned to IPv4 built into companies’ back-ends – starting to behave “strangely” as soon as requests come in with IPv6 addresses. Subsequently, if these requests prove to be more effective in hiding fraud, guess how much (or rather how little) time fraudsters will need to jump on the opportunity.
  • Smartphones – if anything happens here, it will be on Android, an inherently more open platform than iPhone OS. But overall I do not think we’ll witness any spectacular security breaches (including the use of smartphones as tools to commit fraud), despite the obvious smartphone proliferation; generally speaking, they are safer than our desktops and laptops – harder to get hold of, harder to infect and inherently easier to locate (tied to a geolocation).
  • Cloud computing – if anything, it’ll be increasingly leveraged by the bad guys to achieve their nefarious goals, rather than suffering breaches itself (e.g. data being stolen from the cloud). Not that it’s impossible – I just think there are more available and easier-to-access means.
  • Virtual currency – as much as its volumes are going through a spectacular growth period, there’s a conceivable ceiling to their expansion, and so to the associated fraud. I don’t think it will become the Big Story of 2011, although the fraud will grow proportionally to the volume of virtual goods and services.

All the above is more intuition than science, and naturally only time will show how right or wrong I am (fortunately, we don’t have to wait too long). Plus, many reputable specialists would disagree with my relatively low risk ranking of smartphones, clouds and virtual currency – which makes it all the more intriguing and worth watching.

Superiority of the "known good" over "known bad"

Okay, some definitions first:

  • “Known bad” strategy implies covert collection of attributes used by the fraudsters – first of all devices, but also email addresses, phones, etc. – in order to be able to detect their repeat usage. It’s essentially a blacklisting technique, implying that if you are not blacklisted, you are good to go.
  • “Known good” is pretty much the opposite – it’s an overt policy of collecting the attributes – first of all devices, but also email addresses, phones, etc. – to gain the necessary assurance that they are legitimately used by the good guys. It’s effectively whitelisting, implying that if you are not whitelisted, you are a potential suspect. Naturally, to get an attribute whitelisted (or marked as ‘trusted’), the user has to go through a certain verification process. For example, to whitelist a machine the user may have to enter a code sent via email or SMS (essentially, following a 2FA approach).
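
The whitelisting flow just described fits in a few lines. A minimal in-memory sketch (a real system would persist state in a database and deliver the code through an email/SMS gateway; all names here are made up for illustration):

```python
import secrets

trusted_devices: dict[str, set[str]] = {}        # user -> whitelisted device ids
pending_codes: dict[tuple[str, str], str] = {}   # (user, device) -> one-time code

def start_verification(user: str, device_id: str) -> str:
    """Issue a one-time code to be delivered out of band (email/SMS)."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending_codes[(user, device_id)] = code
    return code  # hand this to the email/SMS gateway, never to the browser

def confirm_device(user: str, device_id: str, code: str) -> bool:
    """Whitelist the device only if the out-of-band code matches."""
    if pending_codes.get((user, device_id)) == code:
        trusted_devices.setdefault(user, set()).add(device_id)
        del pending_codes[(user, device_id)]  # the code is single-use
        return True
    return False

def is_known_good(user: str, device_id: str) -> bool:
    return device_id in trusted_devices.get(user, set())
```

Until `confirm_device` succeeds, the device stays a “potential suspect” – exactly the inversion of the blacklisting default.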

Now, the traditional strategy adopted by the cyber security guys has always been the first one – just like in “offline” life, where we all enjoy the presumption of innocence (unless we slide into a totalitarian form of government) and where “blacklists” are reserved for the few suspected criminals. It definitely is the more intuitive and, to a certain degree, effective way of raising the bar in online security. However, it becomes increasingly inefficient as fraudsters get more sophisticated in hiding their identity. Indeed, only lazy or grossly uneducated fraudsters do not delete their cookies (historically, the number one way of identifying a device) today. Adobe’s FSO – which succeeded the cookie – is next to fall. Soon the larger fraudster community will discover the beauty of sandboxing. In essence, it’s a matter of the appropriate tools being developed and made available on the “black market” – the average fraudster doesn’t even have to know the gory details to use them. Thus, as I mentioned in my previous post, device fingerprinting is pretty much doomed.

By contrast, the “known good” strategy is increasingly gaining traction in online businesses. Initially unpopular – since it introduces another hoop for legitimate users to jump through (businesses hate that) – it simply works much better, by definition. Fraudsters now need to gain access to the victim’s email account or cellphone, or hack the computer, to get around it (it should also be mentioned that, on a conceptual level, the superiority of whitelisting over blacklisting is apparent in many other cases – such as keeping user input under control).

The switch to “known good” is not a painless exercise and, yes, it introduces an additional hurdle for the business, but it may prove to be the cheapest way of putting a dent in losses by making account takeovers much more difficult. Both in terms of nuisance to the users and in cost it fares much better than some of the extra measures I see on many websites – such as selecting an image, answering additional questions, etc. – thus my take is that the popularity of the “known good” approach will continue to rise.

Device fingerprinting to fight fraudsters? Please…

“Machine/device fingerprinting” technologies allow collecting and recording unique traces of individual devices. This technique has primarily been used for tracking bad guys and making it difficult for them to repeatedly use the same device for nefarious purposes. Typically a client-side script collects information (a “fingerprint”) of the device, which is subsequently stored on the server side. Today several vendors on the market offer various patented ways of collecting the device data (including the internal clock, screen parameters, OS data, etc.). The recently announced and hyped “evercookie” is an example of open source code offering even more innovative ways of doing the same. Alas, while the sophistication of these techniques is impressive, it doesn’t take equal sophistication for the fraudsters to neutralize (or neuter – if you will) these measures and completely circumvent device identification. Using a virtual machine (or a simple sandbox such as Sandboxie), not to mention avoiding the use of browsers altogether when mounting cyber attacks, is a sufficient antidote to the pains companies take to “fingerprint” fraudsters’ devices. Indeed, it’s only a matter of time before the fraudster community fully adapts to the “fingerprinting” technologies…
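
Conceptually, the server side of fingerprinting is just a stable digest over whatever attributes the client-side script managed to collect. A sketch (attribute names are illustrative, not any vendor’s actual schema):

```python
import hashlib

def fingerprint(attrs: dict[str, str]) -> str:
    """Produce a stable, order-independent digest of collected
    device attributes: canonicalize by sorted key, then hash."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "fonts": "Arial,Verdana,Times",
}
print(fingerprint(device))
```

The weakness is visible right in the code: a VM or fresh sandbox presents different attribute values, so the digest changes – and the “repeat device” simply never matches.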

Having said that, device “fingerprinting” is far from dead – it is definitely finding a second – perhaps more significant – life in the growing trend of “average” user tracking, e.g. serving the advertising industry. Here an average netizen – far less sophisticated than an average fraudster – is pretty much powerless (unless tracking is made illegal by law). Device fingerprinting will not lose its value; it’s just, IMHO, that its days as a way of fighting fraud are numbered.

Is Conficker a (nuclear) time bomb?

The Conficker malware generates a lot of buzz these days. No wonder – it represents a new generation of highly sophisticated, general-purpose malware platforms rapidly spreading over unsuspecting user machines. Conficker is in more than one way state-of-the-art malware:

  • Highly efficient at spreading
  • Applies the latest encryption technologies
  • Hides itself in the most sophisticated ways
  • Updates itself in a virtually unstoppable way

Not surprisingly, it targets Windows machines (the main platform used across the World). Currently up to 10 mln machines are estimated to be infected with Conficker. Another remarkable feature of the worm is that it hasn’t really caused any significant damage – yet. It just hangs in there, waiting for instructions to come from the “mother ship”. When and how it’ll strike is anybody’s guess. At the same time – judging from the hitherto behavior of the guys behind Conficker – they will likely use the platform for many “mini-explosions” (ideally unnoticed) rather than one big “blast”. It’s anything but a one-time-use platform.
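
The “virtually unstoppable” update channel rests on domain generation: every day, each infected machine derives a fresh list of rendezvous domains from the date and polls them, so defenders cannot simply blacklist a single address – the botmaster only needs to register one of tomorrow’s names. A toy illustration of the general scheme (this is emphatically not Conficker’s actual algorithm):

```python
import hashlib
from datetime import date

def rendezvous_domains(day: date, count: int = 5) -> list[str]:
    """Date-seeded domain generation: infected machines and the
    botmaster compute the same daily list independently."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        name = hashlib.md5(seed).hexdigest()[:10]  # pseudo-random label
        domains.append(name + ".com")
    return domains

print(rendezvous_domains(date(2009, 4, 1)))
```

Blocking yesterday’s domains achieves nothing; defenders would have to pre-register or sinkhole every possible name, every day – which is exactly what made the Conficker Working Group’s job so hard.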

For details, you are welcome to go through a presentation I recently put together to raise awareness of Conficker among my colleagues:

Summary: Like any other malware that infects end user machines, it’s very powerful and may render the bulk of traditional anti-fraud tools & technologies useless. Its possibilities are virtually limitless – from DDoS and spamming to keylogging and information stealing. But the sophistication of Conficker compared to its more primitive trojan predecessors takes the challenge to the next level (I am sure we’ll witness more Conficker-like trojans on the market – fraudsters have their own “arms races”).

What could companies do to be ready for Conficker? I can’t think of anything beyond educating end users and perhaps mandating (or providing incentives for) the installation of anti-virus software on user machines (a trend that has already started). Using 2FA will definitely slow the bad guys down, but by no means is it the definitive remedy to Conficker and its like. How effective all these measures are will become apparent in the coming years.

Online Identity services – an emerging new business model?

Every time I visit the website of one of the financial institutions I happen to be a client of, I am daunted by the hoops I need to jump through (none of which really stops a determined fraudster) to log into my account. It’s obvious that serious businesses are trying to counter account takeovers, and each is doing it in its own way – possibly spending lots of money on something which is not its core expertise. Countered by fraudsters for whom it actually is the core expertise, these businesses seem doomed to keep investing lots of resources in online identity management with only modest success.

Needless to say, online identity is becoming a big issue. Little wonder – whole chunks of our daily life, including very personal fields like romance and friendship, are being absorbed by the Net. In all this mess one thing stands out – the acute need for better identification. A need which may itself warrant a separate industry – call it online identity services. I do not mean anything ominous (“I’ve got a lightly used identity of Jude Law! Anybody?”) – just satisfying the legitimate need of identifying people online, like an online bank needing to make sure the person logging into its website is the actual account holder. Today identification is moving from traditional passwords (see my earlier post) to more sophisticated multi-layered mechanisms (some less efficient than others) – pictures, personal questions, 2FA tools, etc. It is becoming more costly to develop and maintain, hence it would make a lot of sense to delegate this headache to a company which actually specializes in online identification.

In that case the bank just needs to redirect the login to the company’s page (for a non-technical user that could be quite seamless, e.g. by putting the bank’s logo on the site it redirects to, or doing it in an iframe), let it do all the dirty work, and get the user back on the bank’s page with a full guarantee (covered by the third party) that the user is authenticated. Just like PayPal handles the whole payment and comes back to the merchant with a guaranteed payment, the ‘identity merchant’ would come back with a ‘successful login’. The service may charge per login, per month or per user – details will depend on the particular business model. Such services may even offer multiple types of support – the spectrum would include periodic user screening (e.g. verifying the phone), sending 2FA tokens, sending SMSes – in short, focusing on linking the physical identity with the cyber one.

Now, I am not saying this has never occurred to anybody else – the OpenID concept is a similar one. Too bad it didn’t really take off. My take is that the people who care about this most (online banks, for example) are inherently distrustful of anything free or open source, and that serious identity management needs serious resources – to screen users, to support 2FA tokens, etc. Microsoft Passport was probably ahead of its time. PayPal could use its clout to add “identity management” to its portfolio; better yet, Facebook could do it too (the model of your identity being vetted by your friends is quite powerful). However, each of these companies has its hands in many jars, and the last thing a bank wants is to divulge its user base to some 3rd party that could turn out to be a competitor. My take is that, in order to succeed, these services should be very specific – commercial, stand-alone, not engaged in any other type of business, solely focused on online identity and committed by binding agreements not to use the information for any other purposes. Naturally, there need to be safeguards that each client’s (bank’s) user data is secure and remains its property even if login is supported by the third party.

Perhaps such companies exist – I admit I didn’t do much research here – but even if they do, it’s anything but a mature industry. I wonder if it will ever become one.

Is there an alternative to user education?…

…in the global fight against fraud? IMHO, there isn’t. Although I am not – by any stretch of imagination – the first to arrive at this conclusion, nobody has come up with a working idea of how we can realistically move the needle in this direction.

I recently had a chance to present a hastily-put-together “Cyber-security 101 – Defensive Browsing for Everyone” presentation* to a not-necessarily-technical audience. A friend of mine joked after the presentation – “most of them will never use the Internet again” 🙂 While that wasn’t really my intention, I can’t but acknowledge that the sheer number of steps to be taken, “rules of thumb” and details to pay attention to in order to remain safe online can be pretty daunting to an average surfer. Bridging that knowledge gap for the “masses” seems to be – so far – an insurmountable challenge for the industry.

Now, as a humble “soldier” in this fight, I have worked out my own tricks to convey the message. For example, I consider cyber crime to have a lot in common with crime in the physical world – a phenomenon the average person is much more familiar with, either personally or from books and movies. Consequently, when evangelizing “defensive browsing” I use this analogy to explain the cyber “equivalents”. From my past experience, it generally proves quite effective.

For instance:

  • Browser – the door between your house (in this case perhaps an RV) and the street
  • Unpatched PC – a poorly locked door, leaving you increasingly vulnerable to all potential thieves in the neighborhood (in the case of the Internet, the ‘neighborhood’ is the whole World, including criminals beyond the reach of the American justice system)
  • Clicking on a link in an email – opening the door without checking who’s on the other side; alternatively – rushing to a place suggested by a letter in the mail
  • Anti-virus – pest control in the house
  • Browsing suspicious sites – strolling in known bad neighborhoods at night
  • Phishing site – an impostor pretending to be your cleaning person’s ‘cousin’ to get the keys to your house
  • Open Wi-fi (with no additional precautions) – a place where the bad guys can easily hook you with a tracking device, a bug or a video surveillance device

In a way, cyber security can be viewed as an extension of our physical security, so the analogies are really limitless. Making the connection between them is the first step in educating crime-aware and responsible “netizens”.

*[update] I’ve put the presentation here – click on the image to view the slides
Cyber security 101

Passwords are passé

It’s clear. Authenticating users via passwords is hopelessly outdated – the sooner online businesses (those serious about keeping their customers safe) understand this, the better. Security questions are of no substantial help – they just put dying passwords on short-lived life support. IP/cookie checks on the server side (if any exist, of course) help, but only incrementally, as there are known workarounds actively used by the fraudster community. The only viable improvement – as of today – that qualitatively raises the bar is 2FA.

Many would say 2FA might be overkill for most of our online authentication needs. Well, I would definitely argue with that statement – at least in 90% of cases. For example, our email box contains extremely valuable information about us – enabling identity theft, great for waging a spear attack, or simply revealing our immediate plans and enabling a “brick and mortar” theft. Not to mention social network accounts – they are remarkable in keeping a comprehensive log about their owners – contacts, friends, photos, status, communication – all in one place! In other words – a wet dream for a whole line of businesses, illegal as well as legal ones. And what – a pathetic password is the single key to this wealth of information? Hell, no!

That said, 2FA is far from bulletproof (e.g. it’s susceptible to particular types of client-side attacks). However, there’s little doubt that 2FA is the next major step in securing user identities online, and that this is the direction the industry will move in (finally quitting the search for a cheap alternative) over the next several years.
