Password Haystacks

In recent months the “dead horse” of password-based authentication got some new life in the form of so-called “password haystacks”. The approach, introduced by well-known security expert (and one of my favorite gurus) Steve Gibson, relies on knowledge of the logic used by password brute-force attackers. In essence the attackers – after trying a list of well-known passwords (“password”, “123456”, “cat”, etc.), their variations (“pa$$w0rd”) and finally a plain dictionary – switch to ‘pure guessing’, where arbitrary combinations of alphanumeric characters and some special signs are generated and tried methodically until the password is found. Hence the “brute force” nature of the attack.

So far the best prescription for passwords has been to make them both random and very long – advice routinely ignored by the user community, as it makes such passwords extremely hard for humans to remember. What Steve pointed out is that passwords of similarly high “strength” (i.e. resistance to guessing) can be created by artificially increasing their length (each added character increases the time needed to crack them exponentially) and the space of characters used in them (the bigger the variety of lower-case, upper-case, numeric and special characters, the more combinations are possible – again drastically increasing the cracking time) – by, say, prepending or appending some easy-to-remember “padding” to them. For example, ‘000Johny000’ is vastly harder to brute-force than ‘johny’ – yet the two take comparable effort for a human to remember. Makes perfect sense – you come up with your own secret “padding” pattern and use it to enhance your simple but consequently easy-to-guess passwords. Once enhanced, such passwords are both easy to remember and hard to crack (get a more detailed explanation from the source here). Sounds like a perfect solution, huh?
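The arithmetic behind this is easy to check yourself. Here is a minimal sketch of the search-space math (the guessing rate of 10^11 guesses/second is an assumption of mine, roughly a large offline cracking array):

```python
# Sketch of the "haystack" search-space math: an attacker exhausting all
# passwords up to a given length must cover alphabet_size^1 + ... +
# alphabet_size^length combinations.

def alphabet_size(password: str) -> int:
    """Size of the smallest standard character set covering the password."""
    size = 0
    if any(c.islower() for c in password):
        size += 26
    if any(c.isupper() for c in password):
        size += 26
    if any(c.isdigit() for c in password):
        size += 10
    if any(not c.isalnum() for c in password):
        size += 33  # printable ASCII special characters
    return size

def search_space(password: str) -> int:
    """Total guesses needed to exhaust all passwords up to this length."""
    a = alphabet_size(password)
    return sum(a ** n for n in range(1, len(password) + 1))

for pw in ("johny", "000Johny000"):
    combos = search_space(pw)
    seconds = combos / 1e11  # assumed attacker speed: 1e11 guesses/sec
    print(f"{pw!r}: ~{combos:.2e} combinations, ~{seconds:.2e} s to exhaust")
```

The padded password lives in a 62-character alphabet at length 11 instead of a 26-character alphabet at length 5 – a difference of many orders of magnitude, which is exactly the “haystack” effect.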

Up to a point. The “haystack” approach, while it certainly adds to password-based security, is hardly the end of the game. Like anything else in security, password attacks are a never-ending cat-and-mouse game between the ‘locks’ and the ‘keys’. Thus it’s only a matter of time until fraudsters update their password-guessing algorithms/tools to check ‘popular padding’ patterns first, before switching to ‘pure brute forcing’. Not to mention the possibility of ‘leaking’ your password some other way (e.g. through a phishing site), thus revealing the “secret sauce” of all your strong passwords – the “padding pattern” – to the attackers.

At the end of the day, as often mentioned in the past, passwords as a viable protection mechanism are pretty much dead (mostly). Indeed, other approaches like multi-factor authentication have no real alternative, no matter what clever ways we come up with to make our passwords less guessable.

Automated spear phishing – a perfect storm?

Back in January, one of my 2011 predictions for “cyber fraud story of the year” was more targeted yet massive phishing attacks. The two biggest news trends in cyber security seem to indicate that this threat can actually become real in 2011:

  1. highly effective attacks targeting what one would expect to be the most impenetrable companies, whose bread and butter is cyber security – RSA and Oak Ridge National Lab. The term frequently used to describe these attacks is “Advanced Persistent Threat” – but in reality what hides behind it is a successful spear phishing attack.
  2. repeated exposure of massive amounts of user personal data – names, emails, addresses, and in some cases even dates of birth, credit card numbers (!) and SSNs (!!). Just a couple of breaches in recent months expose the scale of the problem.

Spear phishing attacks have always been considered a highly targeted version of a cyber attack, tailored to the potential victim’s profile (the root of the term – phishing with a ‘spear’ rather than a ‘wide net’). The RSA and Oak Ridge National Lab breaches are yet another confirmation of the efficiency of such attacks. Typically the targets of spear phishing are senior executives (spear phishing is sometimes referred to as ‘whaling’ for that very reason) or companies which represent a hefty prize to the fraudster community.

Could the usually hand-crafted spear phishing attacks be automated and put on a massive scale? I don’t see why they couldn’t (most probably, to some extent they already are). As common knowledge in the industry goes, the simple addition of the victim’s name in the phishing email’s opening line drastically increases the probability of the end user trusting the message (and then clicking the link). Add to it knowledge of the companies the victim has an established relationship with, the phone number (BTW, has anybody thought of automated phone attacks?), the address – and the attack can be personalized to a degree that an ‘average Joe’ stands no chance of distinguishing it from email communication coming from the real business.

To be sure, exposure of user data is in itself a very dangerous phenomenon. In addition to “old-fashioned” identity theft, stolen user data can be applied in other types of attacks – such as password guessing (your name is John and you were born in 1970? The chances that you use one of ‘john1970’, ‘Johny70’, ‘JOHN70’, etc. are vastly higher than for random gibberish). However, marrying phishing attacks with intimate knowledge of the victim’s data may prove to have the most severe and widespread impact.
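To see just how much leaked personal data shrinks the guessing space, here is a rough illustration of my own (a toy, not an actual cracking tool): a handful of name/birth-year patterns yields only a few dozen candidates, versus the trillions of combinations a blind brute-force attack has to wade through.

```python
# Toy demonstration: how few candidates name + birth-year patterns produce.
from itertools import product

def personal_candidates(name: str, birth_year: int) -> set:
    """Generate common password patterns from a name and a birth year."""
    names = {name.lower(), name.capitalize(), name.upper(),
             name.lower() + "y", name.capitalize() + "y"}
    years = {str(birth_year), str(birth_year)[-2:]}  # e.g. "1970" and "70"
    combos = set()
    for n, y in product(names, years):
        combos.update({n + y, y + n, n + "_" + y})
    return combos

guesses = personal_candidates("john", 1970)
print(len(guesses), "candidates, e.g.", sorted(guesses)[:3])
```

A list this small can be tried in a fraction of a second – which is exactly why breached personal data is such potent fuel for password guessing.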

What will happen when spear phishing goes massive? Hopefully, it’ll speed up the adoption of well-known counter-measures. For businesses – discipline in storing user data and adoption of 2FA. For end users – the practice of using different passwords across different sites (reusing one should feel as weird as using the same key to unlock your house, car and office), not clicking on links in emails (should be as weird as opening your door to a stranger) and keeping your personal data away from the rest of the World.

The best cyber security practices are…

…the ones which don’t expect any action or assume any expertise from the end user. Naturally.

I did try to make a case for ‘no substitution for user education’ several years ago. However, clearly, with the explosive penetration of the Internet – now as ubiquitous and essential a service as the phone or even water & electricity – the prospect of having a security-savvy user base, capable of understanding the difference between HTTP and HTTPS, or paypal.com and paypal.abc.com, keeps getting further away. Indeed, the answer to the growing cyber fraud threat cannot rely solely on an assumption of the average netizen’s ability to detect and fight back the ever more sophisticated attacks from the bad guys. Continuing the analogy with physical security, it’s equivalent to saying “let’s assume all good guys have a gun and know how and when to use it to defend themselves”. This strategy might have worked in the Wild West (if it did), but has poor chances in the 21st century’s Cyber World (sorry, NRA).

Not surprisingly, the industry is slowly but surely moving towards, let’s call it, “built-in security”. The shift in mindset could be characterized by security considerations becoming more of a driver and less of an afterthought.

For example, it’s well known that many users chronically fail to patch their computers – operating systems and applications (browsers, PDF readers, the Java VM, etc.). That leaves them wide open to ‘exploits in the wild’ – inevitably resulting in data being stolen and machines being infected and ‘enlisted’ into a botnet. To address this situation, more companies are switching to ‘stealth update’ mode. For instance, unlike its competitors, Google’s Chrome chooses not to ask the user to initiate an update – it does it silently, without users even knowing. Windows 7 seems to adopt the same approach – by default the users are not asked to perform any action to have their operating system patched.

The same rule applies to other security measures. Facebook recently introduced a nice feature enabling switching its traffic to HTTPS. Alas, the option is off by default, and the 600 mln users are expected to go to their account settings and turn it on manually (most probably Facebook was afraid of the cost of a wholesale move to HTTPS). Again, Google shines here. Not only did it move its entire Gmail service to HTTPS well before Facebook did, it also made it universal and on by default – no user action was expected. I bet the vast majority of Gmail users didn’t even notice the change. Another, less known, example is the recently introduced HTTP Strict Transport Security, which allows web servers to block non-secure (or even suspicious) connections in order to prevent man-in-the-middle attacks. Again, “average” users need not even know the mechanism exists.
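Fittingly for “built-in security”, the whole mechanism boils down to a single response header the server sends over HTTPS; a conforming browser then refuses plain-HTTP connections to that host for the stated period (here, one year in seconds):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

No user action, no user awareness required – the browser simply stops offering the insecure option.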

These trends are bound to gain momentum. I imagine more and more companies will switch to HTTPS in the near future, and patching will not require user confirmation by default (perhaps leaving an “ask me first before updating” option – off by default – for tech-savvy – or picky – users). More services will move away from simple password-based authentication. Microsoft Security Essentials will become an integral part of the Windows OS (if anti-trust allows them to do so). Applications will become increasingly sandboxed. And so on…

This is not to say that one day you will be able to survive in the Cyber World without some basic knowledge and prudence – just as you need some common sense in everyday life, from how to cross the street to avoiding dangerous neighborhoods. However, that knowledge should be kept to a minimum, be intuitive, be transparent, and belong to the public domain and even the school (kindergarten?) curriculum. In the end the rules should be simple enough that – unless you are striving for the Darwin Award – by following them you are not risking your (cyber) well-being. The rest should be taken care of by the smart technology. Ideally.

Superiority of the “known good” over the “known bad”

Okay, some definitions first:

  • The “known bad” strategy implies covert collection of attributes used by the fraudsters – first of all devices, but also email addresses, phones, etc. – in order to be able to detect their repeat usage. It’s essentially a blacklisting technique, implying that if you are not blacklisted, you are good to go.
  • “Known good” is pretty much the opposite – it’s an overt policy of collecting the attributes – first of all devices, but also email addresses, phones, etc. – to gain the necessary assurance that they are legitimately used by the good guys. It’s effectively whitelisting, implying that if you are not whitelisted, you are a potential suspect. Naturally, to make an attribute whitelisted (or to mark it as ‘trusted’), the user will have to go through a certain verification process. For example, to whitelist a machine the user will have to enter a code sent via email or SMS (essentially, following a 2FA approach).
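The “known good” flow above can be sketched in a few lines. This is my own minimal illustration – names and in-memory storage are assumptions; a real system would persist hashed identifiers, expire entries, and rate-limit the codes:

```python
# Minimal sketch of a "known good" device whitelist with 2FA-style
# verification: unknown devices must confirm a one-time code sent
# out-of-band (email/SMS) before they become trusted.
import secrets

class DeviceWhitelist:
    def __init__(self):
        self.trusted = {}   # user -> set of trusted device ids
        self.pending = {}   # (user, device_id) -> outstanding code

    def is_trusted(self, user, device_id):
        return device_id in self.trusted.get(user, set())

    def start_verification(self, user, device_id):
        """Generate a one-time code to deliver via email/SMS (2nd factor)."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        self.pending[(user, device_id)] = code
        return code

    def confirm(self, user, device_id, code):
        """Whitelist the device only if the delivered code matches."""
        if self.pending.get((user, device_id)) == code:
            self.trusted.setdefault(user, set()).add(device_id)
            del self.pending[(user, device_id)]
            return True
        return False

wl = DeviceWhitelist()
code = wl.start_verification("alice", "laptop-1")
wl.confirm("alice", "laptop-1", code)
print(wl.is_trusted("alice", "laptop-1"))
```

Note how the default answer for an unseen device is “not trusted” – the defining property of whitelisting, and the opposite of the blacklisting default.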

Now, the traditional strategy adopted by the cyber security guys has always been the first one – just like in “offline” life, where we all enjoy the presumption of innocence (unless we slide into a totalitarian form of government) and where the “blacklists” are for a few suspected criminals. It definitely is the more intuitive and, to a certain degree, effective way of raising the bar in online security. However, it becomes increasingly inefficient as fraudsters get more sophisticated in hiding their identity. Indeed, only lazy or grossly uneducated fraudsters do not delete their cookies (historically, the number one way of identifying a device) today. Adobe’s FSO – which succeeded the cookie – is next to fall. Soon the larger fraudster community will discover the beauty of sandboxing. In essence, it’s a matter of the appropriate tools being developed and made available on the “black market” – the average fraudster doesn’t even have to know all the gory details to use them. Thus, as I mentioned in my previous post, device fingerprinting is pretty much doomed.

By contrast, the “known good” strategy is increasingly gaining traction with online businesses. Initially unpopular, since it introduces another hoop for legitimate users to jump through (businesses hate that), it by definition works much better. Fraudsters now need to get access to the victim’s email account or cellphone, or hack the computer, to get around it (it should also be mentioned that, on a conceptual level, the superiority of whitelisting over blacklisting is apparent in many other cases – such as keeping user input under control).

The switch to “known good” is not a painless exercise and, yes, it introduces an additional hurdle for the business, but it may prove to be the cheapest way of putting a dent in losses by making account takeovers much more difficult. Both in terms of nuisance to the users and in cost it fares much better than some of the extra measures I see on many websites – such as selecting an image, asking additional questions, etc. – thus my take is that the popularity of the “known good” approach will continue to rise.

Device fingerprinting to fight fraudsters? Please…

“Machine/device fingerprinting” technologies allow collecting and recording unique traces of individual devices. This technique has primarily been used for tracking bad guys and making it difficult for them to repeatedly use the same device for nefarious purposes. Typically a client-side script collects information (a “fingerprint”) about the device, which is subsequently stored on the server side. Today several vendors on the market offer various patented ways of collecting the device data (including the internal clock, screen parameters, OS data, etc.). The recently announced and hyped “evercookie” is an example of open source code offering even more innovative ways of doing the same. Alas, while the sophistication of these techniques is impressive, it doesn’t take equal sophistication for the fraudsters to neutralize (or neuter – if you will) these measures and completely circumvent device identification. Using a virtual machine (or a simple sandbox), not to mention avoiding the use of browsers altogether in mounting cyber attacks, is a sufficient antidote to the pains companies go to in order to “fingerprint” fraudsters’ devices. Indeed, it’s only a matter of time before the fraudster community fully adapts to the “fingerprinting” technologies…
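The core idea is simple enough to sketch in a few lines – hash a bundle of device attributes into a stable identifier. The attribute names below are hypothetical examples of mine; real vendors collect far more signals (fonts, plugins, clock skew, and so on):

```python
# Toy illustration of device fingerprinting: derive a stable identifier
# from a set of observable device attributes.
import hashlib

def fingerprint(attrs):
    # Sort the keys so identical attribute sets always hash identically,
    # regardless of the order they were collected in.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1) ...",
    "screen": "1280x800x24",
    "timezone": "UTC-5",
    "os": "Windows 7",
}
print(fingerprint(device))
```

The fragility is visible right in the sketch: change any single attribute – exactly what a fresh virtual machine does wholesale – and the identifier changes completely.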

Having said that, device “fingerprinting” is far from dead – it is definitely finding a second – perhaps more significant – life in the growing trend of “average” user tracking – e.g. serving the advertising industry. Here the average netizen – being far less sophisticated than the average fraudster – is pretty much powerless against it (unless tracking is made illegal by law). Device fingerprinting will not lose its value; it’s just, IMHO, that its days as a way of fighting fraud are numbered.

Is Conficker a (nuclear) time bomb?

The Conficker malware generates a lot of buzz these days. No wonder – it represents a new generation of highly sophisticated, general-purpose malware platforms rapidly spreading over unsuspecting users’ machines. Conficker is in more than one way state-of-the-art malware:

  • Highly efficient
  • Applies the latest encryption technologies
  • Hides itself in the most sophisticated ways
  • Updates itself in a virtually unstoppable way

Not surprisingly, it targets Windows machines (the main platform used across the World). Currently up to 10 mln machines are infected with Conficker. Another remarkable feature of the worm is that up until recently it hadn’t really caused any significant damage – yet. We know it just hangs in there waiting for instructions to come from the “mother ship”. When and how it’ll strike is anybody’s guess. At the same time – judging from the behavior so far of the guys behind Conficker – they will use the platform for many “mini-explosions” (ideally unnoticed) rather than one big “blast”. It’s anything but a one-time-use platform.
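The “virtually unstoppable” update channel deserves a word. Conficker is known for deriving its rendezvous domains from the current date, so the worm and its operators independently compute the same long daily list – and defenders would have to pre-register every domain to cut the channel. Below is a simplified, hypothetical sketch of that idea (my own illustration, not Conficker’s actual algorithm):

```python
# Simplified sketch of a date-seeded domain generation scheme: both
# sides derive the same domain list from the date, no hardcoded
# command-and-control address needed.
import hashlib
from datetime import date

def daily_domains(day, count=5):
    """Derive `count` pseudo-random rendezvous domains for a given day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        domains.append(digest[:10] + ".com")
    return domains

print(daily_domains(date(2009, 4, 1)))
```

Since the list is deterministic, infected machines need no hardcoded server to take down – which is precisely why blocking the update mechanism proved so hard.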

For details you are welcome to go through a presentation I recently put together to raise awareness of Conficker with my colleagues:

Summary: Like any other malware which infects end user machines, it’s very powerful and may render the bulk of traditional anti-fraud tools & technologies useless. Its possibilities are virtually limitless – from DDoS and spamming to keylogging and information stealing. But the sophistication of Conficker compared to its more primitive trojan predecessors takes the challenge to the next level (I am sure we’ll witness more Conficker-like trojans on the market – fraudsters have their own “arms races”).

What could companies do to be ready for Conficker? I can’t think of anything other than educating end users and perhaps mandating (or providing incentives for) the installation of virus protection software on user machines (the trend has already started). Using 2FA will definitely slow the bad guys down, but by no means is it the definitive remedy for Conficker and its kind. How efficient all these measures are will become apparent in the upcoming years.

Is there an alternative to user education?…

…in the global fight against fraud? IMHO, there isn’t. Although I am not – by any stretch of the imagination – the first one to arrive at this conclusion, nobody has come up with a working idea of how we can realistically move the needle in this direction.

I recently had a chance to present a hastily-put-together “Cyber-security 101 – Defensive Browsing for Everyone” presentation* to a not-necessarily-technical audience. A friend of mine joked after the presentation – “most of them will never use the Internet again” 🙂 While that wasn’t really my intention, I can’t but acknowledge that the sheer number of steps to be taken, “rules of thumb” and details to pay attention to in order to remain safe online can be pretty daunting to an average surfer. Bridging that knowledge gap for the “masses” seems to be – so far – an insurmountable challenge for the industry.

Now, as a humble “soldier” in this fight, I have worked out my own tricks to convey the message. For example, I consider cyber crime to have a lot in common with crime in the physical world – a phenomenon the average person is much more familiar with, either personally or from books and movies. Consequently, when evangelizing “defensive browsing” I use this analogy to explain the cyber “equivalent” of each concept. From my past experience, it generally proves to be quite effective.

For instance:

  • Browser – the door between your house (in this case perhaps an RV) and the street
  • Unpatched PC – a poorly locked door leaving you increasingly vulnerable to all the potential thieves in the neighborhood (in the case of the Internet, the ‘neighborhood’ is the whole World, including criminals who are beyond the American justice system)
  • Clicking on a link in an email – opening the door without checking who’s on the other side; alternatively – rushing to a place suggested by a letter in the mail
  • Anti virus – pest control in the house
  • Browsing suspicious sites – strolling in known bad neighborhoods at night
  • Phishing site – an impostor pretending to be your cleaning person’s ‘cousin’ to get the keys to your house
  • Open Wi-fi (with no additional precautions) – a place where the bad guys can easily hook you with a tracking device, a bug or a video surveillance device

In a way, cyber security can be viewed as an extension of our physical security, so the analogies are really limitless. Making the connection between them is the first step in educating crime-aware and responsible “netizens”.

*[update] I’ve put the presentation here – click on the image to view the slides
Cyber security 101

Passwords are passé

It’s clear. Authenticating users via passwords is hopelessly outdated – the sooner online businesses (the ones serious about keeping their customers safe) understand this, the better. Security questions are of no substantial help – they just put dying passwords on short-lived life support. An IP/cookie check on the server side (if any exists, of course) helps, but only incrementally, as there are known workarounds actively used by the fraudster community. The only – as of today – viable improvement qualitatively raising the bar is 2FA.

Many would say 2FA might be overkill for most of our online authentication needs. Well, I could definitely argue with this statement – at least in 90% of cases. For example, our email box contains extremely valuable information about us – enabling identity theft, great for waging a spear phishing attack, or simply letting a thief learn about your immediate plans in order to conduct “brick and mortar” theft. Not to mention social network accounts – they are remarkable in keeping a comprehensive log about their owners – contacts, friends, photos, status, communication – all in one place! In other words, the wet dream of a whole line of businesses – illegal as well as legal ones. And what – a pathetic password is the single key to this wealth of information? Hell, no!

That said, 2FA is far from bulletproof (e.g. it’s susceptible to a particular type of client-side attack). However, there’s little doubt that 2FA is the next major step in securing users’ identities online, and that this is the direction the industry will move in (finally quitting the search for a cheap alternative) over the next several years.
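For the curious, one widely deployed second factor is the time-based one-time password of authenticator apps and hardware tokens, standardized in RFC 6238. A compact sketch (the secret below is the RFC’s published test secret, not anything real):

```python
# Sketch of RFC 6238 time-based one-time passwords (TOTP): HMAC the
# current 30-second time step with a shared secret, then dynamically
# truncate the digest to a short numeric code.
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"                     # RFC 6238 test secret
print(totp(secret, at=59))                           # "287082" per the RFC test vectors
```

The code is worthless seconds after it’s generated – which is exactly what makes a stolen password alone insufficient.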
