New AI System Makes CAPTCHAs Even More Vulnerable to Hacks

By George Dvorsky

Computer scientists have developed an artificially intelligent system that’s an improvement over existing techniques used to crack CAPTCHAs, those super-annoying prompts that check to see whether you’re a human or a bot. For security experts, it means that existing CAPTCHA-based systems may soon be obsolete—if they aren’t already.

AI developers have been trying to defeat CAPTCHAs for years, but these systems require a lot of digital horsepower to do so. Challenge-response tests that present jumbled and distorted characters in various fonts and configurations are super-tough for machines, but not so tough for humans. We don’t have a problem picking out the letters, inferring characters from warped shapes, or telling two overlapping letters apart. Our highly adaptable brains allow us to do this and AIs, for the most part, are narrow systems that can’t think very well outside of the box.

That’s why Dileep George, co-founder of Vicarious, has incorporated insights from neuroscience to “train” a computer to generalise beyond what it’s primarily taught. His new system, called the Recursive Cortical Network (RCN), is apparently able to parse the CAPTCHA test more effectively than previous models, and with less training. This means that challenge-response systems can now be cracked with greater efficiency, leaving sites increasingly vulnerable to bots. The new research was published yesterday in Science.

Most CAPTCHA-defeating systems are trained on literally millions of pre-labelled CAPTCHA image examples, or they’ve been equipped with specific rules about how to discern each type of image. But like the human brain, the new system can apparently learn and generalise using just a few examples. George says RCN is 300 times more data-efficient than previous techniques, and it works by making assumptions about the visual world. As George explained to NPR:

During the training phase, it builds internal models of the letters that it is exposed to. So if you expose it to As and Bs and different characters, it will build its own internal model of what those characters are supposed to look like. So it would say, these are the contours of the letter, this is the interior of the letter, this is the background, etc. And then, when a new image comes in ... it tries to explain that new image, trying to explain all the pixels of that new image in terms of the characters it has seen before. So it will say, this portion of the A is missing because it is behind this B.
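To make the idea concrete, here’s a toy sketch in Python of that “explain the pixels” approach. This is not Vicarious’s actual RCN — the glyphs, scoring function, and weights are all invented for illustration — but it shows the same two ingredients George describes: a model learned from a single example per character, and recognition that tolerates pixels hidden behind another letter.

```python
# Toy illustration (NOT the real RCN): learn each character from one
# example, then pick the stored model that best "explains" a new image,
# tolerating pixels that are occluded.

# 5x5 binary "images"; these tiny glyphs are invented for the demo.
A = ["01110",
     "10001",
     "11111",
     "10001",
     "10001"]

T = ["11111",
     "00100",
     "00100",
     "00100",
     "00100"]

def pixels(img):
    """Set of (row, col) foreground pixels in a binary string grid."""
    return {(r, c) for r, row in enumerate(img)
                   for c, ch in enumerate(row) if ch == "1"}

# "Training" on one example per class: the model is just the pixel set.
models = {"A": pixels(A), "T": pixels(T)}

def explain(image, models):
    """Return the character whose model best explains the image:
    reward explained pixels, penalise unexplained ones, and apply only
    a mild penalty for model pixels absent from the image (occlusion)."""
    img = pixels(image)
    def score(template):
        explained = len(img & template)
        unexplained = len(img - template)   # image pixels no model accounts for
        occluded = len(template - img)      # model pixels missing, e.g. hidden
        return explained - unexplained - 0.5 * occluded
    return max(models, key=lambda k: score(models[k]))

# An "A" with part of its crossbar hidden (as if behind another letter)
# is still explained best by the A model.
occluded_A = ["01110",
              "10001",
              "11100",   # right side of the middle bar occluded
              "10001",
              "10001"]
print(explain(occluded_A, models))  # prints "A"
```

A real system would of course search over positions, scales, and distortions, and model contours and letter interiors separately, as George describes — but the core move is the same: new pixels are explained in terms of characters seen before, so missing strokes can be attributed to occlusion rather than treated as a different letter.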

CAPTCHA systems vary greatly around the internet, but RCN proved to be highly adaptable, solving reCAPTCHAs and BotDetect CAPTCHAs about two-thirds of the time, and Yahoo and PayPal CAPTCHAs roughly 57 per cent of the time. It’s not perfect, but it’s a step in the right direction — and by “right direction”, the researchers mean systems that can visually reason like humans. Ultimately, the researchers are working towards generalised AI that functions similarly to the human brain.

“This has been a long time coming,” Marc Goodman, author of Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, told Gizmodo. “CAPTCHAs depend on human vision being better than computer vision. Unfortunately that’s changing rapidly and computer vision is getting as good—and will soon be potentially better—than human vision. When that happens, all image-related authentication systems will come under threat as artificial intelligence-based computer vision systems will be able to solve the same exact puzzles that people can.”

For some sites, this means CAPTCHA-based security systems will become obsolete. One solution is to make CAPTCHAs more visually difficult, but that’ll just frustrate humans, who will gradually find themselves unable to pass the majority of CAPTCHAs. Another option is to devise entirely new authentication systems — and in fact, Google has already killed the CAPTCHA.

“Ultimately, system authentication will be based more upon biometrics, and in particular, behavior metrics, which track and measure things about you and your behaviors to authenticate you,” said Goodman. “For example, the accelerometer within your phone nowadays is being used by many financial apps to uniquely fingerprint how you hold your phone when you enter your password. We all do that uniquely, and that is a type of digital fingerprint that can help authenticate you on websites and apps. We will see much much more of that in the future.”

One thing’s certain: an arms race is currently under way in which humans are finding it increasingly difficult to prove they’re the real deal. It’s frightening to imagine, but the day may arrive when it’ll be practically impossible for a human to prove their human-ness over the internet. [Science]