Monday 11 November 2013

AI Startup Vicarious Claims Milestone In Quest To Build A Brain: Cracking CAPTCHA

Robert Hof
Can machines think? Not yet. But there is at least one partial test: the CAPTCHA, or “Completely Automated Public Turing test to tell Computers and Humans Apart,” those distorted characters you have to type into a website that wants to block automated programs from spamming or posting comments on blogs. Because CAPTCHAs by definition are intended to be recognizable only by humans, they’re widely considered one test of whether a machine can at least display a visual understanding close to that of people.
Startup Vicarious will release the results of a test, shown in a video, that it says shows its early prototype software can solve CAPTCHAs reliably. In particular, two of the three-year-old company’s cofounders, Dileep George and D. Scott Phoenix, say the system can solve Google’s reCAPTCHA, the most widely used test of a computer’s ability to act like a human being.
In the tests shown in the video, the system scans the CAPTCHA and presents a list of possible answers, often topped by the correct one. The company claims it achieves 95% accuracy per letter on reCAPTCHA, and that it solves whole reCAPTCHAs 90% of the time. That compares with essentially 0% for state-of-the-art algorithms cited in a Microsoft Research paper. Even a solve rate of 1% is considered to beat the CAPTCHA system.
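A quick back-of-envelope check puts those two figures in perspective. Assuming a typical reCAPTCHA of the era shows roughly six distorted characters (a length the article does not specify), per-letter accuracy of 95% would predict a much lower whole-string solve rate if each letter were recognized independently:

```python
# Sketch: compare the reported per-letter and whole-CAPTCHA accuracy.
# Assumption (not from the article): a six-character CAPTCHA.
per_letter = 0.95   # reported per-letter accuracy
n_chars = 6         # assumed CAPTCHA length

# If each letter were recognized independently, whole-string accuracy
# would be the product of the per-letter probabilities.
independent_solve = per_letter ** n_chars
print(f"{independent_solve:.1%}")  # about 73.5%
```

That roughly 73.5% is well below the reported 90%, which hints that the system exploits context across neighboring letters rather than classifying each character in isolation.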
While Vicarious doesn’t plan to do anything with its CAPTCHA recognition, it’s a demonstration of the company’s broader goal. Vicarious says it’s creating software, which it calls a recursive cortical network, that thinks and learns like a human, even to the extent of being able to use what we think of as imagination. The company distinguishes its approach from that of many other companies, from IBM to Google to Microsoft and a raft of startups, by focusing on visual perception.
Instead of trying to model and simulate the brain itself, like projects such as the Human Brain Project, George says, Vicarious is trying to identify only the elements of the brain needed for information processing–in particular the neocortex’s ability to understand the structure of the physical world. Vicarious also calls out shortcomings in deep learning, a branch of AI that has produced big advances in image and speech recognition in recent years–most obviously in services such as the iPhone’s Siri and Google’s voice search. That approach, George and Phoenix say, rests on a model of neuron behavior much more primitive than the human brain’s, so it depends on heavy computation and extensive learning from many more examples–10,000 or more in many cases–than even a child. “That is not intelligence,” says Phoenix.
Instead, Vicarious is “trying to do the math behind the processes of the brain,” says Phoenix. It’s the same thinking, he adds, that is behind the obvious fact that “airplanes don’t flap their wings. We’re focusing on lift and thrust vs. feathers and flapping.”
The company even claims its breakthrough is more impressive than the best-known AI demonstration so far: IBM’s Jeopardy-winning Watson computer. Although that was clearly impressive, Vicarious says the approach IBM used doesn’t truly involve understanding of words, because it doesn’t include an understanding of or experience with physical objects, which humans innately have–and which Vicarious claims to have simulated, at least in preliminary fashion, in software. “The brain is trying to model the structure of the world, so the world is another clue” in addition to research on what the brain itself is doing, says George.
One interesting wrinkle is that Vicarious is setting up the system to “imagine” what shapes might be, filling in blanks that humans naturally do. “Perception is a lot about imagination,” says George. “Imagining what you’re seeing is a big part of perception.” He aims to have a system that can “see a dog in the clouds.”
George and Phoenix say the CAPTCHA demonstration is just that, and that its software can be used to solve other sensory perception and even reasoning problems. “We have solved other problems we’re not telling people about yet,” says George. The company plans to do other Turing tests as well.
It’s difficult to assess the company’s technology, since it’s keeping a tight lid on details. George and Phoenix even requested that its location, which is to the east of Silicon Valley, not be identified. When it was pointed out that this was revealed on its employment page, they promptly removed it. The secrecy is understandable, especially given that bad guys who want to beat CAPTCHAs would love to see what they’re doing.
But as a result, even the two experts they referred me to couldn’t provide much insight. Indeed, Luis von Ahn, a member of the Carnegie Mellon University team that coined the term CAPTCHA, says he’s skeptical. “It’s hard for me to be impressed since I see these every few months,” he says–about 50 claims since 2003. Each time, CAPTCHAs are adjusted to foil the bots–which he predicts will happen again as CAPTCHAs go from chiefly text-based to picture-based. “I guarantee they will not be able to break that, because if they could, they’d be announcing a big breakthrough in computer vision.”
George, however, says they are testing CAPTCHAs with colors, 3-D shapes, lighting angles, and other variations.
Indeed, in what is hard to view as a coincidence, Google itself announced on Oct. 25 that its reCAPTCHA system had been improved to make it at once easier for humans and harder for bots. “The updated system uses advanced risk analysis techniques, actively considering the user’s entire engagement with the CAPTCHA—before, during and after they interact with it,” Google said in a blog post. “That means that today the distorted letters serve less as a test of humanity and more as a medium of engagement to elicit a broad range of cues that characterize humans and bots. As part of this, we’ve recently released an update that creates different classes of CAPTCHAs for different kinds of users.”
One company is even trying to make CAPTCHA solving fun and make a little money while it tries to make sure only humans can solve them. A Detroit company called Are You a Human has created a verification system it calls PlayThru, which plays an ad while users play a game to prove they’re human.
Still, CAPTCHAs are merely the demonstration point for the technology, and 90% accuracy even on current versions is clearly a big advance over other methods. Nils Nilsson, an emeritus professor of engineering at Stanford University’s computer science department, said he hasn’t “the foggiest idea of their technology,” but said visual perception is an important part of AI.
However, Nilsson says visual perception isn’t the only avenue AI needs to pursue. Understanding movement and actions is another. “There’s nothing to say that the human brain follows only one method of computation,” he says. In fact, George says the system is trained on videos, not just images, so next on the agenda is recognizing objects in a 3-D scene and recognizing actions or motions.
Vicarious is clearly thinking very long-term, at least by the standards of the usual Silicon Valley startup these days. The founders don’t expect to release any products for at least five years. Indeed, staying out of the tech echo chamber is one reason the company is headquartered outside the usual local tech hubs. The six-person company is backed so far with $16.1 million from Good Ventures, the fund of Facebook cofounder Dustin Moskovitz, and Peter Thiel’s Founders Fund.
Oh, and in case anyone running bots to break CAPTCHAs is wondering, Vicarious will not be releasing its software into the wild.
