All the while Jeff Hawkins was creating the PalmPilot, launching the era of handheld computing, and amassing hundreds of millions of dollars, a big part of his mind was somewhere else. It was somewhere else in 1994 when he dreamed up the Palm’s clever handwriting-recognition system–the first that ever really worked. It was somewhere else a decade later when Hawkins helped spearhead smart phones, which can tap into the Internet and act as organizers besides letting you call home.
As far as the 47-year-old engineer is concerned, all that was mere prelude. His true passion, the one he has pursued on the side through all those years of success, is something entirely different. It is an Einstein-worthy puzzle that has fascinated scientists for centuries: What is the source of intelligence? In recent years Hawkins has been closing in on an answer, and this fall he is set to unveil his most revolutionary product yet: a big-picture theory of how the brain works.
Hawkins lays out the theory in a book that aims for breakthrough status in both neuroscience and the computer world. On Intelligence, co-authored by New York Times science writer Sandra Blakeslee and published this fall, has already won spirited applause from top scientists like Nobel laureates Eric Kandel of Columbia University and James Watson, co-discoverer of the structure of DNA. Michael Merzenich, a brain researcher at the University of California at San Francisco, says, “It could be a big happening in neuroscience.” Such strong endorsements from big-name scientists are rare, and they suggest that Hawkins, despite having no academic credentials in science, really is onto something.
If Hawkins is right, his work will have practical implications far greater than anything he has invented so far. Imagine a computer that not only easily understands your spoken commands but also is smart enough to tell whether you’re talking to a friend or to it. Picture a car whose computer constantly scans surrounding traffic via radar and, when a drunk driver is about to sideswipe you, seizes control to steer you out of harm’s way. Imagine a system that monitors a crowded beach via cameras and, when it spots a swimmer in danger, cries out a warning. Engineers pursuing computers that could perform such intelligent feats have long fumbled because they don’t understand the thing they’re trying to imitate; Hawkins’s work could help them finally succeed at creating digital systems with human-like smarts. It also gives neuroscientists a clarifying aerial view of their subject; if Hawkins is right, his book will serve as a guide to a long-awaited meeting of the minds on how the brain works.
One sign that he may be right is the elegant simplicity of his main ideas–a brilliant stroke in science typically collapses a mountain of data into a mental construct you can slip into your pocket. Hawkins theorizes that intelligence, as well as perception, is largely a matter of the brain’s using memory to make predictions (most of which we’re not consciously aware of, by the way). Achieving such simple-sounding clarity is difficult. Hawkins spent nearly 20 years researching and developing his theory, even as he founded and helped run companies like Palm Computing and Handspring.
Recently he sat in a Starbucks near Boston, taking time to discuss the brain while on an East Coast vacation with his wife and their 15-year-old daughter. Dressed in jeans, T-shirt, and sneakers, he radiates a combination of just-a-regular-guy affability, nerdy exactitude, and scarcely containable enthusiasm. An avid sailor–another of his passions is his 33-foot sailboat, Jakatan–Hawkins seems remarkably little changed from the lanky teen who went to sea in a big round boat. That was a floating platform, 50 feet across, that his dad, a self-employed inventor, built at a Long Island shipyard in the mid-1960s with help from Jeff and his two older brothers. They sold the thing to a Pittsburgh orchestra that used it as a stage for summer concerts–the platform would be floated onto public beaches near New York City for performances. Led by their free-spirited father, says Hawkins, “we were always building weird contraptions. There were projects all over the house, vats of chemicals in the basement, partly rebuilt cars in the garage. We never had much commercial success. But as a child, I got a sense that I could do things.”
Hawkins’s dad steered him toward computers via a brief conversation about choosing his undergraduate major at Cornell. “My father said, ‘This microelectronics stuff looks interesting,’ ” says Hawkins. “So I thought, ‘Maybe I’ll work on that.’ It seemed a reasonable thing to do.”
Actually, it was an insanely great idea. The personal computer era was dawning, and Hawkins’s bachelor’s degree in electrical engineering landed him a job at Intel working on the chips that made it happen. But a few months after he joined Intel in 1979, a new interest grabbed him and never let go. Poring over an issue of Scientific American devoted to the brain, he was galvanized by an essay noting that although gobs of details were known about it, no overarching theory showed how they meshed to conjure up the magic of intelligence. Neuroscience was still waiting for its Darwin. The article, by the late Nobelist Francis Crick, who co-discovered DNA’s structure with Jim Watson before turning to brain research, inspired Hawkins to change careers. He dashed off a letter to Intel chairman Gordon Moore proposing that Intel start a research group on the brain: Penetrating its secrets, Hawkins argued, would show how to emulate its workings with computers.
Intel’s brass said no, recalls Hawkins, viewing the idea as way ahead of its time. (They were right, he adds.) Another rejection followed from MIT, where Hawkins proposed studying the brain to pursue artificial intelligence–AI research at the time was dominated by computer scientists with little interest in biology. In the mid-1980s Hawkins made a final stab at formally joining the brain game. By then a rising star at Grid Systems, a laptop-computer pioneer in Silicon Valley, he quit to study biophysics at the University of California at Berkeley. Again he slammed into a bureaucratic barrier: The school wouldn’t let him pursue a degree combining brain and computer studies. So after two years as a student he rejoined Grid, determined to make enough money to do his own thing in neuroscience.
Hawkins founded Palm Computing in 1992 and was soon on his way to becoming very rich indeed–at the height of the dot-com bubble, he was a billionaire on paper, and he’s still worth more than $100 million. Yet wealth didn’t immediately buy Hawkins the freedom he had expected. Startups built around legends can’t do without them, he learned; he couldn’t drop everything and go into brain research without sinking the fortunes of his pals at Palm and, later, Handspring. “When everyone is depending on you for their kids’ college-education money, it’s not something you just walk away from,” he says. He still hasn’t left–he’s chief technology officer at palmOne, a successor to the startups.
But over the years he has cut back on his day job to pursue his theory. He focused on the neocortex, the most recently evolved part of the mammalian brain and the component most central to human intelligence–it’s the home of conscious perception, thought, language, and purposeful movement. If you could pop your hood and look in, you’d see your neocortex right on top–it’s a pinkish-gray sheet of cells that seems shrink-wrapped around the ancient parts of the brain. As Hawkins puts it, this gray matter, with its ridges and valleys, resembles a cauliflower. But zoom in, and you’d find that it actually consists of six interconnected layers of neurons, each only as thick as a business card. If stretched out flat, the layered sheet would be the size of a large dinner napkin. That may not seem very big, considering that it’s the real you. But it’s huge compared with, say, the gray matter of a rat, which would barely cover a postage stamp.
For decades researchers analyzed the neocortex as a mosaic of regions devoted to specific functions, such as hearing, seeing, and motor control. They viewed themselves as gray-matter Magellans, charting the various regions and discovering their different operating principles. That pursuit has yielded a wealth of detail about the organization of the neocortex, as well as insights into how it breaks down cognitive tasks into sub-processes handled by interconnected parts. But it hasn’t shed much light on intelligence.
One paper that galvanized Hawkins’s thinking, however, provided a tantalizing clue. Its author, Vernon Mountcastle of Johns Hopkins University, noted that although different parts of the neocortex serve different functions, the tissue has the same six-layered neuronal structure throughout. That led to a leap: Perhaps the different parts all perform the same basic operation, and their differing functions arise from the way they’re connected to other regions of the brain rather than, as many thought, from basic differences in the way they work.
“I nearly fell out of my chair when I first read” the paper, says Hawkins, for it illuminates an array of mysteries. Consider the fact that people who suffer loss of faculties after strokes often recover at least some of their lost abilities over time. That makes sense–an intact part of the neocortex can readily fill in for a permanently injured area, given that both parts do the same basic thing. But what is that thing?
An answer occurred to Hawkins soon after, when he was musing in his home office. What would happen, he wondered, if someone had left in the room, say, a blue coffee cup he’d never seen before? The answer may seem trivial: It would catch his attention as not belonging. But the automatic effortlessness of such acts of recognition suggests something important: that the brain specializes in them. Perhaps, he conjectured, that’s actually the neocortex’s main activity. That is, maybe it is continually drawing on memory to make predictions and create expectations about sights, sounds, or other inputs before they arrive on the mental scene. Novelties jump out at us because they clash with such predictions.
We’re usually not aware of those anticipations, for the neocortex processes information millisecond by millisecond at different levels of detail, most of which are subliminal. When the “lower” layers of the neocortex process the raw data of vision, for instance, a running stream of expectations is pulled from memory based on what we have just seen–confronted with, say, a face, the brain’s visual systems are cued to perceive eyes, nose, and other features in certain relative positions as our eyes scan its detailed structure. If what they detect conflicts with these expectations–a missing eye, for instance–a kind of shockwave of unfamiliarity rises through the neocortex, attracting our attention.
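The mechanics are easier to grasp in miniature. Here is a minimal sketch in Python of the memory-prediction idea as Hawkins describes it–an illustration of ours, not code from the book, with invented names like MemoryPredictionSketch: a program that memorizes which input tends to follow which, predicts the next one from that memory, and flags anything that clashes with the prediction.

```python
from collections import defaultdict

class MemoryPredictionSketch:
    """Toy model of the memory-prediction idea: remember which
    input tends to follow which, predict the next one, and flag
    inputs that clash with the prediction."""

    def __init__(self):
        # Successor counts: last input seen -> {next input: count}.
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def predict(self):
        """Return the most frequent successor of the current context."""
        followers = self.transitions.get(self.prev)
        return max(followers, key=followers.get) if followers else None

    def observe(self, current):
        """Feed one input; return (prediction, was it violated?)."""
        predicted = self.predict()
        surprised = predicted is not None and predicted != current
        if self.prev is not None:
            self.transitions[self.prev][current] += 1
        self.prev = current
        return predicted, surprised

model = MemoryPredictionSketch()
# Learn a familiar routine by repetition...
for _ in range(5):
    for item in ["door", "desk", "lamp", "mug"]:
        model.observe(item)
# ...then encounter a novelty where the mug should be.
for item in ["door", "desk", "lamp", "blue cup"]:
    predicted, surprised = model.observe(item)
    if surprised:
        print(f"expected {predicted!r}, saw {item!r} -- novelty!")
```

The real neocortex, of course, does this across millions of neurons and many levels of detail at once; the sketch shows only that memory, prediction, and comparison are enough to make a novelty pop out.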
Hawkins’s book explores that theme, showing that it gives order to much of the confusing jumble of knowledge about the brain. Perhaps 50% or more of the visual system’s neural wiring, for instance, is thought to carry information in a direction you wouldn’t expect: from higher parts of the neocortex–the home of conscious perceptions, ideas, and memories–to lower parts that crunch incoming data from the eyes and deal with the shapes, lines, colors, and boundaries of things.
Researchers have long known of the surprising “top-down” flow of information. But they’ve given it little attention, instead studying the bottom-up circuitry. Hawkins, in contrast, spotlights top-down processing–it must be crucial if the neocortex does its thing by downloading predictions to compare with incoming information. This emphasis “is very timely,” says neuroscientist Malcolm Young at Britain’s University of Newcastle upon Tyne. “A current wave of experiments is showing that what neurons do is listen both to what the eye, for example, is telling the brain, and also to onboard information about what’s likely, given a particular scene.”
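A hypothetical two-level sketch (again ours, with invented names, not anything from Hawkins’s book) shows why that top-down wiring matters: the higher level, having settled on “face,” sends its expected features down, and only mismatches travel back up.

```python
# Hypothetical two-level hierarchy: top-down predictions flow down,
# and only violations of those predictions flow back up.

EXPECTED = {
    "face": {"left eye", "right eye", "nose", "mouth"},
}

def lower_level(detected, top_down_prediction):
    """Compare raw incoming features with what the level above predicts."""
    expected = EXPECTED[top_down_prediction]
    missing = expected - detected
    unexpected = detected - expected
    return missing, unexpected

def higher_level(missing, unexpected):
    """A confirmed prediction stays quiet; a mismatch grabs attention."""
    if not missing and not unexpected:
        return "prediction confirmed -- nothing to notice"
    return f"attention! missing: {sorted(missing)}, unexpected: {sorted(unexpected)}"

# Bottom-up data matches the top-down prediction: no alarm.
print(higher_level(*lower_level(
    {"left eye", "right eye", "nose", "mouth"}, "face")))

# A missing eye violates the prediction and rises through the hierarchy.
print(higher_level(*lower_level(
    {"left eye", "nose", "mouth"}, "face")))
```

The design mirrors the wiring Young describes: the lower level listens both to the senses and to onboard expectations, and only the discrepancy travels upward.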
The auditory system works similarly, which is why we can make out what someone is saying in a noisy room (a feat notoriously hard to replicate with computers). The neocortex continually fills in from memory what our ears miss. That same filling-in explains why Hawkins’s mother twice took him to the doctor to get his hearing checked when he was a child: his ears were fine, but he tended to hear what he expected to hear rather than what was said.
Intelligence and creativity can both be described in terms of Hawkins’s theory. Intelligence, he argues, is simply the ability to form an internal model of the world and use it to make predictions that abet survival. This again is deceptive simplicity–the idea has deep implications. Such as: There’s no fundamental difference between human and animal intelligence. Your cat’s world model, for instance, enables it to cleverly predict which of your buttons to push to get fed. In fact, Hawkins argues that lower animals also possess rudimentary powers of memory and prediction–even a one-celled creature has a teensy claim to intelligence as it scours its microworld for nutrients.
Hawkins doesn’t deny we’re special, though. We have uniquely intricate world models and unequaled ability to spin complex predictions by analogy, a facility greatly amplified by language. It’s no wonder, he notes, that IQ tests are all about making predictions, with questions like: Given a sequence of numbers, what should the next one be?
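That sequence question can itself be answered by a tiny act of memory-based prediction. The snippet below–our toy example, not anything from the book–extrapolates a number sequence by matching it against two stored patterns, a constant difference and a constant ratio.

```python
def predict_next(seq):
    """IQ-test-style extrapolation: match the sequence against
    stored patterns and predict the next term."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:           # arithmetic pattern: 2, 5, 8, 11 -> 14
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:
        return seq[-1] * ratios[0]     # geometric pattern: 3, 6, 12, 24 -> 48
    return None                        # no remembered pattern fits

print(predict_next([2, 5, 8, 11]))    # 14
print(predict_next([3, 6, 12, 24]))   # 48.0
```

A test-taker, in Hawkins’s terms, is doing the same thing with a vastly richer store of remembered patterns.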
Creativity is, at its heart, just predicting by analogy, in Hawkins’s view. Among other things, that means you can train yourself to be more creative. “You need to let your mind wander,” he writes. “You need to give your brain the time and space to discover the solution. Finding a solution to a problem is literally finding a stored pattern in your cortex that is analogous to the problem you are working on. If you are stuck on a problem, the memory-prediction model suggests that you should find different ways to look at it to increase the likelihood of seeing an analogy with a past experience…. Try taking the parts of your problem and rearranging them in different ways–literally and figuratively. When I play Scrabble, I constantly shuffle the order of the tiles. It isn’t that I hope the letters will by chance spell a new word, but that different letter combinations will remind me of words or parts of words that might be part of a solution.”
For all his high hopes about the book, Hawkins cautions that some of his ideas may be wrong, and that all of them are likely to be revised in time–in an appendix, he suggests a dozen experiments to vet them. And he freely concedes that few of his theory’s concepts are novel. What’s new is his clear, provocative synthesis. That may sound like a minor contribution, but it’s not. Brain researchers who have read the book agree that Hawkins’s penchant for boldly hacking through the data jungle to arrive at a sweeping view is just what their discipline needs. “It’s particularly easy for neuroscientists to get lost in details,” says University of Newcastle professor Young. Hawkins’s theory should be a “very helpful shock” to the field, potentially helping to set its agenda for years to come.
Outside the brain-research field, the practical implications of Hawkins’s work are likely to get the most attention. For one thing, he says, you should forget about robots; he views the pursuit of humanoid machines such as Star Wars’ R2-D2 as silly. Instead, he foresees a world filled with intelligent systems that won’t be like us at all. Full-fledged human cognition, including the drives and emotions we’ve inherited from our animal ancestors, will be superfluous to these powerful machines. Rather, they’ll mostly be specialized, disembodied intelligences with nonhuman sensors. The “eyes” and “ears” of a weather-predicting system might be a worldwide array of sensors that monitor things like temperature and wind speeds. The machine would use vast amounts of memory and lightning processing speed to create the cyber-equivalent of expectations. As it “looks” across the globe, the system would see the atmosphere’s intricate dance in terms and at a level of detail no human could perceive or comprehend. After watching for a while, it might be able to foresee, among other things, exactly where the awful dervish of a hurricane is headed.
Speech-recognition systems should get a lot better at recognizing our words in noisy places. Vision systems would be able to tell the difference between someone knocking on a door to deliver a gift and someone trying to break in with a crowbar. Security systems might discern signs of terrorist activity in data flowing from a network of cameras spanning a city. (Hawkins is aware of risks like computerized Big Brothers, but he sees them as relatively easy to handle compared with issues surrounding genetic and nuclear technology.)
Intelligent-systems development is set to take off, he says, adding, “It’s not too early to start businesses doing this.” But he isn’t eager to play entrepreneur again. “I need a break,” he says. “I might be willing to sit on boards and help fund some startups [developing novel systems based on his theory]. But what I really want is to create a self-sustaining movement. My goal is to get young scientists and engineers to read my book. The ones that get it will self-select, and [the movement] will happen.”
To help get things started, in 2002 he founded the Redwood Neuroscience Institute, a Menlo Park, Calif., center for studying models of memory and cognition. While it isn’t heavily focused on Hawkins’s ideas, one of its researchers is working on a simple “memory prediction” system to identify written characters. Professor Young says his team also plans to begin applying Hawkins’s ideas soon. Memory-prediction systems, he adds, may be useful for analyzing visual images, guiding stock investments, and searching the Internet.
Despite such developments and the lavish kudos for Hawkins’s book, his theory’s power to change the world is far from assured. Many artificial-intelligence experts, for example, still tend to view biology as a mathless mess–they may ignore Hawkins’s ideas. Some brain scientists may complain that he fails to give sufficient credit to the authors of ideas and data he has drawn on. (Doing that would have taken a huge effort for little gain, he says.) Still, he has an uncanny record of spotting technology revolutions about to take off and then personally making sure they do. If he’s done it again, he’ll have an easy time winning over skeptics–his computerized brainchildren will do the job for him.