Published: July 31, 2007

In June 2002, Heidi Cullen, a researcher at the National Center for Atmospheric Research in Boulder, Colo., received a telephone call from an executive at the Weather Channel. Would she audition for a program on climate and global warming that producers at the Atlanta-based cable television network were contemplating?

Dr. Cullen, a climatologist with a doctorate from the Lamont-Doherty Earth Observatory at Columbia University, was dubious. A specialist in droughts, she had no broadcast experience. Moreover, she rarely watched television. She had never even seen the Weather Channel.

“My interests were in trying to find new ways to make climate forecasts practical for engineers and farmers,” Dr. Cullen, 37, said on a recent visit to New York. She had, she said, just gotten a grant from the National Science Foundation, “and I didn’t want to leave what I was doing.”

But the lure of a national audience won out. After a successful tryout, Dr. Cullen packed her clothes, furniture and dog and moved to Atlanta. Today, she is the only climatologist in the country with a Ph.D. who has her own weekly show, “Forecast Earth,” a half-hour video magazine focused on climate and the environment.

Q: What were you studying when you got that call from the Weather Channel?

A: I was trying to understand the large-scale mechanisms that had caused a drought in Afghanistan from 1999 to 2001. I was also working with engineers in Brazil and Paraguay to apply climate forecasts to optimize water resource management at Itaipu Binacional, the largest operational hydropower facility in the world.

I hesitated when I got that call. Television was a world I couldn’t imagine. No one I knew had ever done anything like that.

Q: How did the Weather Channel executives know of you?

A: I think they’d been asking around. They were hunting for a Ph.D. scientist who could explain the science behind climate news. As it happened, my doctoral thesis has a lot of relevance to current affairs. Part of it involved looking at how to use climate information to manage water resources in the Middle East. It’s often said that the next war in the Middle East will be fought over water.

For my thesis, I studied droughts and the collapse of the first Mesopotamian empire — the Akkadian civilization. I was able to show that a megadrought around 2200 B.C. played a role in its demise. I found the proof by examining sediment cores of ancient mud. When one looked at the mud from the period around the Akkadian collapse, one found a huge spike in the mineral dolomite. That substance is an indicator of drought.

Q: What’s the point of knowing this?

A: Because until recently, historians, anthropologists and archaeologists were reluctant to say that civilizations could collapse because of nature. The prevailing theories were that civilizations collapsed because of political, military or medical reasons — plagues. Climate was often factored out.

And yet, indifference to the power of nature is civilization’s Achilles’ heel. I think the events around Hurricane Katrina reminded us that Mother Nature is something we haven’t yet conquered.

Q: Did you have to take lessons in broadcasting techniques?

A: Not at first. I’ve since done some voice training and have become obsessed with the craft of television. It’s important, for instance, to be very still when you’re on camera. My coach says that if you move around wildly, it erodes people’s faith in you. It’s been said to me that 9 times out of 10, the visual trumps what you say on television. I was floored. I had grown up among the cops and firemen of New York’s Staten Island, a world where your word is everything. So when I heard that, it was like, ‘Oh my God, why did I consciously choose to get into this?’

Q: O.K., why did you?

A: Because they were giving me a chance to cover things people need to know more about: global warming, El Niño, energy policy.

Q: It has to be hard to put together a weekly magazine show on one subject. Where do you find your stories?

A: I’ve become a media junkie. I read far more widely now than when I was a researcher. Also, I watch a lot of TV, which means all the news programs, “Frontline,” even ESPN, which I watch to learn how to write punchy leads. I also listen to NPR, check out Greenwire and troll the scientific journals like Science, Nature and Geophysical Research Letters.

My problem is that I think everything climate-related is interesting. In my four years on the job, I’ve learned that just because I think something is interesting doesn’t mean it’ll make for good television. It’s often a challenge to make climate issues visual. When I first began, all we had was a little stock video of droughts in the Sahara with dead animal carcasses, and glaciers falling into the sea. We ran them over and over again. My father, who’s a retired New York City policeman, kept phoning me: “Heidi, are those same glaciers falling again?”

Q: Your coverage of global warming has been controversial. Are you surprised?

A: In a way, yes. To me, global warming isn’t a political issue, it’s a scientific one. But a lot of people out there think you’re being an advocate when you talk climate science.

Last December, I wrote a blog about how reticent some broadcast meteorologists are about reporting on climate change. Meteorologists — they are the forecasters — have training in atmospheric science. Many are certified by the American Meteorological Society. I suggested there’s a disconnect when they use their A.M.S. seal for on-camera credibility and refuse to give viewers accurate information on climate. The society has a very clear statement saying that global warming is largely due to the burning of fossil fuels.

The next thing I knew, I was being denounced on the Web sites of Senator James Inhofe, Matt Drudge and Rush Limbaugh. The Weather Channel’s own Web site got about 4,000 e-mails in one day, mostly angry. Some went, ‘Listen here, weather girl, just give me my five-day forecast and shut up.’

Q: Rush Limbaugh accused you of Stalinism. Did you suggest that meteorologists who doubt global warming should be fired?

A: I didn’t exactly say that. I was talking about the American Meteorological Society’s seal of approval. I was saying the A.M.S. should test applicants on climate change as part of their certification process. They test on other aspects of weather science.

A lot of viewers want to know about climate change. They are experiencing events they perceive as unusual and they want to know if there’s a connection to global warming. Certainly when Katrina hit, they wanted to know if it was global warming or not. Most Americans get their daily dose of science through their televised weather report. Given that fact, I think it’s the responsibility of broadcast meteorologists to provide viewers with scientific answers.

Q: What do your ex-colleagues from academia think of your new career?

A: Oh, they’re so funny. Some of them claim that they haven’t seen me on television because they don’t own one. But when I was being denounced by Matt Drudge, they were all, ‘Hey, saw you on Drudge!’

Actually, a lot of my friends are relieved that there’s at least one scientist out there doing this.


Published: July 31, 2007

When Martin Nowak was in high school, his parents thought he would be a nice boy and become a doctor. But when he left for the University of Vienna, he abandoned medicine for something called biochemistry. As far as his parents could tell, it had something to do with yeast and fermenting. They became a little worried. When their son entered graduate school, they became even more worried. He announced that he was now studying games.

In the end, Dr. Nowak turned out all right. He is now the director of the Program for Evolutionary Dynamics at Harvard. The games were actually versatile mathematical models that Dr. Nowak could use to make important discoveries in fields as varied as economics and cancer biology.

“Martin has a passion for taking informal ideas that people like me find theoretically important and framing them as mathematical models,” said Steven Pinker, a Harvard linguist who is collaborating with Dr. Nowak to study the evolution of language. “He allows our intuitions about what leads to what to be put to a test.”

On the surface, Dr. Nowak’s many projects may seem randomly scattered across the sciences. But there is an underlying theme to his work. He wants to understand one of the most puzzling yet fundamental features of life: cooperation.

When biologists speak of cooperation, they speak more broadly than the rest of us. Cooperation is what happens when someone or something gets a benefit because someone or something else pays a cost. The benefit can take many forms, like money or reproductive success. A friend takes off work to pick you up from the hospital. A sterile worker bee tends to eggs in a hive. Even the cells in the human body cooperate. Rather than reproducing as fast as it can, each cell respects the needs of the body, helping to form the heart, the lungs or other vital organs. Even the genes in a genome cooperate, to bring an organism to life.

In recent papers, Dr. Nowak has argued that cooperation is one of the three basic principles of evolution. The other two are mutation and selection. On their own, mutation and selection can transform a species, giving rise to new traits like limbs and eyes. But cooperation is essential for life to evolve to a new level of organization. Single-celled protozoa had to cooperate to give rise to the first multicellular animals. Humans had to cooperate for complex societies to emerge.

“We see this principle everywhere in evolution where interesting things are happening,” Dr. Nowak said.

While cooperation may be central to evolution, however, it poses questions that are not easy to answer. How can competing individuals start to cooperate for the greater good? And how do they continue to cooperate in the face of exploitation? To answer these questions, Dr. Nowak plays games.

His games are the intellectual descendants of a puzzle known as the Prisoner’s Dilemma. Imagine two prisoners are separately offered the same deal: if one of them testifies and the other doesn’t talk, the talker will go free and the holdout will go to jail for 10 years. If both refuse to talk, the prosecutor will only be able to put them in jail for six months. If each prisoner rats out the other, they will both get five-year sentences. Not knowing what the other prisoner will do, how should each one act?
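The prosecutor’s deal can be written down as a small payoff table. The Python sketch below (the function name and data layout are illustrative, not taken from Dr. Nowak’s models) shows why each prisoner, acting alone, is always better off talking — even though both would do better if both stayed silent:

```python
# Sentences in years of jail for each pair of choices (lower is better).
# Keys: (my_choice, other_choice); values: (my_sentence, other_sentence).
PAYOFFS = {
    ("silent", "silent"): (0.5, 0.5),  # both refuse to talk: six months each
    ("silent", "talk"):   (10,  0),    # holdout gets 10 years, talker goes free
    ("talk",   "silent"): (0,   10),
    ("talk",   "talk"):   (5,   5),    # each rats out the other: five years each
}

def best_response(other_choice):
    """Return the choice that minimizes my own sentence, given the other's choice."""
    return min(("silent", "talk"),
               key=lambda mine: PAYOFFS[(mine, other_choice)][0])
```

Whichever move the other prisoner makes, `best_response` comes back with `"talk"` — defection dominates, which is exactly the tension between individual and collective interest that the dilemma distills.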

The way the Prisoner’s Dilemma pits cooperation against defection distills an important feature of evolution. In any encounter between two members of the same species, each one may cooperate or defect. Certain species of bacteria, for example, spray out enzymes that break down food, which all the bacteria can then suck up. It costs energy to make these enzymes. If one of the microbes stops cooperating and does not make the enzymes, it can still enjoy the meal. It can gain a potential reproductive edge over bacteria that cooperate.

The Prisoner’s Dilemma may be abstract, but that’s why Dr. Nowak likes it. It helps him understand fundamental rules of evolution, just as Isaac Newton discovered that objects in motion tend to stay in motion.

“If you were obsessed with friction, you would have never discovered this law,” Dr. Nowak said. “In the same sense, I try to get rid of what is inessential to find the essential. Truth is simple.”

Dr. Nowak found his first clues to the origin of cooperation in graduate school, collaborating with his Ph.D. adviser, Karl Sigmund. They built a version of the Prisoner’s Dilemma that captured more of the essence of how organisms behave and evolve.

In their game, an entire population of players enters a round-robin competition. The players are paired up randomly, and each one chooses whether to cooperate or defect. To make a choice, they can recall their past experiences with other individual players. Some players might use a strategy in which they have a 90 percent chance of cooperating with a player with whom they have cooperated in the past.

The players get rewarded based on their choices. The most successful players get to reproduce. Each new player has a small chance of randomly mutating its strategy. If that strategy turns out to be more successful, it can dominate the population, wiping out its ancestors.

Dr. Nowak and Dr. Sigmund observed this tournament through millions of rounds. Often the winners used a strategy that Dr. Nowak called “win-stay, lose-shift.” If they did well in the previous round, they did the same thing again. If they did not do so well, they shifted. Under some conditions, this strategy caused cooperation to become common among the players, despite the short-term payoff of defecting.
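The rule itself is simple enough to sketch in a few lines of Python. The payoff numbers below are the textbook Prisoner’s Dilemma values (5 for exploiting a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited), and the “winning” threshold is an assumed aspiration level — none of these particular numbers come from Dr. Nowak’s tournament:

```python
# Payoff to the row player for one round (higher is better).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ASPIRATION = 2  # assumed threshold: payoffs above this count as "winning"

def win_stay_lose_shift(my_last, last_payoff):
    """Repeat the last move after a good round; switch after a bad one."""
    if last_payoff > ASPIRATION:
        return my_last                      # win-stay
    return "D" if my_last == "C" else "C"   # lose-shift

def play(rounds=6):
    """Two win-stay, lose-shift players, starting from a mismatched round."""
    a, b = "C", "D"
    history = []
    for _ in range(rounds):
        pa, pb = PAYOFF[(a, b)], PAYOFF[(b, a)]
        history.append((a, b))
        a = win_stay_lose_shift(a, pa)
        b = win_stay_lose_shift(b, pb)
    return history
```

Traced by hand, two such players who start out mismatched pass through a single round of mutual defection and then settle into mutual cooperation for good — a small illustration of how the strategy can let cooperation take hold despite the short-term payoff of defecting.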

In order to study this new version of the Prisoner’s Dilemma, Dr. Nowak had to develop new mathematical tools. It turned out that these tools also proved useful for studying cancer. Cancer and the Prisoner’s Dilemma may seem like apples and oranges, but Dr. Nowak sees an intimate connection between the two. “Cancer is a breakdown of cooperation,” he said.

Mutations sometimes arise in cells that cause them to replicate quickly, ignoring signals to stop. Some of their descendants acquire new mutations, allowing them to become even more successful as cancer cells. They evolve, in other words, into more successful defectors. “Cancer is an evolution you don’t want,” Dr. Nowak said.

To study cancer, however, Dr. Nowak had to give his models some structure. In the Prisoner’s Dilemma, the players usually just bump into each other randomly. In the human body, on the other hand, cells only interact with cells in their neighborhood.

A striking example of these neighborhoods can be found in the intestines, where the lining is organized into millions of tiny pockets. A single stem cell at the bottom of a pocket divides, and its daughter cells are pushed up the pocket walls. The cells that reach the top get stripped away.

Dr. Nowak adapted a branch of mathematics known as graph theory, which makes it possible to study networks, to analyze how cancer arises in these local neighborhoods. “Our tissue is actually organized to delay the onset of cancer,” he said.

Pockets of intestinal cells, for example, can only hold a few cell generations. That lowers the chances that any one will turn cancerous. All the cells in each pocket are descended from a single stem cell, so that there’s no competition between lineages to take over the pocket.

As Dr. Nowak developed this neighborhood model, he realized it would help him study human cooperation. “The reality is that I’m much more likely to interact with my friends, and they’re much more likely to interact with their friends,” Dr. Nowak said. “So it’s more like a network.”

Dr. Nowak and his colleagues found that when they put players into a network, the Prisoner’s Dilemma played out differently. Tight clusters of cooperators emerge, and defectors elsewhere in the network are not able to undermine their altruism. “Even if outside our network there are cheaters, we still help each other a lot,” Dr. Nowak said. That is not to say that cooperation always emerges. Dr. Nowak identified the conditions under which it can arise with a simple inequality: B/C > K. That is, cooperation will emerge if the benefit-to-cost ratio (B/C) of cooperation is greater than the average number of neighbors (K).
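The rule is compact enough to restate as a one-line check; the function name below is my own, and the numbers in the comment are illustrative rather than from the paper:

```python
def cooperation_favored(benefit, cost, avg_neighbors):
    """Nowak's network rule of thumb: cooperation can take hold when the
    benefit-to-cost ratio of an altruistic act exceeds the average number
    of neighbors each player has (B/C > K)."""
    return benefit / cost > avg_neighbors

# E.g., a favor worth 5 units that costs 1 unit to give can sustain
# cooperation among players with 4 neighbors each (5 > 4), but not
# if each player has 6 neighbors (5 < 6).
```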

“It’s the simplest possible thing you could have expected, and it’s completely amazing,” he said.

Another boost for cooperation comes from reputations. When we decide whether to cooperate, we don’t just rely on our past experiences with that particular person. People can gain reputations that precede them. Dr. Nowak and his colleagues pioneered a version of the Prisoner’s Dilemma in which players acquire reputations. They found that if reputations spread quickly enough, they could increase the chances of cooperation taking hold. Players were less likely to be fooled by defectors and more likely to benefit from cooperation.

In experiments conducted by other scientists with people and animals, Dr. Nowak’s mathematical models seem to fit. Reputation has a powerful effect on how people play games. People who gain a reputation for not cooperating tend to be shunned or punished by other players. Cooperative players get rewarded.

“You help because you know it gives you a reputation of a helpful person, who will be helped,” Dr. Nowak said. “You also look at others and help them according to whether they have helped.”

The subject of human cooperation is important not just to mathematical biologists like Dr. Nowak, but to many people involved in the current debate over religion and science. Some claim that it is unlikely that evolution could have produced humans’ sense of morality, the altruism of heroes and saints. “Selfless altruism presents a major challenge for the evolutionist,” Dr. Francis S. Collins, the director of the National Human Genome Research Institute, wrote in his 2006 book, “The Language of God.”

Dr. Nowak believes evolutionary biologists should study average behavior rather than a few extreme cases of altruism. “Saintly behavior is unfortunately not the norm,” Dr. Nowak said. “The current theory can certainly explain a population where some people act extremely altruistically.” That does not make Dr. Nowak an atheist, however. “Evolution describes the fundamental laws of nature according to which God chose to unfold life,” he declared in March in a lecture titled “Evolution and Christianity” at the Harvard Divinity School. Dr. Nowak is collaborating with theologians there on a project called “The Evolution and Theology of Cooperation,” to help theologians address evolutionary biology in their own work.

Dr. Nowak sometimes finds his scientific colleagues astonished when he defends religion. But he believes the astonishment comes from a misunderstanding of the roles of science and religion. “Like mathematics, many theological statements do not need scientific confirmation. Once you have the proof of Fermat’s Last Theorem, it’s not like we have to wait for the scientists to tell us if it’s right. This is it.”

Published: July 30, 2007

Odile Crick, an artist whose original sketch of the double helix of DNA, the genetic blueprint for life, became a symbol of modern molecular biology, died July 5 at her home in La Jolla, Calif. She was 86.

Mrs. Crick’s illustration of DNA’s double helix structure first appeared in the journal Nature in 1953.

The cause was cancer, said her stepson, Michael Crick, who added that the family had not announced Mrs. Crick’s death until last week.

The structure of DNA, or deoxyribonucleic acid, was discovered in 1953 by Mrs. Crick’s husband, Francis H. C. Crick, and James D. Watson. The breakthrough laid the foundation for molecular biology by making it clear that the DNA molecule is the medium in which genetic information is stored and passed from generation to generation.

The double helix consists of two chains of DNA spiraling in opposite directions, each made up of four types of chemical units that are linked together. The sequence of those chemical units is the basis for genes, which signal the synthesis of the essential components of every living cell. Dr. Crick, who died in 2004, and Dr. Watson were awarded the Nobel Prize for medicine in 1962.

In a brief interview on Thursday, Dr. Watson recalled why he and his colleague had asked Mrs. Crick to make the original black-and-white sketch — based on their mathematical analysis of a pattern of spots revealed by a process called X-ray crystallography — for the April 1953 issue of the journal Nature.

“Francis can’t draw, and I can’t draw, and we need something done quick,” Dr. Watson said. The drawing “showed the essence of the structure,” he said. “And it became historically important, reproduced over and over.”

Dr. Watson pointed out that his sister, Betty, had been recruited to type the historic research paper.

Terrence Sejnowski, the Francis Crick professor at the Salk Institute for Biological Studies, in La Jolla, said Mrs. Crick’s sketch “has iconic importance beyond its scientific value; it came to symbolize man’s discovery of the biological basis of life and evolution.”

While the original work accurately portrayed the spacing of the helixes and the locations of the nucleic acids, Dr. Sejnowski said, it did not include the locations of all the atoms. Still, he said, “all the original textbooks, all the original scientific articles referenced that sketch as the starting point for the variations” that have followed.

Odile Speed was born in King’s Lynn, Norfolk, England, on Aug. 11, 1920, the daughter of Alfred and Marie-Thérèse Speed. Her mother was French; her father, a jeweler, was British. Mrs. Crick was an art student in Vienna when the Nazis occupied Austria in 1938. She returned to Britain, joined the Women’s Royal Naval Service, and because of her fluency in German became a code-breaker and translator of secret documents.

She and Dr. Crick married in 1949; he had previously been married. In addition to her stepson, of Bellevue, Wash., Mrs. Crick is survived by a brother, Philippe, and two daughters, Gabrielle Crick and Jacqueline Nichols, all of whom live in Britain; two grandchildren; and four step-grandchildren.

In the late 1970s, when Dr. Crick was offered a professorship at the Salk Institute, the family moved to La Jolla. Over the years, several exhibitions have been held of Mrs. Crick’s paintings, which her stepson said have been described as Rubenesque nudes.

Michael Crick said his stepmother “never wanted to make a big fuss” about her famous double-helix drawing. In fact, on the day in 1953 when her husband and Dr. Watson realized that they had finally made a major scientific breakthrough, she sort of shrugged.

In his memoir, “What Mad Pursuit,” Dr. Crick recalled going home that day and telling his wife of the historic discovery. Only years later, he wrote, had Mrs. Crick told him that she did not believe a word of it, saying, “You were always coming home and saying things like that, so naturally I thought nothing of it.”

The amazing part comes next. Berlin, in the brown T-shirt, comes back into the room and tries to open the lock on the first box. Leo sees Berlin struggling, and it decides to help by pressing a lever that will deliver to Berlin the item he’s looking for. Leo presses the lever for the chips. It knows that there are cookies in the box that Berlin is trying to open, but it also knows — and this is the part that struck me as so amazing — that Berlin is trying to open the box because he wants chips. It knows that Berlin has a false belief about what is in the first box, and it also knows what Berlin wants. If Leo had indeed passed this important developmental milestone, I wondered, could it also be capable of all sorts of other emotional tasks: empathy, collaboration, social bonding, deception?

Unfortunately, Leo was turned off the day I arrived, inertly presiding over one corner of the lab like a fuzzy Buddha. Berlin and Gray and their colleague, Andrea Thomaz, a postdoctoral researcher, said that they would be happy to turn on the robot for me but that the process would take time and that I would have to come back the next morning. They also wanted to know what it was in particular that I wanted to see Leo do because, it turned out, the robot could go through its paces only when the right computer program was geared up. This was my first clue that Leo maybe wasn’t going to turn out to be quite as clever as I had thought.

When I came back the next day, Berlin and Gray were ready to go through the false-belief routine with Leo. But it wasn’t what I expected. In person, I could see the same routine I had watched on the video. But I could also peek behind the metaphoric curtain and see something that the video camera hadn’t revealed: the computer monitor that showed what Leo’s cameras were actually seeing and another monitor that showed the architecture of Leo’s brain. I could see that this wasn’t a literal demonstration of a human “theory of mind” at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects.

It turned out Leo wasn’t seeing the young men’s faces or bodies; it was seeing something else. Gray and Berlin were each wearing a headband and a glove, which I hadn’t noticed in the video, and the robot’s optical motion tracking system could see nothing but the unique arrangements of reflective tape on their accessories. What the robot saw were bunches of dots. Dots in one geometric arrangement meant Person A; in a different arrangement, they meant Person B. There was a different arrangement of tape on the two different snacks, too, and also on the two different locks for the boxes. On a big monitor alongside Leo was an image of what was going on inside its “brain”: one set of dots represented Leo’s brain; another set of dots represented Berlin’s brain; a third set of dots represented Gray’s. The robot brain was programmed to keep track of it all.

Leo did not learn about false beliefs in the same way a child did. Robot learning, I realized, can be defined as making new versions of a robot’s original instructions, collecting and sorting data in a creative way. So the learning taking place here was not Leo’s ability to keep track of which student believed what, since that skill had been programmed into the robot. The learning taking place was Leo’s ability to make inferences about Gray’s and Berlin’s actions and intentions. Seeing that Berlin’s hand was near the lock on Box 1, Leo had to search through its internal set of task models, which had been written into its computer program, and figure out what it meant for a hand to be moving near a lock and not near, say, a glass of water. Then it had to go back to that set of task models to decide why Berlin might have been trying to open the box — that is, what his ultimate goal was. Finally, it had to convert its drive to be helpful, another bit of information written into its computer program, into behavior. Leo had to learn that by pressing a particular lever, it could give Berlin the chips he was looking for. Leo’s robot learning consisted of integrating the group of simultaneous computer programs with which it had begun.

Leo’s behavior might not have been an act of real curiosity or empathy, but it was an impressive feat nonetheless. Still, I felt a little twinge of disappointment, and for that I blame Hollywood. I’ve been exposed to robot hype for years, from the TV of my childhood — Rosie the robot maid on “The Jetsons,” that weird talking garbage-can robot on “Lost in Space” — to the more contemporary robots-gone-wild of films like “Blade Runner” and “I, Robot.” Despite my basic cold, hard rationalism, I was prepared to be bowled over by a robot that was adorable, autonomous and smart. What I saw in Leo was no small accomplishment in terms of artificial intelligence and the modeling of human cognition, but it was just not quite the accomplishment I had been expecting. I had been expecting something closer to “real.”

Why We Might Want to Hug a Desk Lamp

I had been seduced by Leo’s big brown eyes, just like almost everyone else who encounters the robot, right down to the students who work on its innards. “There we all are, soldering Leonardo’s motors, aware of how it looks from behind, aware that its brain is just a bunch of wires,” Guy Hoffman, a graduate student, told me. Yet as soon as they get in front of it, he said, the students see its eyes move, see its head turn, see the programmed chest motion that looks so much like breathing, and they start talking about Leo as a living thing.

People do the same thing with a robotic desk lamp that Hoffman has designed to move in relation to a user’s motions, casting light wherever it senses the user might need it. It’s just a lamp with a bulky motor-driven neck; it looks nothing like a living creature. But, he said, “as soon as it moves on its own and faces you, you say: ‘Look, it’s trying to help me.’ ‘Why is it doing that?’ ‘What does it want from me?’ ”

When something is self-propelled and seems to engage in goal-directed behavior, we are compelled to interpret those actions in social terms, according to Breazeal. That social tendency won’t turn off when we interact with robots. But instead of fighting it, she said, “we should embrace it so we can design robots in a way that makes sense, so we can integrate robots into our lives.”

The brain activity of people who interacted with Cog and Kismet, and with their successors like Mertz, is probably much the same as the brain activity of someone interacting with a real person. Neuroscientists recently found a collection of brain cells called mirror neurons, which become activated in two different contexts: when someone performs an activity and when someone watches another person perform the same activity. Mirror-neuron activation is thought to be the root of such basic human drives as imitation, learning and empathy. Now it seems that mirror neurons fire not only when watching a person but also when watching a humanoid robot. Scientists at the University of California, San Diego, reported last year that brain scans of people looking at videos of a robotic hand grasping things showed activity in the mirror neurons. The work is preliminary, but it suggests something that people in the M.I.T. robotics labs have already seen: when these machines move, when they direct their gaze at you or lean in your direction, they feel like real creatures.

Would a Robot Make a Better Boyfriend?

Cog, Kismet and Mertz might feel real, but they look specifically and emphatically robotic. Their gears and motors show; they have an appealing retro-techno look, evoking old-fashioned images of the future, not too far from the Elektro robot of the 1939 World’s Fair, which looked a little like the Tin Man of “The Wizard of Oz.” This design was in part a reflection of a certain kind of aesthetic sensibility and in part a deliberate decision to avoid making robots that look too much like us.

Another robot-looking robot is Domo, whose stylized shape somehow evokes the Chrysler Building almost as much as it does a human. It can respond to some verbal commands, like “Here, Domo,” and can close its hand around whatever is placed in its palm, the way a baby does. Shaking hands with Domo feels almost like shaking hands with something alive. The robot’s designer, Aaron Edsinger, has programmed it to do some domestic tricks. It can grab a box of crackers placed in its hand and put it on a shelf and then grab a bag of coffee beans — with a different grip, based on sensors in its mechanical hand — and put it, too, on a shelf. Edsinger calls this “helping with chores.” Domo tracks objects with its big blue eyes and responds to verbal instructions in a high-pitched artificial voice, repeating the words it hears and occasionally adding an obliging “O.K.”

Domo’s looks are just barely humanoid, but that probably works to its advantage. Scientists believe that the more a robot looks like a person, the more favorably we tend to view it, but only up to a point. After that, our response slips into what the Japanese roboticist Masahiro Mori has called the “uncanny valley.” We start expecting too much of the robots because they so closely resemble real people, and when they fail to deliver, we recoil in something like disgust.

If a robot had features that made it seem, say, 50 percent human, 50 percent machine, according to this view, we would be willing to fill in the blanks and presume a certain kind of nearly human status. That is why robots like Domo and Mertz are interpreted by our brains as creaturelike. But if a robot has features that make it appear 99 percent human, the uncanny-valley theory holds that our brains get stuck on that missing 1 percent: the eyes that gaze but have no spark, the arms that move with just a little too much stiffness. This response might be akin to an adaptive revulsion at the sight of corpses. A too-human robot looks distressingly like a corpse that moves.

This zombie effect is one aspect of a new discipline that Breazeal is trying to create called human-robot interaction. Last March, Breazeal and Alan Schultz of the Naval Research Laboratory convened the field’s second annual conference in Arlington, Va., with presentations as diverse as describing how people react to instructions to “kill” a humanoid robot and a film festival featuring videos of human-robot interaction bloopers.

To some observers, the real challenge is not how to make human-robot interaction smoother and more natural but how to keep it from overshadowing, and eventually seeming superior to, a different, messier, more complicated, more flawed kind of interaction — the one between one human and another. Sherry Turkle, a professor in the Program in Science, Technology and Society at M.I.T., worries that sociable robots might be easier to deal with than people are and that one day we might actually prefer our relationships with our machines. A female graduate student once approached her after a lecture, Turkle said, and announced that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what the student called “caring behavior.” “I need the feeling of civility in the house,” she told Turkle. “If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me.” What she was looking for, the student said, was a “no-risk relationship” that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than an unresponsive boyfriend.

The encounter horrified Turkle, who thought it revealed how dangerous, and how seductive, sociable robots could be. “They push our Darwinian buttons,” she told me. Sociable robots are programmed to exhibit the kind of behavior we have come to associate with sentience and empathy, she said, which leads us to think of them as creatures with intentions, emotions and autonomy: “You see a robot like that as a creature; you feel a desire to nurture it. And with this desire comes the fantasy of reciprocation. You begin to care for these creatures and to want the creatures to care about you.”

If Lijin Aryananda, Brooks’s former student, had ever wanted Mertz to “care” about her, she certainly doesn’t anymore. On the day she introduced me to Mertz, Aryananda was heading back to a postdoctoral research position at the University of Zurich. Her new job is in the Artificial Intelligence Lab, and she will still be working with robots, but Aryananda said she wants to get as far away as possible from humanoids and from the study of how humans and robots interact.

“Anyone who tells you that in human-robot interactions the robot is doing anything — well, he is just kidding himself,” she told me, grumpy because Mertz was misbehaving. “Whatever there is in human-robot interaction is there because the human puts it there.”

Nagging, a Killer App

The building and testing of sociable robots remains a research-based enterprise, and when the robots do make their way out of the laboratory, it is usually as part of somebody’s experiment. Breazeal is now overseeing two such projects. One is the work of Cory Kidd, a graduate student who designed and built 17 humanoid robots to serve as weight-loss coaches. The robot coach, a child-size head and torso holding a small touch screen, is called Autom. It is able, using basic artificial-voice software, to speak approximately 1,000 phrases, things like “It’s great that you’re doing well with your exercise” or “You should congratulate yourself on meeting your calorie goals today.” It is programmed to get a little more informal as time goes on: “Hello, I hope that we can work together” will eventually shift to “Hi, it’s good to see you again.” It is also programmed to refer to things that happened on other days, with statements like “It looks like you’ve had a little more to eat than usual recently.”

Kidd is recruiting 15 volunteers from around Boston to take Autom into their homes for six weeks. They will be told to interact with the robot at least once a day, recording food intake and exercise on its touch screen. The plan is to compare their experiences with those of two other groups of 15 dieters each. One group will interact with the same weight-loss coaching software through a touch screen only; the other will record daily food intake and exercise the old-fashioned way, with paper and pen. Kidd said that the study is too short-term to use weight loss as a measure of whether the robot is a useful dieting aid. But at this point, his research questions are more subjective anyway: Do participants feel more connected to the robot than they do to the touch screen? And do they think of that robot on the kitchen counter as an ally or a pest?

Autom: your next demanding weight-loss coach?

Breazeal’s second project is more ambitious. In collaboration with Rod Grupen, a roboticist at the University of Massachusetts in Amherst, she is designing and building four toddler-size robots. Then she will put them into action at the Boston Science Museum for two weeks in June 2009. The robots, which will cost several hundred thousand dollars each, will roll around in what she calls “a kind of robot Romper Room” and interact with a stream of museum visitors. The goal is to see whether the social competencies programmed into these robots are enough to make humans comfortable interacting with them and whether people will be able to help the robots learn to do simple tasks like stacking blocks.

The bare bones of the toddler robots already exist, in the form of a robot designed in Grupen’s lab called uBot-5. A few of these uBots are now being developed for use in assisted-living centers in research designed to see how the robots interact with the frail elderly. Each uBot-5 is about three feet tall, with a big head, very long arms (long enough to touch the ground, should the arms be needed for balance) and two oversize wheels. It has big eyes, rubber balls at the ends of its arms and a video screen for a face. (Breazeal’s version will have sleek torsos, expressive faces and realistic hands.) In one slide that Grupen uses in his PowerPoint presentations, the uBot-5 robot is holding a stethoscope to the chest of a woman lying on the ground after a simulated fall. The uBot is designed to connect by video hookup to a health care practitioner, but still, the image of a robot providing even this level of emergency medical care is, to say the least, disconcerting.

Does It Know It’s a Robot?

More disconcerting still is the image of a robot looking at itself in the mirror and waving hello — a robot with a primitive version of self-awareness. A first step in this direction occurred in September 2004 with reports from Yale about Nico, a humanoid robot. Nico, its designers announced, was able to recognize itself in a mirror. One of its creators, Brian Scassellati, earned his doctorate in 2001 at M.I.T., where he worked on Cog and Kismet — to which Nico bears a family resemblance. Nico has visible workings, a head, arms and torso made of steel and a graceful tilt to its shoulders and neck. Like the M.I.T. robots, Nico has no legs, because Scassellati, now an associate professor of computer science at Yale, wanted to concentrate on what it could do with its upper body and, in particular, the cameras in its eyes.

Here is how Nico learned to recognize itself. The robot had a camera behind its eye, which was pointed toward a mirror. When a reflection came back, Nico was programmed to assign the image a score based on whether it was most likely to be “self,” “another” or “neither.” Nico was also programmed to move its arm, which sent back information to the computer about whether the arm was moving. If the arm was moving and the reflection in the mirror was also moving, the program assigned the image a high probability of being “self.” If the reflection moved but Nico’s arm was not moving, the image was assigned a high probability of being “another.” If the image did not move at all, it was given a high probability of being “neither.”
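The decision rule described above can be sketched in a few lines. This is an illustrative reconstruction, not Nico's actual code; the function name and the specific probability values are invented for the example.

```python
def score_reflection(arm_moving: bool, image_moving: bool) -> dict:
    """Assign probabilities over 'self' / 'another' / 'neither' by comparing
    the robot's own motor feedback with motion observed in the mirror.
    The 0.9 / 0.05 split is a placeholder, not Nico's actual tuning."""
    if image_moving and arm_moving:
        # Motion in the mirror coincides with our own motor command.
        return {"self": 0.9, "another": 0.05, "neither": 0.05}
    if image_moving:
        # Something is moving out there, but it isn't us.
        return {"self": 0.05, "another": 0.9, "neither": 0.05}
    # A static image is most likely not a moving agent at all.
    return {"self": 0.05, "another": 0.05, "neither": 0.9}

# The robot waves its arm and sees the reflection wave back:
scores = score_reflection(arm_moving=True, image_moving=True)
print(max(scores, key=scores.get))  # -> self
```

The key design point is that nothing visual distinguishes "self" from "another"; the label comes entirely from correlating what the camera sees with what the motor sensors report.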

Nico spent some time moving its arm in front of the mirror, so it could learn when its motor sensors were detecting arm movement and what that looked like through its camera. It learned to give that combination a high score for “self.” Then Nico and Kevin Gold, a graduate student, stood near each other, looking into the mirror, as the robot and the human took turns moving their arms. In 20 runs of the experiment, Nico correctly identified its own moving arm as “self” and Gold’s purposeful flailing as “another.”

One way to interpret this might be to conclude that Nico has a kind of self-awareness, at least when in motion. But that would be quite a leap. Robot consciousness is a tricky thing, according to Daniel Dennett, a Tufts philosopher and author of “Consciousness Explained,” who was part of a team of experts that Rodney Brooks assembled in the early 1990s to consult on the Cog project. In a 1994 article in The Philosophical Transactions of the Royal Society of London, Dennett posed questions about whether it would ever be possible to build a conscious robot. His conclusion: “Unlikely,” at least as long as we are talking about a robot that is “conscious in just the way we human beings are.” But Dennett was willing to credit Cog with one piece of consciousness: the ability to be aware of its own internal states. Indeed, Dennett believed that it was theoretically possible for Cog, or some other intelligent humanoid robot in the future, to be a better judge of its own internal states than the humans who built it. The robot, not the designer, might some day be “a source of knowledge about what it is doing and feeling and why.”

But maybe higher-order consciousness is not even the point for a robot, according to Sidney Perkowitz, a physicist at Emory. “For many applications,” he wrote in his 2004 book, “Digital People: From Bionic Humans to Androids,” “it is enough that the being seems alive or seems human, and irrelevant whether it feels so.”

In humans, Perkowitz wrote, an emotional event triggers the autonomic nervous system, which sparks involuntary physiological reactions like faster heartbeat, increased blood flow to the brain and the release of certain hormones. “Kismet’s complex programming includes something roughly equivalent,” he wrote, “a quantity that specifies its level of arousal, depending on the stimulus it has been receiving. If Kismet itself reads this arousal tag, the robot not only is aroused, it knows it is aroused, and it can use this information to plan its future behavior.” In this way, according to Perkowitz, a robot might exhibit the first glimmers of consciousness, “namely, the reflexive ability of a mind to examine itself over its own shoulder.”
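The "arousal tag" Perkowitz describes can be sketched as a single scalar that stimuli push up and the robot reads back to choose behavior. This is a toy model under stated assumptions: the class name, thresholds, and behavior labels are invented for illustration and do not come from Kismet's actual architecture.

```python
class ArousalModel:
    """Toy sketch of a scalar arousal level: stimuli nudge an internal
    quantity, and the robot reads its own level to plan its next behavior.
    Thresholds (0.2, 0.7) are illustrative placeholders."""

    def __init__(self):
        self.level = 0.0  # 0.0 = fully calm, 1.0 = maximally aroused

    def receive_stimulus(self, intensity: float) -> None:
        # Accumulate stimulation, clamped to the [0, 1] range.
        self.level = min(1.0, max(0.0, self.level + intensity))

    def plan_behavior(self) -> str:
        # The robot "examines itself over its own shoulder": it reads
        # its arousal tag and picks a behavior accordingly.
        if self.level > 0.7:
            return "withdraw"          # over-stimulated: disengage
        if self.level < 0.2:
            return "seek_stimulation"  # under-stimulated: look for interaction
        return "engage"                # comfortable middle range
```

The reflexive step is the point: the robot is not merely aroused, it consults its own arousal value when deciding what to do next.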

Robot consciousness, it would seem, is related to two areas: robot learning (the ability to think, to reason, to create, to generalize, to improvise) and robot emotion (the ability to feel). Robot learning has already occurred, in baby steps, in robots like Cog and Leonardo, which are able to learn new skills that go beyond their initial capabilities. But what of emotion? Emotion is something we are inclined to think of as quintessentially human, something we only grudgingly admit might be taking place in nonhuman animals like dogs and dolphins. Some believe that emotion is at least theoretically possible for robots too. Rodney Brooks goes so far as to say that robot emotions may already have occurred — that Cog and Kismet not only displayed emotions but, in one way of looking at it, actually experienced them.

“We’re all machines,” he told me when we talked in his office at M.I.T. “Robots are made of different sorts of components than we are — we are made of biomaterials; they are silicon and steel — but in principle, even human emotions are mechanistic.” A robot’s level of a feeling like sadness could be set as a number in computer code, he said. But isn’t a human’s level of sadness basically a number, too, just a measure of the amounts of various neurochemicals circulating in the brain? Why should a robot’s numbers be any less authentic than a human’s?

“If the mechanistic explanation is right, then one can in principle make a machine which is living,” he said with a grin. That explains one of his longtime ultimate goals: to create a robot that you feel bad about switching off.

The permeable boundary between humanoid robots and humans has especially captivated Kathleen Richardson, a graduate student in anthropology at Cambridge University in England. “I wanted to study what it means to be human, and robots are a great way to do that,” she said, explaining the 18 months she spent in Brooks’s Humanoid Robotics lab in 2003 and 2004, doing fieldwork for her doctorate. “Robots are kind of ambiguous, aren’t they? They’re kind of like us but not like us, and we’re always a bit uncertain about why.”

To her surprise, Richardson found herself just as fascinated by the roboticists at M.I.T. as she was by the robots. She observed a kinship between human and humanoid, an odd synchronization of abilities and disabilities. She tried not to make too much of it. “I kept thinking it was merely anecdotal,” she said, but the connection kept recurring. Just as a portrait might inadvertently give away the painter’s own weaknesses or preoccupations, humanoid robots seemed to reflect something unintended about their designers. A shy designer might make a robot that’s particularly bashful; a designer with physical ailments might focus on the function — touch, vision, speech, ambulation — that gives the robot builder the greatest trouble.

“A lot of the inspiration for the robots seems to come from some kind of deficiency in being human,” Richardson, back in England and finishing her dissertation, told me by telephone. “If we just looked at a machine and said we want the machine to help us understand about being human, I think this shows that the model of being human we carry with us is embedded in aspects of our own deficiencies and limitations.” It’s almost as if the scientists are building their robots as a way of completing themselves.

“I want to understand what it is that makes living things living,” Rodney Brooks told me. At their core, robots are not so very different from living things. “It’s all mechanistic,” Brooks said. “Humans are made up of biomolecules that interact according to the laws of physics and chemistry. We like to think we’re in control, but we’re not.” We are all, human and humanoid alike, whether made of flesh or of metal, basically just sociable machines.