By CLAUDIA DREIFUS
Published: July 31, 2007

In June 2002, Heidi Cullen, a researcher at the National Center for Atmospheric Research in Boulder, Colo., received a telephone call from an executive at the Weather Channel. Would she audition for a program on climate and global warming that producers at the Atlanta-based cable television network were contemplating?

Dr. Cullen, a climatologist with a doctorate from the Lamont-Doherty Earth Observatory at Columbia University, was dubious. A specialist in droughts, she had no broadcast experience. Moreover, she rarely watched television. She had never even seen the Weather Channel.

“My interests were in trying to find new ways to make climate forecasts practical for engineers and farmers,” Dr. Cullen, 37, said on a recent visit to New York. She had, she said, just gotten a grant from the National Science Foundation, “and I didn’t want to leave what I was doing.”

But the lure of a national audience won out. After a successful tryout, Dr. Cullen packed her clothes, furniture and dog and moved to Atlanta. Today, she is the only Ph.D. climatologist in the country with her own weekly show, “Forecast Earth,” a half-hour video magazine focused on climate and the environment.

Q: What were you studying when you got that call from the Weather Channel?

A: I was trying to understand the large-scale mechanisms that had caused a drought in Afghanistan from 1999 to 2001. I was also working with engineers in Brazil and Paraguay to apply climate forecasts to optimize water resource management at Itaipu Binacional, the largest operational hydropower facility in the world.

I hesitated when I got that call. Television was a world I couldn’t imagine. No one I knew had ever done anything like that.

Q: How did the Weather Channel executives know of you?

A: I think they’d been asking around. They were hunting for a Ph.D. scientist who could explain the science behind climate news. As it happened, my doctoral thesis has a lot of relevance to current affairs. Part of it involved looking at how to use climate information to manage water resources in the Middle East. It’s often said that the next war in the Middle East will be fought over water.

For my thesis, I studied droughts and the collapse of the first Mesopotamian empire — the Akkadian civilization. I was able to show that a megadrought around 2200 B.C. played a role in its demise. I found the proof by examining the sediment cores of ancient mud. When one looked at the mud from the period around the Akkadian collapse, one found a huge spike in the mineral dolomite. That substance is an indicator of drought.

Q: What’s the point of knowing this?

A: Because until recently, historians, anthropologists and archaeologists were reluctant to say that civilizations could collapse because of nature. The prevailing theories were that civilizations collapsed because of political, military or medical reasons — plagues. Climate was often factored out.

And yet, indifference to the power of nature is civilization’s Achilles’ heel. I think the events around Hurricane Katrina reminded us that Mother Nature is something we haven’t yet conquered.

Q: Did you have to take lessons in broadcasting techniques?

A: Not at first. I’ve since done some voice training and have become obsessed with the craft of television. It’s important, for instance, to be very still when you’re on camera. My coach says that if you move around wildly, it erodes people’s faith in you. It’s been said to me that 9 times out of 10, the visual trumps what you say on television. I was floored. I had grown up among the cops and firemen of New York’s Staten Island, a world where your word is everything. So when I heard that, it was like, ‘Oh my God, why did I consciously choose to get into this?’

Q: O.K., why did you?

A: Because they were giving me a chance to cover things people need to know more about: global warming, El Niño, energy policy.

Q: It has to be hard to put together a weekly magazine show on one subject. Where do you find your stories?

A: I’ve become a media junkie. I read far more widely now than when I was a researcher. Also, I watch a lot of TV, which means all the news programs, “Frontline,” even ESPN, which I watch to learn how to write punchy leads. I also listen to NPR, check out Greenwire and troll the scientific journals like Science, Nature and Geophysical Research Letters.

My problem is that I think everything climate-related is interesting. In my four years on the job, I’ve learned that just because I think something is interesting doesn’t mean it’ll make for good television. It’s often a challenge to make climate issues visual. When I first began, all we had was a little stock video of droughts in the Sahara, with animal carcasses, and glaciers falling into the sea. We ran them over and over again. My father, who’s a retired New York City policeman, kept phoning me: “Heidi, are those same glaciers falling again?”

Q: Your coverage of global warming has been controversial. Are you surprised?

A: In a way, yes. To me, global warming isn’t a political issue, it’s a scientific one. But a lot of people out there think you’re being an advocate when you talk climate science.

Last December, I wrote a blog about how reticent some broadcast meteorologists are about reporting on climate change. Meteorologists — they are the forecasters — have training in atmospheric science. Many are certified by the American Meteorological Society. I suggested there’s a disconnect when they use their A.M.S. seal for on-camera credibility and refuse to give viewers accurate information on climate. The society has a very clear statement saying that global warming is largely due to the burning of fossil fuels.

The next thing I knew, I was being denounced on the Web sites of Senator James Inhofe, Matt Drudge and Rush Limbaugh. The Weather Channel’s own Web site got about 4,000 e-mails in one day, mostly angry. Some went, ‘Listen here, weather girl, just give me my five-day forecast and shut up.’

Q: Rush Limbaugh accused you of Stalinism. Did you suggest that meteorologists who doubt global warming should be fired?

A: I didn’t exactly say that. I was talking about the American Meteorological Society’s seal of approval. I was saying the A.M.S. should test applicants on climate change as part of their certification process. They test on other aspects of weather science.

A lot of viewers want to know about climate change. They are experiencing events they perceive as unusual and they want to know if there’s a connection to global warming. Certainly when Katrina hit, they wanted to know if it was global warming or not. Most Americans get their daily dose of science through their televised weather report. Given that fact, I think it’s the responsibility of broadcast meteorologists to provide viewers with scientific answers.

Q: What do your ex-colleagues from academia think of your new career?

A: Oh, they’re so funny. Some of them claim that they haven’t seen me on television because they don’t own one. But when I was being denounced by Matt Drudge, they were all, ‘Hey, saw you on Drudge!’

Actually, a lot of my friends are relieved that there’s at least one scientist out there doing this.


By CARL ZIMMER
Published: July 31, 2007

When Martin Nowak was in high school, his parents thought he would be a nice boy and become a doctor. But when he left for the University of Vienna, he abandoned medicine for something called biochemistry. As far as his parents could tell, it had something to do with yeast and fermenting. They became a little worried. When their son entered graduate school, they became even more worried. He announced that he was now studying games.

In the end, Dr. Nowak turned out all right. He is now the director of the Program for Evolutionary Dynamics at Harvard. The games were actually versatile mathematical models that Dr. Nowak could use to make important discoveries in fields as varied as economics and cancer biology.

“Martin has a passion for taking informal ideas that people like me find theoretically important and framing them as mathematical models,” said Steven Pinker, a Harvard linguist who is collaborating with Dr. Nowak to study the evolution of language. “He allows our intuitions about what leads to what to be put to a test.”

On the surface, Dr. Nowak’s many projects may seem randomly scattered across the sciences. But there is an underlying theme to his work. He wants to understand one of the most puzzling yet fundamental features of life: cooperation.

When biologists speak of cooperation, they speak more broadly than the rest of us. Cooperation is what happens when someone or something gets a benefit because someone or something else pays a cost. The benefit can take many forms, like money or reproductive success. A friend takes off work to pick you up from the hospital. A sterile worker bee tends to eggs in a hive. Even the cells in the human body cooperate. Rather than reproducing as fast as it can, each cell respects the needs of the body, helping to form the heart, the lungs or other vital organs. Even the genes in a genome cooperate, to bring an organism to life.

In recent papers, Dr. Nowak has argued that cooperation is one of the three basic principles of evolution. The other two are mutation and selection. On their own, mutation and selection can transform a species, giving rise to new traits like limbs and eyes. But cooperation is essential for life to evolve to a new level of organization. Single-celled protozoa had to cooperate to give rise to the first multicellular animals. Humans had to cooperate for complex societies to emerge.

“We see this principle everywhere in evolution where interesting things are happening,” Dr. Nowak said.

While cooperation may be central to evolution, however, it poses questions that are not easy to answer. How can competing individuals start to cooperate for the greater good? And how do they continue to cooperate in the face of exploitation? To answer these questions, Dr. Nowak plays games.

His games are the intellectual descendants of a puzzle known as the Prisoner’s Dilemma. Imagine two prisoners are separately offered the same deal: if one of them testifies and the other doesn’t talk, the talker will go free and the holdout will go to jail for 10 years. If both refuse to talk, the prosecutor will only be able to put them in jail for six months. If each prisoner rats out the other, they will both get five-year sentences. Not knowing what the other prisoner will do, how should each one act?
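In code, the dilemma’s cruel arithmetic takes only a few lines. Here is a minimal sketch of the payoffs just described, scored as years in jail; “cooperate” (stay silent) and “defect” (testify) are the game theorist’s labels, and the dictionary layout and helper function are illustrative conventions, not anyone’s research code:

```python
# The Prisoner's Dilemma payoffs described above, scored as years in
# jail (lower is better). "Cooperate" means staying silent; "defect"
# means testifying. An illustrative sketch only.
SENTENCES = {  # (my choice, other's choice) -> my years in jail
    ("defect", "cooperate"): 0,       # I testify, the other stays silent: I go free
    ("cooperate", "defect"): 10,      # I stay silent, the other testifies: 10 years
    ("cooperate", "cooperate"): 0.5,  # both stay silent: six months each
    ("defect", "defect"): 5,          # both testify: five years each
}

def years(mine, theirs):
    return SENTENCES[(mine, theirs)]

# Whatever the other prisoner does, defecting always shortens my sentence...
for theirs in ("cooperate", "defect"):
    assert years("defect", theirs) < years("cooperate", theirs)
# ...yet mutual cooperation (six months each) beats mutual defection (five years).
```

Each prisoner does better by defecting no matter what the other does, yet both would do better still if both stayed silent. That tension is the dilemma.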

The way the Prisoner’s Dilemma pits cooperation against defection distills an important feature of evolution. In any encounter between two members of the same species, each one may cooperate or defect. Certain species of bacteria, for example, spray out enzymes that break down food, which all the bacteria can then suck up. It costs energy to make these enzymes. If one of the microbes stops cooperating and does not make the enzymes, it can still enjoy the meal. It can gain a potential reproductive edge over bacteria that cooperate.

The Prisoner’s Dilemma may be abstract, but that’s why Dr. Nowak likes it. It helps him understand fundamental rules of evolution, just as idealized, frictionless objects helped Isaac Newton discover that objects in motion tend to stay in motion.

“If you were obsessed with friction, you would have never discovered this law,” Dr. Nowak said. “In the same sense, I try to get rid of what is inessential to find the essential. Truth is simple.”

Dr. Nowak found his first clues to the origin of cooperation in graduate school, collaborating with his Ph.D. adviser, Karl Sigmund. They built a version of the Prisoner’s Dilemma that captured more of the essence of how organisms behave and evolve.

In their game, an entire population of players enters a round-robin competition. The players are paired up randomly, and each one chooses whether to cooperate or defect. To make a choice, a player can recall its past experiences with other individual players. Some players might use a strategy in which they have a 90 percent chance of cooperating with a player with whom they have cooperated in the past.

The players get rewarded based on their choices. The most successful players get to reproduce. Each new player has a small chance of randomly mutating its strategy. If that strategy turns out to be more successful, it can dominate the population, wiping out its ancestors.

Dr. Nowak and Dr. Sigmund observed this tournament through millions of rounds. Often the winners used a strategy that Dr. Nowak called “win-stay, lose-shift.” If they did well in the previous round, they did the same thing again. If they did not do so well, they shifted. Under some conditions, this strategy caused cooperation to become common among the players, despite the short-term payoff of defecting.
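The rule itself is simple enough to sketch. In the toy version below, the payoff numbers and the “good enough” threshold are invented for illustration; they are not the parameters Dr. Nowak and Dr. Sigmund used:

```python
import random

random.seed(0)  # reproducible runs

# Payoffs to "me" for (my move, other's move); the numbers are invented.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
GOOD = 3  # a round counts as a "win" if my payoff reaches this (an assumption)

def wsls(my_last, my_last_payoff):
    """Win-stay, lose-shift: repeat the last move after a good round, switch after a bad one."""
    if my_last is None:
        return "C"  # open by cooperating
    if my_last_payoff >= GOOD:
        return my_last                      # win: stay
    return "D" if my_last == "C" else "C"   # lose: shift

def average_score(strategy_b, rounds=1000, noise=0.01):
    """Iterated play: player A uses win-stay lose-shift against strategy_b."""
    move_a = move_b = None
    pay_a = pay_b = 0
    total = 0
    for _ in range(rounds):
        next_a = wsls(move_a, pay_a)
        next_b = strategy_b(move_b, pay_b)
        if random.random() < noise:  # occasional mistakes, as in a noisy world
            next_a = "D" if next_a == "C" else "C"
        pay_a, pay_b = PAYOFF[(next_a, next_b)], PAYOFF[(next_b, next_a)]
        move_a, move_b = next_a, next_b
        total += pay_a
    return total / rounds

print(average_score(wsls))               # near 3: cooperation recovers after mistakes
print(average_score(lambda m, p: "D"))   # well under 3: endless shifting, poor score
```

Played against itself, with occasional mistakes thrown in, the strategy keeps falling back into mutual cooperation; against a relentless defector, it keeps shifting and scores poorly.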

In order to study this new version of the Prisoner’s Dilemma, Dr. Nowak had to develop new mathematical tools. It turned out that these tools also proved useful for studying cancer. Cancer and the Prisoner’s Dilemma may seem like apples and oranges, but Dr. Nowak sees an intimate connection between the two. “Cancer is a breakdown of cooperation,” he said.

Mutations sometimes arise in cells that cause them to replicate quickly, ignoring signals to stop. Some of their descendants acquire new mutations, allowing them to become even more successful as cancer cells. They evolve, in other words, into more successful defectors. “Cancer is an evolution you don’t want,” Dr. Nowak said.

To study cancer, however, Dr. Nowak had to give his models some structure. In the Prisoner’s Dilemma, the players usually just bump into each other randomly. In the human body, on the other hand, cells only interact with cells in their neighborhood.

A striking example of these neighborhoods can be found in the intestines, where the lining is organized into millions of tiny pockets. A single stem cell at the bottom of a pocket divides, and its daughter cells are pushed up the pocket walls. The cells that reach the top get stripped away.

Dr. Nowak adapted a branch of mathematics known as graph theory, which makes it possible to study networks, to analyze how cancer arises in these local neighborhoods. “Our tissue is actually organized to delay the onset of cancer,” he said.

Pockets of intestinal cells, for example, can only hold a few cell generations. That lowers the chances that any one will turn cancerous. All the cells in each pocket are descended from a single stem cell, so that there’s no competition between lineages to take over the pocket.

As Dr. Nowak developed this neighborhood model, he realized it would help him study human cooperation. “The reality is that I’m much more likely to interact with my friends, and they’re much more likely to interact with their friends,” Dr. Nowak said. “So it’s more like a network.”

Dr. Nowak and his colleagues found that when they put players into a network, the Prisoner’s Dilemma played out differently. Tight clusters of cooperators emerge, and defectors elsewhere in the network are not able to undermine their altruism. “Even if outside our network there are cheaters, we still help each other a lot,” Dr. Nowak said. That is not to say that cooperation always emerges. Dr. Nowak identified the conditions under which it can arise with a simple equation: B/C > K. That is, cooperation will emerge if the benefit-to-cost ratio (B/C) of cooperation is greater than the average number of neighbors (K).
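The rule reads as simply in code as in prose; the numbers below are made up purely to show the two cases:

```python
# Reading Dr. Nowak's rule B/C > K with invented numbers: cooperation can
# spread on a network only if its benefit-to-cost ratio exceeds the
# average number of neighbors.

def cooperation_can_emerge(benefit, cost, avg_neighbors):
    return benefit / cost > avg_neighbors

# A sparse social cluster: helping costs 1, confers a benefit of 5, and
# everyone has 4 neighbors on average.
print(cooperation_can_emerge(benefit=5, cost=1, avg_neighbors=4))  # True

# The same act of help in a denser network, with 8 neighbors on average:
print(cooperation_can_emerge(benefit=5, cost=1, avg_neighbors=8))  # False
```

The denser the web of contacts, the more generous an act has to be before cooperation can take hold.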

“It’s the simplest possible thing you could have expected, and it’s completely amazing,” he said.

Another boost for cooperation comes from reputations. When we decide whether to cooperate, we don’t just rely on our past experiences with that particular person. People can gain reputations that precede them. Dr. Nowak and his colleagues pioneered a version of the Prisoner’s Dilemma in which players acquire reputations. They found that if reputations spread quickly enough, they could increase the chances of cooperation taking hold. Players were less likely to be fooled by defectors and more likely to benefit from cooperation.

In experiments conducted by other scientists with people and animals, Dr. Nowak’s mathematical models seem to fit. Reputation has a powerful effect on how people play games. People who gain a reputation for not cooperating tend to be shunned or punished by other players. Cooperative players get rewarded.

“You help because you know it gives you a reputation of a helpful person, who will be helped,” Dr. Nowak said. “You also look at others and help them according to whether they have helped.”

The subject of human cooperation is important not just to mathematical biologists like Dr. Nowak, but to many people involved in the current debate over religion and science. Some claim that it is unlikely that evolution could have produced humans’ sense of morality, the altruism of heroes and saints. “Selfless altruism presents a major challenge for the evolutionist,” Dr. Francis S. Collins, the director of the National Human Genome Research Institute, wrote in his 2006 book, “The Language of God.”

Dr. Nowak believes evolutionary biologists should study average behavior rather than a few extreme cases of altruism. “Saintly behavior is unfortunately not the norm,” Dr. Nowak said. “The current theory can certainly explain a population where some people act extremely altruistically.” That does not make Dr. Nowak an atheist, however. “Evolution describes the fundamental laws of nature according to which God chose to unfold life,” he declared in March in a lecture titled “Evolution and Christianity” at the Harvard Divinity School. Dr. Nowak is collaborating with theologians there on a project called “The Evolution and Theology of Cooperation,” to help theologians address evolutionary biology in their own work.

Dr. Nowak sometimes finds his scientific colleagues astonished when he defends religion. But he believes the astonishment comes from a misunderstanding of the roles of science and religion. “Like mathematics, many theological statements do not need scientific confirmation. Once you have the proof of Fermat’s Last Theorem, it’s not like we have to wait for the scientists to tell us if it’s right. This is it.”

By ZHANG XUEGANG
Published: July 31, 2007

On Aug. 8, 2007, ASEAN marks the 40th anniversary of its founding. Over those four decades, ASEAN has achieved remarkable things in political development, economic construction and regional cooperation, becoming the most important grouping of states in Southeast Asia and a regional force on the international stage that cannot be ignored. At the same time, as one of the world’s densest concentrations of developing countries, ASEAN faces internal and external challenges to its continued progress.

ASEAN’s Arduous Road

ASEAN’s birth and growth have been bound up with the international environment, the international balance of power and the development of the Southeast Asian countries themselves, and the road has been full of twists.

On Aug. 8, 1967, Indonesia, Thailand, Singapore, the Philippines and Malaysia issued the ASEAN Declaration, formally proclaiming the founding of the Association of Southeast Asian Nations. In the early years, however, frictions among the member states were acute.

The 1970s were a period of consolidation. As the United States and the Soviet Union intensified their contest for spheres of influence, the ASEAN countries’ sense of autonomy grew, and they sought stronger collective security cooperation to offset superpower control of the region. In 1971, ASEAN issued the Kuala Lumpur Declaration, which aimed to make Southeast Asia a “zone of peace, freedom and neutrality.” In 1973, ASEAN collectively rejected the Soviet proposal for an “Asian collective security system.” In 1976, the first ASEAN summit signed the Treaty of Amity and Cooperation in Southeast Asia and the Declaration of ASEAN Concord, marking ASEAN’s emergence as a regional force on the international stage. Brunei joined in 1984.

Since the early 1990s, ASEAN has been in a period of rapid development. The end of the bipolar order opened a strategic window for all-around cooperation at home and abroad, while exposing ASEAN to the stronger currents of globalization. In 1992, the fourth ASEAN summit adopted three strategies: deepening cooperation, admitting new members and building a regional multilateral security dialogue. It began carrying out a “greater ASEAN” plan, admitting Vietnam (1995), Laos (1997), Myanmar (1997) and Cambodia (1999), to form a bloc of 10 member states covering 4.48 million square kilometers with a population of about 530 million. By the end of 2006, ASEAN’s combined gross domestic product had reached $500 billion, about $1,000 per capita, an economy nearly one-third the size of China’s; the group has two observer states (East Timor and Papua New Guinea) and 10 dialogue partners (the United States, China, Japan, India, South Korea, Australia, the European Union, New Zealand, Canada and Russia).

Standing on Self-Reliance

Except for Indonesia, the countries of Southeast Asia are all small or medium-size states, and most were once Western colonies. After World War II they gained independence one after another, only to become battlegrounds and victims of the superpowers’ cold war, of which the Vietnam War is the most painful memory. That history of surviving in the cracks, their fates at the mercy of others, has taught Southeast Asian countries to prize “unity and self-strengthening.” Since its founding, ASEAN has worked to advance internal integration and raise its collective strength so that it can face the world united.

It has committed itself to regional integration. In October 2003, the ASEAN summit adopted the Bali Concord II, which calls for building ASEAN into three communities, economic, security and sociocultural, by 2020. In 2005, ASEAN’s leaders proposed drafting an ASEAN Charter to put the organization on a more institutional footing, and in early 2007 the summit decided to complete the charter by the end of the year. Since 2005, the economic ministers of the 10 member states have signed three economic-community agreements covering trade, tourism, aviation, storage and transport services, and visa exemption. The 2007 summit moved the target date for the economic community up to 2015.

After the Cold War, ASEAN weathered a series of transnational crises, among them the financial crisis, SARS, the tsunami and avian flu; its view of security changed profoundly, and it stepped up cooperation on nontraditional security. In May 2006, ASEAN held its first meeting of defense ministers, reaffirming the goal of a security community by 2020 and concentrating on nontraditional threats like piracy, terrorism and transnational crime. ASEAN has also actively promoted security cooperation across regions.

ASEAN has also strengthened its internal machinery for communication and consultation, gradually building a full set of working mechanisms: the summit, the foreign ministers’ meeting, the standing committee, the economic ministers’ meeting, other ministerial meetings, the ASEAN Secretariat, specialized committees, and civil and semiofficial bodies. These mechanisms have done much to preserve ASEAN’s unity, settle disputes among members and promote common development.

A Distinctive Strategic Culture

Since its founding, ASEAN has pursued a strategy of balance among the great powers and an independent, peaceful foreign policy, forming a distinctive diplomatic and strategic culture that the world has dubbed the “ASEAN way.”

First, balancing the great powers is the fulcrum of ASEAN diplomacy. The ASEAN countries understand that only by exercising the collective “wisdom” of small and medium-size states, and balancing the powers deftly, can they best protect their own security and prosperity. The strategy operates on at least two levels. At the macro level, ASEAN keeps its diplomacy toward all the great powers in rough equilibrium, avoiding over-reliance on any one of them lest it become that power’s vassal or hired gun. At the middle and micro levels, it does not rule out drawing closer to a particular power, but the point is never to fall into that power’s embrace; it is to borrow that power’s influence to check and deter another, and in the end to keep all the powers at an equal distance.

Since the collapse of the bipolar order, ASEAN has gradually shed its overdependence on the United States and begun building an East Asian security framework with ASEAN at the core and the great powers balancing and checking one another. In 1994, ASEAN created the ASEAN Regional Forum, the Asia-Pacific’s only official security cooperation mechanism, drawing the great powers onto ASEAN’s stage for the first time and showcasing its balancing diplomacy. ASEAN also initiated the first Asia-Europe Meeting in 1996 and set up cooperation forums with Latin American and Middle Eastern countries. Since 2002, it has established “ASEAN plus one” relationships with each of the Asia-Pacific powers. Consider the Treaty of Amity and Cooperation in Southeast Asia, ASEAN’s political charter for external cooperation: China, Japan, India, Australia, Russia, France and other powers have acceded one after another, a sign that the “ASEAN way” is winning ever wider acceptance.

Second, insisting on its own central role in regional cooperation is the axis of ASEAN diplomacy. In cross-regional cooperation, ASEAN has always stressed that it should play the leading part, preserving its own character rather than serving as a foil for the great powers, in keeping with the founding spirit of self-reliance. As early as the beginning of the 1990s, ASEAN members floated the idea of an “East Asian economic caucus.” On Dec. 15, 1997, ASEAN proposed and hosted the first summit of ASEAN with China, Japan and South Korea, inaugurating the “10+3” dialogue. Over the decade since, ASEAN has built an East Asian cooperation system with itself at the core and the 10+3 and 10+1 mechanisms as its main components. In December 2005, that system gained a new platform, the East Asia Summit (10+6); with India, Australia and New Zealand joining, ASEAN’s central role grew further. It is fair to say that East Asian cooperation owes much of its current vitality to ASEAN’s years of promotion.

A New Bright Spot in East Asian Cooperation

In recent years, China and ASEAN have held similar political positions, and their economies have become thoroughly intertwined; the relationship is a model of the new kind of good-neighborly ties China seeks with developing countries.

Politically, mutual trust has deepened. In July 1991, the Chinese foreign minister, Qian Qichen, attended the 24th ASEAN foreign ministers’ meeting, China’s first direct contact with ASEAN; the Chinese foreign minister has attended every year since. In July 1994, China took part in the first ASEAN Regional Forum in Bangkok as a “consultative partner,” and in July 1996 it was upgraded to “full dialogue partner,” marking a deeper stage of trust and cooperation. In 2003, the two sides raised the relationship to a “strategic partnership for peace and prosperity,” and that year China became the first country outside ASEAN to accede to the Treaty of Amity and Cooperation, drawing Japan, India, Australia and others in after it. China and ASEAN have now established five parallel dialogue and cooperation mechanisms: senior-official consultations, a business council, a joint cooperation committee, a joint committee on economic relations and trade, and a joint committee on science and technology.

Economically, cooperation has advanced by leaps and bounds. In November 2001, at the fifth ASEAN-China (10+1) leaders’ meeting in Brunei, the two sides decided to build a China-ASEAN free trade area within 10 years. In 2002, they signed the Framework Agreement on Comprehensive Economic Cooperation and an agreement to complete the free trade area by 2010, further deepening economic ties. In July 2005 they signed the framework’s Agreement on Trade in Goods, and in January 2007 its Agreement on Trade in Services, clearing further obstacles to the free trade area. By the end of 2006, two-way trade had passed $150 billion, making China ASEAN’s fourth-largest trading partner and ASEAN China’s eighth-largest; two-way trade is expected to pass the $200 billion mark by 2010.

The Road to Integration Remains Rough

Forty years of hard exploration have brought ASEAN impressive results, but the new century confronts it with stern challenges.

First, internal cohesion has declined, and community-building will not be smooth sailing. The question of Myanmar has long divided ASEAN: some members want to join the West in pressuring the junta to make concessions, while others favor dialogue and encouraging Myanmar down a path of gradual democratic reform. Differences have also surfaced among members over the drafting of the ASEAN Charter.

Second, political turmoil in several member states has blocked harmonious development. Uncertainty in Myanmar is growing, and stepped-up intervention and pressure from the United States, India and the European Union have left ASEAN uneasy and on the defensive. In Thailand since the military coup, the army, the palace, the democrats, Thaksin’s supporters and Islamic separatists in the south have been locked in a contest, and tensions are still rising. President Arroyo of the Philippines, having survived impeachment attempts in Congress and an abortive coup, has steadied herself for now, but her governing base has been weakened and she is seen as a lame duck. President Susilo of Indonesia is hard pressed by domestic crises, with a new election approaching and internal relations still to be sorted out. In Malaysia, a war of words between the current and former prime ministers reflects deep conflicts between old and new interest groups. And Vietnam’s political reform increasingly touches substantive questions, with growing effects on society and the economy.

Third, member economies lack staying power, and the strains of social transition are acute. Singapore and Malaysia, buoyed by China and India, have kept growing rapidly in recent years, but they face heavy external dependence, declining international competitiveness and flagging momentum, and have responded with plans for a knowledge economy and a “multimedia corridor.” Thailand practiced “Thaksinomics” for years with considerable success, but political conflict sharpened in 2006, domestic politics grew turbulent and the economy entered a period of adjustment. Indonesia and the Philippines have long had “growth without development,” with falling foreign investment and little to show for their expansion. Vietnam, Laos and Cambodia are growing quickly but remain in the early stages of the shift from planned to market economies, with weak foundations and mounting social risks.

Fourth, nontraditional security challenges, among them terrorism, avian flu, tsunamis, transnational crime, drug trafficking and human trafficking, continue to dog the ASEAN countries.

By SHI GANG
Published: July 31, 2007

Although the Taliban several times postponed its deadline for executing the Korean hostages, the bad news came anyway. On the morning of July 26, the South Korean government confirmed that the leader of the 23 Koreans kidnapped by the Taliban, the 42-year-old pastor Bae Hyung-kyu, had been killed. The lives of the remaining 22 hostages hang in the balance.

In today’s climate of rampant terrorism, kidnappings and bombings have become the “standard tools” of many terrorist organizations; bloody as they are, one cannot deny that they exert a certain deterrent effect. But openly kidnapping civilians to blackmail a government is rare in the Taliban’s history, if not unheard of. At least before 2005, the Taliban seldom attacked civilians. What has happened to the Taliban?

In the past few years, the Taliban has made wide use of suicide attacks and car bombs in Afghanistan, but the targets were mostly NATO troops or Afghan military personnel; those now-familiar “terrorist methods” could be understood as the unconventional tactics of a movement facing a far stronger opponent. The Taliban once controlled nearly all of Afghanistan and ran a government, then overnight became “rebels” and “insurgents” whose very survival was in question; the enormous psychological fall, from rulers to outlaws, drove it to try every means of staging a comeback, which is not hard to understand. Moreover, after the Afghan war many countries, including the United States, Britain and other Western states, did not put the Taliban on their lists of terrorist organizations. Beyond tacitly accepting it as a “former government,” they found its conduct hard to link directly to terrorism: it fought only NATO and government forces and avoided harming civilians.

But a careful look at the years since the Afghan war shows that the Taliban has in fact been sliding, and continues to slide, toward terrorism. The slide has innate causes and owes something to the slow workings of the international environment, but above all it reflects the Taliban’s inseparable “alliance” with Al Qaeda.

The Taliban, which rose out of chaos under the banner of religious extremism, won the support of most Afghans with its political slogan of “eliminating the warlords, restoring peace and rebuilding the homeland,” growing from an initial student force of 800 into the governing party of a national regime two years later. But the bond with religious extremism, while producing its brief glory, also planted the seeds of its later defeat and turn to terror. Religious extremism, a product that runs against the times, generates a fanaticism and irrationality that maximize recruitment and inspire followers to die for the cause. Once such an ideology falls from power, it goes to extremes; the fusion of radicalism and dogma shaped the Taliban’s later terrorist form, and the reckless drive to rise again supplied a “natural justification” for the road of terror.

The chaos in Iraq has undoubtedly influenced the Taliban deeply. Through unbridled terrorist attacks, Iraq has become the most chaotic place on earth and a nightmare for the American-led coalition. Kidnappings, bombings and suicide attacks have shocked the world and tormented the “occupiers”; America’s eagerness to extricate itself and the withdrawals of various Western contingents seem to prove the attacks’ “effectiveness.” That “successful” experience began to resonate within the Taliban leadership, including Mullah Dadullah, the Taliban military commander killed in Helmand Province in southern Afghanistan in May. Under the notoriously brutal Dadullah, guerrilla war and terror became the Taliban’s main modes of fighting, and terrorist methods “popular” in Iraq, like kidnappings and beheading videos, were grafted onto Afghanistan.

What is frightening is that such behavior, far removed from the Taliban’s founding purpose, has won the support of its top leadership. In 2005, Mullah Omar threatened that the Taliban had formed a suicide corps of more than 2,000 men, ready to launch a “suicide blitz” inside Afghanistan in revenge against American forces and all Afghans who cooperated with the United States.

The union with Al Qaeda lies behind both the fall of the Taliban government and its terrorization today. In pursuit of the dream of Islamic extremism, the combination was at first a winning proposition for both: the Taliban gained outside support and grew rapidly stronger, while Al Qaeda gained a haven from which to expand its global network. But after the American intervention and the collapse of the Taliban government, the guest inevitably came to overshadow the host. Many Taliban leaders have been deeply influenced by Al Qaeda’s thinking, and terrorism has seeped into their operational planning.

There are reports that the Taliban groups active along the Afghanistan-Pakistan border are discussing a merger with Al Qaeda and have made some progress; Al Qaeda has reportedly promised not only to train “holy warriors” for the Taliban but also to help it resume terrorist attacks of every kind once the snow melts in the mountains of southern Afghanistan. True or not, the line between the two will only blur, and a Taliban remade in Al Qaeda’s image will end on the road of terror.

When old aspirations are buried by rage, and a vigorous student army has degenerated into ruthless thugs, any means becomes the last resort; the Taliban has changed utterly. For power it will sacrifice civilian lives, political reputation and the original dream; today’s Taliban is sliding into the abyss of terror. The Taliban once banned opium cultivation, and poppy fields had nearly vanished by the end of its rule; now, for money, it has long since joined hands with the drug lords. Afghanistan once had no suicide bombings; having learned them from Al Qaeda, the Taliban now uses them widely. The Taliban first took up arms to punish warlords who kidnapped women; now it kidnaps too, and its victims are members of a medical mission come from afar. It is hard to call that anything but a bitter irony. (Counterterrorism Center, China Institutes of Contemporary International Relations)

By DENNIS HEVESI
Published: July 30, 2007

Odile Crick, an artist whose original sketch of the double helix of DNA, the genetic blueprint for life, became a symbol of modern molecular biology, died July 5 at her home in La Jolla, Calif. She was 86.

[Photo caption: Mrs. Crick’s illustration of DNA’s double-helix structure first appeared in the journal Nature in 1953.]

The cause was cancer, said her stepson, Michael Crick, who added that the family had not announced Mrs. Crick’s death until last week.

The structure of DNA, or deoxyribonucleic acid, was discovered in 1953 by Mrs. Crick’s husband, Francis H. C. Crick, and James D. Watson. The breakthrough laid the foundation for molecular biology by making it clear that the DNA molecule is the medium in which genetic information is stored and passed from generation to generation.

The double helix consists of two chains of DNA spiraling in opposite directions, each made up of four types of chemical units that are linked together. The sequence of those chemical units is the basis for genes, which signal the synthesis of the essential components of every living cell. Dr. Crick, who died in 2004, and Dr. Watson were awarded the Nobel Prize for medicine in 1962.

In a brief interview on Thursday, Dr. Watson recalled why he and his colleague had asked Mrs. Crick to make the original black-and-white sketch — based on their mathematical analysis of a pattern of spots revealed by a process called X-ray crystallography — for the April 1953 issue of the journal Nature.

“Francis can’t draw, and I can’t draw, and we need something done quick,” Dr. Watson said. The drawing “showed the essence of the structure,” he said. “And it became historically important, reproduced over and over.”

Dr. Watson pointed out that his sister, Betty, had been recruited to type the historic research paper.

Terrence Sejnowski, the Francis Crick professor at the Salk Institute for Biological Studies, in La Jolla, said Mrs. Crick’s sketch “has iconic importance beyond its scientific value; it came to symbolize man’s discovery of the biological basis of life and evolution.”

While the original work accurately portrayed the spacing of the helixes and the locations of the nucleic acids, Dr. Sejnowski said, it did not include the locations of all the atoms. Still, he said, “all the original textbooks, all the original scientific articles referenced that sketch as the starting point for the variations” that have followed.

Odile Speed was born in King’s Lynn, Norfolk, England, on Aug. 11, 1920, the daughter of Alfred and Marie-Thérèse Speed. Her mother was French; her father, a jeweler, was British. Mrs. Crick was an art student in Vienna when the Nazis occupied Austria in 1938. She returned to Britain, joined the Women’s Royal Naval Service, and because of her fluency in German became a code-breaker and translator of secret documents.

She and Dr. Crick married in 1949; he had previously been married. In addition to her stepson, of Bellevue, Wash., Mrs. Crick is survived by a brother, Philippe, and two daughters, Gabrielle Crick and Jacqueline Nichols, all of whom live in Britain; two grandchildren; and four step-grandchildren.

In the late 1970s, when Dr. Crick was offered a professorship at the Salk Institute, the family moved to La Jolla. Over the years, several exhibitions have been held of Mrs. Crick’s paintings, which her stepson said have been described as Rubenesque nudes.

Michael Crick said his stepmother “never wanted to make a big fuss” about her famous double-helix drawing. In fact, on the day in 1953 when her husband and Dr. Watson realized that they had finally made a major scientific breakthrough, she sort of shrugged.

In his memoir, “What Mad Pursuit,” Dr. Crick recalled going home that day and telling his wife of the historic discovery. Only years later, he wrote, had Mrs. Crick told him that she did not believe a word of it, saying, “You were always coming home and saying things like that, so naturally I thought nothing of it.”

The amazing part comes next. Berlin, in the brown T-shirt, comes back into the room and tries to open the lock on the first box. Leo sees Berlin struggling, and it decides to help by pressing a lever that will deliver to Berlin the item he’s looking for. Leo presses the lever for the chips. It knows that there are cookies in the box that Berlin is trying to open, but it also knows — and this is the part that struck me as so amazing — that Berlin is trying to open the box because he wants chips. It knows that Berlin has a false belief about what is in the first box, and it also knows what Berlin wants. If Leo had indeed passed this important developmental milestone, I wondered, could it also be capable of all sorts of other emotional tasks: empathy, collaboration, social bonding, deception?

Unfortunately, Leo was turned off the day I arrived, inertly presiding over one corner of the lab like a fuzzy Buddha. Berlin and Gray and their colleague, Andrea Thomaz, a postdoctoral researcher, said that they would be happy to turn on the robot for me but that the process would take time and that I would have to come back the next morning. They also wanted to know what it was in particular that I wanted to see Leo do because, it turned out, the robot could go through its paces only when the right computer program was geared up. This was my first clue that Leo maybe wasn’t going to turn out to be quite as clever as I had thought.

When I came back the next day, Berlin and Gray were ready to go through the false-belief routine with Leo. But it wasn’t what I expected. In person I could see everything I had seen on the video, but I could also peek behind the metaphoric curtain and see something that the video camera hadn’t revealed: the computer monitor that showed what Leo’s cameras were actually seeing and another monitor that showed the architecture of Leo’s brain. I could see that this wasn’t a literal demonstration of a human “theory of mind” at all. Yes, there was some robotic learning going on, but it was mostly a feat of brilliant computer programming, combined with some dazzling Hollywood special effects.

It turned out Leo wasn’t seeing the young men’s faces or bodies; it was seeing something else. Gray and Berlin were each wearing a headband and a glove, which I hadn’t noticed in the video, and the robot’s optical motion tracking system could see nothing but the unique arrangements of reflective tape on their accessories. What the robot saw were bunches of dots. Dots in one geometric arrangement meant Person A; in a different arrangement, they meant Person B. There was a different arrangement of tape on the two different snacks, too, and also on the two different locks for the boxes. On a big monitor alongside Leo was an image of what was going on inside its “brain”: one set of dots represented Leo’s brain; another set of dots represented Berlin’s brain; a third set of dots represented Gray’s. The robot brain was programmed to keep track of it all.

Leo did not learn about false beliefs in the same way a child did. Robot learning, I realized, can be defined as making new versions of a robot’s original instructions, collecting and sorting data in a creative way. So the learning taking place here was not Leo’s ability to keep track of which student believed what, since that skill had been programmed into the robot. The learning taking place was Leo’s ability to make inferences about Gray’s and Berlin’s actions and intentions. Seeing that Berlin’s hand was near the lock on Box 1, Leo had to search through its internal set of task models, which had been written into its computer program, and figure out what it meant for a hand to be moving near a lock and not near, say, a glass of water. Then it had to go back to that set of task models to decide why Berlin might have been trying to open the box — that is, what his ultimate goal was. Finally, it had to convert its drive to be helpful, another bit of information written into its computer program, into behavior. Leo had to learn that by pressing a particular lever, it could give Berlin the chips he was looking for. Leo’s robot learning consisted of integrating the group of simultaneous computer programs with which it had begun.
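A rough sense of that bookkeeping can be conveyed in a short sketch. Everything in it, the names, the data structures, the task-model lookup, is hypothetical, meant only to show the flavor of separate belief models plus goal inference, not the lab’s actual software:

```python
# Hypothetical bookkeeping for the demonstration described above: a world
# model, separate belief models per person, and a lookup of task models.
# All names and structures here are invented for illustration.

world = {"box1": "cookies", "box2": "chips"}   # what Leo itself last saw

beliefs = {
    # Berlin left before the snacks were swapped, so his model is stale:
    "Berlin": {"box1": "chips", "box2": "cookies"},
    "Leo": dict(world),
}

task_models = {
    # observed action -> the box that action usually signals a wish to open
    ("hand_near", "lock_box1"): "box1",
    ("hand_near", "lock_box2"): "box2",
}

def infer_desire(agent, action):
    """What does the agent want? Use the agent's beliefs, not Leo's."""
    box_being_opened = task_models[action]
    return beliefs[agent][box_being_opened]   # the item the agent THINKS is inside

def helpful_act(agent, action):
    wanted = infer_desire(agent, action)
    # Locate the wanted item in Leo's own, up-to-date world model:
    actual_box = next(box for box, item in world.items() if item == wanted)
    return "press lever for " + actual_box + " (" + wanted + ")"

print(helpful_act("Berlin", ("hand_near", "lock_box1")))
# -> press lever for box2 (chips): Berlin falsely believes the chips are in box1
```

The trick is simply that the program consults Berlin’s stale belief table to decide what he wants, and its own fresh one to decide how to help.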

Leo’s behavior might not have been an act of real curiosity or empathy, but it was an impressive feat nonetheless. Still, I felt a little twinge of disappointment, and for that I blame Hollywood. I’ve been exposed to robot hype for years, from the TV of my childhood — Rosie the robot maid on “The Jetsons,” that weird talking garbage-can robot on “Lost in Space” — to the more contemporary robots-gone-wild of films like “Blade Runner” and “I, Robot.” Despite my basic cold, hard rationalism, I was prepared to be bowled over by a robot that was adorable, autonomous and smart. What I saw in Leo was no small accomplishment in terms of artificial intelligence and the modeling of human cognition, but it was just not quite the accomplishment I had been expecting. I had been expecting something closer to “real.”

Why We Might Want to Hug a Desk Lamp

I had been seduced by Leo’s big brown eyes, just like almost everyone else who encounters the robot, right down to the students who work on its innards. “There we all are, soldering Leonardo’s motors, aware of how it looks from behind, aware that its brain is just a bunch of wires,” Guy Hoffman, a graduate student, told me. Yet as soon as they get in front of it, he said, the students see its eyes move, see its head turn, see the programmed chest motion that looks so much like breathing, and they start talking about Leo as a living thing.

People do the same thing with a robotic desk lamp that Hoffman has designed to move in relation to a user’s motions, casting light wherever it senses the user might need it. It’s just a lamp with a bulky motor-driven neck; it looks nothing like a living creature. But, he said, “as soon as it moves on its own and faces you, you say: ‘Look, it’s trying to help me.’ ‘Why is it doing that?’ ‘What does it want from me?’ ”

When something is self-propelled and seems to engage in goal-directed behavior, we are compelled to interpret those actions in social terms, according to Cynthia Breazeal, the M.I.T. roboticist. That social tendency won’t turn off when we interact with robots. But instead of fighting it, she said, “we should embrace it so we can design robots in a way that makes sense, so we can integrate robots into our lives.”

The brain activity of people who interacted with Cog and Kismet, and with their successors like Mertz, is probably much the same as the brain activity of someone interacting with a real person. Neuroscientists recently found a collection of brain cells called mirror neurons, which become activated in two different contexts: when someone performs an activity and when someone watches another person perform the same activity. Mirror-neuron activation is thought to be the root of such basic human drives as imitation, learning and empathy. Now it seems that mirror neurons fire not only when watching a person but also when watching a humanoid robot. Scientists at the University of California, San Diego, reported last year that brain scans of people looking at videos of a robotic hand grasping things showed activity in the mirror neurons. The work is preliminary, but it suggests something that people in the M.I.T. robotics labs have already seen: when these machines move, when they direct their gaze at you or lean in your direction, they feel like real creatures.

Would a Robot Make a Better Boyfriend?

Cog, Kismet and Mertz might feel real, but they look specifically and emphatically robotic. Their gears and motors show; they have an appealing retro-techno look, evoking old-fashioned images of the future, not too far from the Elektro robot of the 1939 World’s Fair, which looked a little like the Tin Man of “The Wizard of Oz.” This design was in part a reflection of a certain kind of aesthetic sensibility and in part a deliberate decision to avoid making robots that look too much like us.

Another robot-looking robot is Domo, whose stylized shape somehow evokes the Chrysler Building almost as much as it does a human. It can respond to some verbal commands, like “Here, Domo,” and can close its hand around whatever is placed in its palm, the way a baby does. Shaking hands with Domo feels almost like shaking hands with something alive. The robot’s designer, Aaron Edsinger, has programmed it to do some domestic tricks. It can grab a box of crackers placed in its hand and put it on a shelf and then grab a bag of coffee beans — with a different grip, based on sensors in its mechanical hand — and put it, too, on a shelf. Edsinger calls this “helping with chores.” Domo tracks objects with its big blue eyes and responds to verbal instructions in a high-pitched artificial voice, repeating the words it hears and occasionally adding an obliging “O.K.”

Domo’s looks are just barely humanoid, but that probably works to its advantage. Scientists believe that the more a robot looks like a person, the more favorably we tend to view it, but only up to a point. After that, our response slips into what the Japanese roboticist Masahiro Mori has called the “uncanny valley.” We start expecting too much of the robots because they so closely resemble real people, and when they fail to deliver, we recoil in something like disgust.

If a robot had features that made it seem, say, 50 percent human, 50 percent machine, according to this view, we would be willing to fill in the blanks and presume a certain kind of nearly human status. That is why robots like Domo and Mertz are interpreted by our brains as creaturelike. But if a robot has features that make it appear 99 percent human, the uncanny-valley theory holds that our brains get stuck on that missing 1 percent: the eyes that gaze but have no spark, the arms that move with just a little too much stiffness. This response might be akin to an adaptive revulsion at the sight of corpses. A too-human robot looks distressingly like a corpse that moves.

This zombie effect is one aspect of a new discipline that Breazeal is trying to create called human-robot interaction. Last March, Breazeal and Alan Schultz of the Naval Research Laboratory convened the field’s second annual conference in Arlington, Va., with presentations as diverse as describing how people react to instructions to “kill” a humanoid robot and a film festival featuring videos of human-robot interaction bloopers.

To some observers, the real challenge is not how to make human-robot interaction smoother and more natural but how to keep it from overshadowing, and eventually seeming superior to, a different, messier, more complicated, more flawed kind of interaction — the one between one human and another. Sherry Turkle, a professor in the Program in Science, Technology and Society at M.I.T., worries that sociable robots might be easier to deal with than people are and that one day we might actually prefer our relationships with our machines. A female graduate student once approached her after a lecture, Turkle said, and announced that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what the student called “caring behavior.” “I need the feeling of civility in the house,” she told Turkle. “If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me.” What she was looking for, the student said, was a “no-risk relationship” that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than an unresponsive boyfriend.

The encounter horrified Turkle, who thought it revealed how dangerous, and how seductive, sociable robots could be. “They push our Darwinian buttons,” she told me. Sociable robots are programmed to exhibit the kind of behavior we have come to associate with sentience and empathy, she said, which leads us to think of them as creatures with intentions, emotions and autonomy: “You see a robot like that as a creature; you feel a desire to nurture it. And with this desire comes the fantasy of reciprocation. You begin to care for these creatures and to want the creatures to care about you.”

If Lijin Aryananda, Rodney Brooks’s former student, had ever wanted Mertz to “care” about her, she certainly doesn’t anymore. On the day she introduced me to Mertz, Aryananda was heading back to a postdoctoral research position at the University of Zurich. Her new job is in the Artificial Intelligence Lab, and she will still be working with robots, but Aryananda said she wants to get as far away as possible from humanoids and from the study of how humans and robots interact.

“Anyone who tells you that in human-robot interactions the robot is doing anything — well, he is just kidding himself,” she told me, grumpy because Mertz was misbehaving. “Whatever there is in human-robot interaction is there because the human puts it there.”

Nagging, a Killer App

The building and testing of sociable robots remains a research-based enterprise, and when the robots do make their way out of the laboratory, it is usually as part of somebody’s experiment. Breazeal is now overseeing two such projects. One is the work of Cory Kidd, a graduate student who designed and built 17 humanoid robots to serve as weight-loss coaches. The robot coach, a child-size head and torso holding a small touch screen, is called Autom. It is able, using basic artificial-voice software, to speak approximately 1,000 phrases, things like “It’s great that you’re doing well with your exercise” or “You should congratulate yourself on meeting your calorie goals today.” It is programmed to get a little more informal as time goes on: “Hello, I hope that we can work together” will eventually shift to “Hi, it’s good to see you again.” It is also programmed to refer to things that happened on other days, with statements like “It looks like you’ve had a little more to eat than usual recently.”
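The progression from formal to familiar is presumably simple scheduling. Here is a hypothetical sketch in which only the quoted phrases come from Autom’s repertoire; the session thresholds and function names are invented:

```python
# A hypothetical sketch of Autom-style phrase scheduling. The quoted
# phrases appear in the article; the thresholds and logic are invented.

GREETINGS = [
    (0,  "Hello, I hope that we can work together."),  # early sessions
    (10, "Hi, it's good to see you again."),           # once some rapport is assumed
]

def greeting(sessions_so_far):
    """Return the most informal greeting unlocked by the session count."""
    chosen = GREETINGS[0][1]
    for threshold, phrase in GREETINGS:
        if sessions_so_far >= threshold:
            chosen = phrase
    return chosen

def daily_comment(recent_calories, usual_calories):
    """Refer back to earlier days, as the article says Autom is programmed to do."""
    if recent_calories > usual_calories:
        return "It looks like you've had a little more to eat than usual recently."
    return "You should congratulate yourself on meeting your calorie goals today."

print(greeting(1))                  # formal opener
print(greeting(15))                 # informal after enough sessions
print(daily_comment(2400, 2000))
```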

Kidd is recruiting 15 volunteers from around Boston to take Autom into their homes for six weeks. They will be told to interact with the robot at least once a day, recording food intake and exercise on its touch screen. The plan is to compare their experiences with those of two other groups of 15 dieters each. One group will interact with the same weight-loss coaching software through a touch screen only; the other will record daily food intake and exercise the old-fashioned way, with paper and pen. Kidd said that the study is too short-term to use weight loss as a measure of whether the robot is a useful dieting aid. But at this point, his research questions are more subjective anyway: Do participants feel more connected to the robot than they do to the touch screen? And do they think of that robot on the kitchen counter as an ally or a pest?

[Photo caption: Autom, your next demanding weight-loss coach?]

Breazeal’s second project is more ambitious. In collaboration with Rod Grupen, a roboticist at the University of Massachusetts in Amherst, she is designing and building four toddler-size robots. Then she will put them into action at the Boston Science Museum for two weeks in June 2009. The robots, which will cost several hundred thousand dollars each, will roll around in what she calls “a kind of robot Romper Room” and interact with a stream of museum visitors. The goal is to see whether the social competencies programmed into these robots are enough to make humans comfortable interacting with them and whether people will be able to help the robots learn to do simple tasks like stacking blocks.

The bare bones of the toddler robots already exist, in the form of a robot designed in Grupen’s lab called uBot-5. A few of these uBots are now being developed for use in assisted-living centers in research designed to see how the robots interact with the frail elderly. Each uBot-5 is about three feet tall, with a big head, very long arms (long enough to touch the ground, should the arms be needed for balance) and two oversize wheels. It has big eyes, rubber balls at the ends of its arms and a video screen for a face. (Breazeal’s version will have sleek torsos, expressive faces and realistic hands.) In one slide that Grupen uses in his PowerPoint presentations, the uBot-5 robot is holding a stethoscope to the chest of a woman lying on the ground after a simulated fall. The uBot is designed to connect by video hookup to a health care practitioner, but still, the image of a robot providing even this level of emergency medical care is, to say the least, disconcerting.

Does It Know It’s a Robot?

More disconcerting still is the image of a robot looking at itself in the mirror and waving hello — a robot with a primitive version of self-awareness. A first step in this direction occurred in September 2004 with reports from Yale about Nico, a humanoid robot. Nico, its designers announced, was able to recognize itself in a mirror. One of its creators, Brian Scassellati, earned his doctorate in 2001 at M.I.T., where he worked on Cog and Kismet — to which Nico bears a family resemblance. Nico has visible workings, a head, arms and torso made of steel and a graceful tilt to its shoulders and neck. Like the M.I.T. robots, Nico has no legs, because Scassellati, now an associate professor of computer science at Yale, wanted to concentrate on what it could do with its upper body and, in particular, the cameras in its eyes.

Here is how Nico learned to recognize itself. The robot had a camera behind its eye, which was pointed toward a mirror. When a reflection came back, Nico was programmed to assign the image a score based on whether it was most likely to be “self,” “another” or “neither.” Nico was also programmed to move its arm, which sent back information to the computer about whether the arm was moving. If the arm was moving and the reflection in the mirror was also moving, the program assigned the image a high probability of being “self.” If the reflection moved but Nico’s arm was not moving, the image was assigned a high probability of being “another.” If the image did not move at all, it was given a high probability of being “neither.”

Nico spent some time moving its arm in front of the mirror, so it could learn when its motor sensors were detecting arm movement and what that looked like through its camera. It learned to give that combination a high score for “self.” Then Nico and Kevin Gold, a graduate student, stood near each other, looking into the mirror, as the robot and the human took turns moving their arms. In 20 runs of the experiment, Nico correctly identified its own moving arm as “self” and Gold’s purposeful flailing as “another.”
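In outline, the classification rule fits in a few lines. A schematic sketch follows, with probability values invented for illustration rather than taken from the Yale implementation:

```python
# A schematic of the scoring rule described above. The probability values
# are invented for illustration; they are not the Yale group's numbers.

def classify_reflection(image_is_moving, arm_motor_is_moving):
    """Score a mirror image as 'self', 'another' or 'neither'."""
    if image_is_moving and arm_motor_is_moving:
        return {"self": 0.90, "another": 0.05, "neither": 0.05}
    if image_is_moving:
        return {"self": 0.05, "another": 0.90, "neither": 0.05}
    return {"self": 0.05, "another": 0.05, "neither": 0.90}

def best_label(scores):
    return max(scores, key=scores.get)

# Nico's arm moves and the reflection moves in step: most likely "self".
print(best_label(classify_reflection(image_is_moving=True, arm_motor_is_moving=True)))
# Kevin Gold waves while Nico's arm is still: most likely "another".
print(best_label(classify_reflection(image_is_moving=True, arm_motor_is_moving=False)))
```

The robot’s achievement, in other words, was learning to correlate its own motor signals with what its camera saw, not recognizing its face.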

One way to interpret this might be to conclude that Nico has a kind of self-awareness, at least when in motion. But that would be quite a leap. Robot consciousness is a tricky thing, according to Daniel Dennett, a Tufts philosopher and author of “Consciousness Explained,” who was part of a team of experts that Rodney Brooks assembled in the early 1990s to consult on the Cog project. In a 1994 article in The Philosophical Transactions of the Royal Society of London, Dennett posed questions about whether it would ever be possible to build a conscious robot. His conclusion: “Unlikely,” at least as long as we are talking about a robot that is “conscious in just the way we human beings are.” But Dennett was willing to credit Cog with one piece of consciousness: the ability to be aware of its own internal states. Indeed, Dennett believed that it was theoretically possible for Cog, or some other intelligent humanoid robot in the future, to be a better judge of its own internal states than the humans who built it. The robot, not the designer, might some day be “a source of knowledge about what it is doing and feeling and why.”

But maybe higher-order consciousness is not even the point for a robot, according to Sidney Perkowitz, a physicist at Emory. “For many applications,” he wrote in his 2004 book, “Digital People: From Bionic Humans to Androids,” “it is enough that the being seems alive or seems human, and irrelevant whether it feels so.”

In humans, Perkowitz wrote, an emotional event triggers the autonomic nervous system, which sparks involuntary physiological reactions like faster heartbeat, increased blood flow to the brain and the release of certain hormones. “Kismet’s complex programming includes something roughly equivalent,” he wrote, “a quantity that specifies its level of arousal, depending on the stimulus it has been receiving. If Kismet itself reads this arousal tag, the robot not only is aroused, it knows it is aroused, and it can use this information to plan its future behavior.” In this way, according to Perkowitz, a robot might exhibit the first glimmers of consciousness, “namely, the reflexive ability of a mind to examine itself over its own shoulder.”

Robot consciousness, it would seem, is related to two areas: robot learning (the ability to think, to reason, to create, to generalize, to improvise) and robot emotion (the ability to feel). Robot learning has already occurred, with baby steps, in robots like Cog and Leonardo, which are able to learn new skills that go beyond their initial capabilities. But what of emotion? Emotion is something we are inclined to think of as quintessentially human, something we only grudgingly admit might be taking place in nonhuman animals like dogs and dolphins. Some believe that emotion is at least theoretically possible for robots too. Rodney Brooks goes so far as to say that robot emotions may already have occurred — that Cog and Kismet not only displayed emotions but, in one way of looking at it, actually experienced them.

“We’re all machines,” he told me when we talked in his office at M.I.T. “Robots are made of different sorts of components than we are — we are made of biomaterials; they are silicon and steel — but in principle, even human emotions are mechanistic.” A robot’s level of a feeling like sadness could be set as a number in computer code, he said. But isn’t a human’s level of sadness basically a number, too, a function of the amounts of various neurochemicals circulating in the brain? Why should a robot’s numbers be any less authentic than a human’s?

“If the mechanistic explanation is right, then one can in principle make a machine which is living,” he said with a grin. That explains one of his longtime ultimate goals: to create a robot that you feel bad about switching off.

The permeable boundary between humanoid robots and humans has especially captivated Kathleen Richardson, a graduate student in anthropology at Cambridge University in England. “I wanted to study what it means to be human, and robots are a great way to do that,” she said, explaining the 18 months she spent in Brooks’s Humanoid Robotics lab in 2003 and 2004, doing fieldwork for her doctorate. “Robots are kind of ambiguous, aren’t they? They’re kind of like us but not like us, and we’re always a bit uncertain about why.”

To her surprise, Richardson found herself just as fascinated by the roboticists at M.I.T. as she was by the robots. She observed a kinship between human and humanoid, an odd synchronization of abilities and disabilities. She tried not to make too much of it. “I kept thinking it was merely anecdotal,” she said, but the connection kept recurring. Just as a portrait might inadvertently give away the painter’s own weaknesses or preoccupations, humanoid robots seemed to reflect something unintended about their designers. A shy designer might make a robot that’s particularly bashful; a designer with physical ailments might focus on the function — touch, vision, speech, ambulation — that gives the robot builder the greatest trouble.

“A lot of the inspiration for the robots seems to come from some kind of deficiency in being human,” Richardson, back in England and finishing her dissertation, told me by telephone. “If we just looked at a machine and said we want the machine to help us understand about being human, I think this shows that the model of being human we carry with us is embedded in aspects of our own deficiencies and limitations.” It’s almost as if the scientists are building their robots as a way of completing themselves.

“I want to understand what it is that makes living things living,” Rodney Brooks told me. At their core, robots are not so very different from living things. “It’s all mechanistic,” Brooks said. “Humans are made up of biomolecules that interact according to the laws of physics and chemistry. We like to think we’re in control, but we’re not.” We are all, human and humanoid alike, whether made of flesh or of metal, basically just sociable machines.