The article is titled "Fears killer bots on the way that may wipe us out". The very first word is "fears", and the thought that robots might one day wipe us all out is indeed frightening.
First, what has happened in our world? Let's see what the original article says.
ON WEDNESDAY, over 50 artificial intelligence researchers reportedly co-signed a letter announcing that they will be boycotting a university in the Republic of Korea until it pledges to refrain from developing AI weapons without "meaningful human control".
According to reports, a laboratory, co-sponsored by Korea Advanced Institute of Science and Technology and an unidentified enterprise, plans to develop AI-based weapons.
The word "boycott", together with "until" (the boycott will not end until the condition is met), conveys very strong feeling.
"AI weapons" is another key term. For now we will call these AI weapons ("smart weapons" means something different; see below). From the very beginning, the article stresses what makes AI weapons so frightening: they operate without meaningful human control.
Next, to help us distinguish "AI weapons" from "smart weapons", the article first explains what a smart weapon is. The original text reads:
Smart weapons have long been in existence. For example, a smart missile launcher might be able to automatically locate its target, and a military Unmanned Aerial Vehicle might identify its route with the help of sensors and positioning systems.
However, no matter how smart they are, they are still under human control. It is a person who decides when to pull the trigger.
This passage captures the key distinction between "AI weapons" and "smart weapons", and "under human control" is the most important phrase of all.
Finally, the article explains why killer AI robots need to be banned. The key points are as follows: first, killer AI robots seek out attack targets entirely on their own; second, if machines are given the power to decide human fates, where does human dignity stand? Third, killer AI robots have no moral or ethical sense, which is terrifying even to contemplate.
Autonomous weapons are different. It is the AI that decides whether to launch a strike upon a military target.
That is, of course, extremely dangerous. Most military targets are humans or have humans inside, and it is a deep violation of human dignity if a machine is empowered to decide a human's fate.
Worse, AIs lack moral or ethical obligations. If a total war breaks out, they might destroy humankind entirely. That is why the majority of AI researchers oppose the development of autonomous weapons.
You are welcome to follow the WeChat official account Lucy的理想国, or search for the WeChat ID LUCYANDABC.