Today I'm going to talk about technology and society. The Department of Transportation estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there were a way to eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents -- human error.

Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a group of pedestrians crossing the street, but the car may swerve, hitting one bystander and killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago as a way to think about ethics.

Now, the way we think about this problem matters. We may, for example, not think about it at all. We may say the scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point, because it takes the scenario too literally. Of course no accident is going to look exactly like this; no accident has two or three options in which everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people: if you swerve in one direction versus another, you might slightly increase the risk to passengers or other drivers relative to pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
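To make that kind of calculation concrete, here is a minimal Python sketch of an expected-harm comparison between maneuvers. Everything in it -- the maneuver names, the probabilities, the harm counts -- is invented for illustration; no real vehicle is programmed this way.

```python
# Hypothetical sketch: comparing maneuvers by expected harm.
# The maneuvers, probabilities, and harm counts below are invented
# for illustration; real systems would weigh far more factors.

def expected_harm(outcomes):
    """Sum of (probability of a collision) * (people harmed by it)."""
    return sum(p * harmed for p, harmed in outcomes)

# Each maneuver maps to a list of (probability, people harmed) pairs.
maneuvers = {
    "stay_course":    [(0.9, 3)],            # likely hits 3 pedestrians
    "swerve_left":    [(0.5, 1), (0.1, 2)],  # may hit a bystander or another car
    "swerve_to_wall": [(0.8, 1)],            # likely kills the passenger
}

for name, outcomes in maneuvers.items():
    print(f"{name}: expected harm = {expected_harm(outcomes):.2f}")

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print("lowest expected harm:", best)
```

Even this toy version shows where the ethics hides: whoever sets those probabilities and weights is deciding whose risk counts for how much.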
We might say then, "Well, let's not worry about this. Let's wait until the technology is fully ready and 100 percent safe." Suppose we can indeed eliminate 90 percent of those accidents, or even 99 percent, in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? At the current rate of 1.2 million deaths a year, that's 60 million people dead in car accidents over those 50 years. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.

People online, on social media, have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the passengers and the bystander. Of course, if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press just before the car self-destructs.

So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.

So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander, and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.
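The two options can be phrased as two decision rules. The sketch below is a hypothetical Python rendering of that contrast, not the survey's actual materials: a utilitarian rule that minimizes total harm, and a duty-bound rule that refuses to actively choose an action that harms anyone.

```python
# Hypothetical contrast between the two decision rules described above.
# "stay" is the default course; any other action is an active intervention.

def utilitarian_choice(actions):
    """Bentham: pick whichever action minimizes total harm."""
    return min(actions, key=actions.get)

def duty_bound_choice(actions):
    """Kant (roughly): never actively choose an action that harms a
    human being; otherwise, let the car take its course."""
    permitted = {a: h for a, h in actions.items() if a == "stay" or h == 0}
    return min(permitted, key=permitted.get) if permitted else "stay"

# Harm (people killed) for each available action in the dilemma.
actions = {"stay": 3, "swerve_to_bystander": 1, "swerve_to_wall": 1}

print(utilitarian_choice(actions))  # swerve_to_bystander: harm 1 beats 3
print(duty_bound_choice(actions))   # stay: refuses to actively kill anyone
```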
What do you think? Bentham or Kant? Here's what we found: most people sided with Bentham. So it seems that people want cars to be utilitarian, minimizing total harm, and that's what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm. We've seen this problem before. It's called a social dilemma.

To understand the social dilemma, we have to go a little bit back in history. In the 1800s, English economist William Forster Lloyd published a pamphlet that describes the following scenario. You have a group of farmers -- English farmers -- who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep -- let's say three sheep -- the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun and depleted, to the detriment of all the farmers, and of course, to the detriment of the sheep.

We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change. When it comes to the regulation of driverless cars, the common land is basically public safety -- that's the common good -- and the farmers are the passengers, or the car owners who choose to ride in those cars. By making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm. Traditionally, this is called the tragedy of the commons, but in the case of driverless cars, I think the problem may be a little more insidious, because there is not necessarily an individual human being making those decisions. Car manufacturers may simply program cars to maximize safety for their clients, and those cars may learn automatically, on their own, that doing so requires slightly increasing the risk for pedestrians. To use the sheep metaphor, it's like we now have electric sheep that have a mind of their own, and they may go and graze even if the farmer doesn't know it. So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges.
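Lloyd's arithmetic is easy to replay. Below is a toy Python payoff model of the commons; the field capacity and the payoff curve are invented purely to show how individually rational choices deplete a shared resource.

```python
# Toy payoff model of Lloyd's commons; all numbers are invented.
# Grazing value per sheep falls as the shared field gets more crowded.
CAPACITY = 30
FARMERS = 10

def per_sheep_value(total_sheep):
    """Value each sheep yields, declining with total crowding."""
    return max(0.0, 1.0 - total_sheep / (2 * CAPACITY))

def farmer_payoff(own_sheep, total_sheep):
    return own_sheep * per_sheep_value(total_sheep)

cooperate    = farmer_payoff(3, FARMERS * 3)      # everyone keeps 3 sheep
defect_alone = farmer_payoff(4, FARMERS * 3 + 1)  # one farmer adds a sheep
all_defect   = farmer_payoff(4, FARMERS * 4)      # every farmer adds one

print(f"everyone grazes 3: {cooperate:.2f} per farmer")
print(f"lone extra sheep:  {defect_alone:.2f} for the defector")
print(f"everyone grazes 4: {all_defect:.2f} per farmer")
# Adding a sheep always pays individually, yet when everyone does it,
# every farmer ends up worse off than when all cooperated.
```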
Typically, traditionally, we solve these types of social dilemmas using regulation: governments or communities get together and decide collectively what kind of outcome they want, and what sort of constraints on individual behavior they need to implement. Then, using monitoring and enforcement, they can make sure the public good is preserved. So why don't we, as regulators, just require that all cars minimize harm? After all, this is what people say they want. And more importantly, I could then be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.

In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well, if you regulate cars to do this and to minimize total harm, I will not buy those cars." So ironically, by regulating cars to minimize harm, we may actually end up with more harm, because people may not opt into the safer technology, even if it's much safer than human drivers.

I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with, and to come up with ways in which we can enforce those trade-offs.

As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios for you -- basically a bunch of random dilemmas in a sequence, where you have to choose what the car should do in a given scenario. We vary the ages and even the species of the different victims. So far we've collected over five million decisions from over one million people worldwide through the website. This is helping us form an early picture of what trade-offs people are comfortable with and what matters to them, even across cultures. But more importantly, doing this exercise helps people recognize the difficulty of making those choices, and that regulators are tasked with impossible choices. And maybe this will help us, as a society, understand the kinds of trade-offs that will ultimately be implemented in regulation.
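To give a feel for what the site does, here is a hypothetical sketch of a random dilemma generator in the same spirit. The character list, group sizes, and wording are my invention, not Moral Machine's actual implementation.

```python
import random

# Hypothetical generator of Moral-Machine-style dilemmas; the attribute
# list and phrasing are invented, not the site's actual code.
CHARACTERS = ["child", "adult", "elderly person", "dog", "cat"]

def random_group(max_size=3):
    """Pick 1..max_size potential victims, varying age and species."""
    return [random.choice(CHARACTERS) for _ in range(random.randint(1, max_size))]

def random_dilemma():
    return {
        "stay_course_hits": random_group(),  # victims if the car goes straight
        "swerve_hits": random_group(),       # victims if the car swerves
    }

random.seed(2030)  # fixed seed so the example is reproducible
dilemma = random_dilemma()
print("If the car stays, it hits: ", ", ".join(dilemma["stay_course_hits"]))
print("If the car swerves, it hits:", ", ".join(dilemma["swerve_hits"]))
print("What should the car do?")
```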
And indeed, I was very happy to hear that the first set of regulations from the Department of Transportation -- announced last week -- included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration: how are you going to deal with that?

We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- and I'm just going to warn you that this is not your typical example, not your typical user. These are the most sacrificed and the most saved characters for this person. Some of you may agree with him, or her -- we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices, and is very happy to punish jaywalking.

So let's wrap up. We started with a question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on, and enforce, the trade-offs it's comfortable with. It's a social dilemma.

In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in that order of importance. But after 40 years or so, and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law, which takes precedence above all: a robot may not harm humanity as a whole. I don't know what this means in the context of driverless cars, or in any specific situation, and I don't know how we can implement it. But I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, we can at least begin to ask the right questions.

Thank you.
|
|