
Should your self-driving car kill you to save a school bus full of kids?

It’s the near future and you’re reading this on your way to work in your self-driving car. The human driver of the car in front of yours slams on the brakes. Your car reacts at something close to the speed of light, so it has time to realize that the stopping distance is too short and to see that the lane next to you is empty.

A quick swerve barely interrupts your morning browse of the headlines. The system works.


Now it’s ten years later. Human-driven cars have been banned from the major commuter routes because they’re unsafe at any speed. Wouldn’t you know it, exactly the same situation comes up. This time, though, your car accelerates and slams itself into a nearby abutment, knowing full well that the safety equipment isn’t going to save you.

Your car murdered you. As it should have.

In this second scenario, not only are all the cars on the highway autonomous, they are also networked. The cars know one another’s states and plans. They can – and should – be programmed to act in such a way that the overall outcome is the best possible: more humans saved, fewer injured. It’s just like a simulation in which a computer is given some distressing multi-car situation and has to figure out what combined set of actions would be best. But now, it’s real.
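The combined-outcome calculation described above can be sketched in a few lines. This is a toy illustration, not any real vehicle system: every maneuver name and casualty figure below is hypothetical, and the point is only that "best overall outcome" reduces to searching joint plans for the one with the fewest expected casualties.

```python
from itertools import product

def best_joint_plan(cars):
    """cars: one dict per networked car, mapping a maneuver name to the
    expected casualties if that car performs it. Returns the joint plan
    (one maneuver per car) that minimizes total expected casualties."""
    best_plan, best_cost = None, float("inf")
    for plan in product(*(car.keys() for car in cars)):
        cost = sum(car[maneuver] for car, maneuver in zip(cars, plan))
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost

# Hypothetical version of the article's scenario: your car can hit the
# abutment (one death: yours) or brake too late (endangering 20 children).
your_car = {"hit_abutment": 1, "brake_too_late": 20}
school_bus = {"continue": 0}

plan, casualties = best_joint_plan([your_car, school_bus])
# The network sacrifices your car: plan is ("hit_abutment", "continue").
```

Brute-force enumeration obviously doesn't scale to a real highway, but the moral structure is the same however the minimum is found: the objective function, not the search method, is where the values live.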

Unfortunately for you, the networked cars figured out that to save the busload of children, you had to be sacrificed.

The two scenarios represent programs embodying different moral philosophies, a topic scientists and philosophers are now beginning to notice: The MIT Technology Review cites a study about whether people are OK with their cars making such decisions. (Result: yes, so long as the respondents are not the ones sacrificed.) This summer a workshop at Stanford considered some of these questions, as did an Oxford University Rhodes Scholar, Ameen Barghi. In fact, I posed some of these questions a year ago. But this process has just begun.

Meanwhile, the problems get complex quickly.

In the first near future scenario, each self-driving car is designed to maximize the safety of its occupants. That’s all the cars can do because they don’t know what any other car is going to do.


The engineers who wrote the first scenario’s car program thought of it as a set of accident-avoidance routines. But it embodies a moral imperative: prioritize preserving the life of this car’s passengers. That’s really all that the designers of the first generation of self-driving cars can do, even though focusing only on one’s own welfare without considering the effect on others is what we would normally call immoral.

But once cars are networked, it would be immoral and irresponsible to continue to take self-preservation as the highest value. If a human acted that way, we might well sympathize, explaining it as a result of what we think of as genetic wiring. But we also admire those who put themselves at risk for the sake of others, whether they’re medical personnel flocking to Ebola sites, teachers who step in front of a gunman entering a classroom, or soldiers who throw themselves on hand grenades. We recognize their ultimate sacrifice while wondering if we would manage to do the same.

Self-driving cars will have two moral advantages over us: when networked they can see more of a situation than any individual human can, and they can be hard-wired to steel their nerves when it comes time to make the ultimate sacrifice…of their passengers.

Networked self-driving cars can in these ways overcome weaknesses in the moral decisions made by human drivers. But this will require their human programmers to make moral decisions based on values about which humans will not, and perhaps cannot, agree.

For example, perhaps the networked results show that either of two cars could be sacrificed with equal overall results. One has a twenty-five-year-old mother in it. The other has a seventy-year-old childless man in it. Do we program our cars to always prefer the life of someone young? Of a parent? Do we give extra weight to the life of a medical worker beginning a journey to an Ebola-stricken area, or a renowned violinist, or a promising scientist, or a beloved children’s author? Or should we simply say that all lives are of equal worth? That may well be the most moral decision, but it is not one we make when deciding which patients will get the next available organ for transplantation. In fact, should we program our cars so that if they have to kill someone, they should do it in the way least likely to damage their transplantable organs? Do we prefer to sacrifice the person who, by speeding or by failing to get her brakes inspected, caused the accident?
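The tie-breaking question above can be made concrete with a sketch. Nothing here is a proposal: the labels and weights are entirely hypothetical, and the only point is that "whose life counts for how much" ends up as a table of numbers some programmer had to choose.

```python
def expected_loss(victims, weights):
    """Total weighted value of the lives lost under one outcome.
    Anyone not listed in `weights` counts as 1.0 (the equal-worth default)."""
    return sum(weights.get(person, 1.0) for person in victims)

# Two outcomes the network scored as otherwise equal:
outcome_a = ["young_mother"]   # sacrifice the car with the 25-year-old mother
outcome_b = ["older_man"]      # sacrifice the car with the 70-year-old man

weights_equal = {}             # policy 1: all lives are of equal worth
weights_by_age = {             # policy 2 (contentious): weight the young parent more
    "young_mother": 2.0,
    "older_man": 1.0,
}

# Under equal weights the outcomes tie; under the age-weighted policy,
# minimizing expected loss means the network sacrifices the older man.
```

The code is trivial; the weights table is not. That table is exactly where the disagreements the article describes would be frozen into software.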


These same questions arise in every situation in which we require our machines to make decisions. An autonomous killing drone, especially when networked with other drones, can know more about a complex situation than human pilots can. But how are we going to decide what counts as an “acceptable risk” of civilian casualties? It’s entirely plausible that the moral answer is “zero,” even though that is not the answer we give when it’s a human finger on the bomb-release trigger. And that, of course, ignores the inevitability of failures in the programming, a risk that has been highlighted recently by Stephen Hawking, Elon Musk, and Bill Gates, and quite vividly by the Terminator and RoboCop series.

The behavior of programmable machines is an extension of human desires, will, and assumptions. So of course the programs themselves express moral preferences. As more of our lives are wrapped into autonomous machines, we’ll have to take the moral dimension of our programmed devices more seriously. These decisions are too important to be left to the commercial entities that are doing the programming. It’s just not clear who should be settling these difficult questions of morality.

This was originally a post on our brother site, Digital Trends.

David Weinberger writes about the effect of technology on ideas. He is the author of Small Pieces Loosely Joined and Everything Is Miscellaneous, and the co-author of The Cluetrain Manifesto. His most recent book, Too Big to Know, is about the Internet’s effect on how and what we know.

Dr. Weinberger is a senior researcher at the Berkman Center. He has been a philosophy professor, journalist, strategic marketing consultant to high tech companies, Internet entrepreneur, advisor to several presidential campaigns, and a Franklin Fellow at the US State Department. He was for four years the co-director of the Harvard Library Innovation Lab, focusing on the future of libraries.
