Technology advances at breakneck speed. That’s exciting for early adopters, who can’t wait to get their hands on the latest piece of tech. For others, the rapid onslaught of technology is frustrating. But there are bigger issues that need our attention.

Economic pressures often move new technologies into the consumer space before people get a chance–or make the effort–to weigh the pros and cons. Time and again, society addresses the ethics of a new technology and makes new rules only after it’s in place and problems have emerged. There are examples in the news every day, like facial-recognition systems, gene editing, biobanking and data harvesting via social media. But we want to focus here on the problem of self-driving vehicles.

Artificial intelligence and advanced sensors are making self-driving vehicles a reality. There could be benefits. Self-driving vehicles would free up time for work, texting and talking on the phone. They could be safer if the technology is robust. But there may be downsides as well. For example, according to the American Trucking Associations, there are over 3.5 million truck drivers, and they stand to lose their jobs when self-driving trucks appear.

Self-driving vehicles are on the road now being field-tested, doing work and–sometimes–having rocks thrown at them! In December, police in Chandler, Arizona, reported 21 cases of adults throwing rocks, slashing tires and even pointing guns at self-driving cars. Citizens were angered that the company Waymo was testing cars in their neighborhoods, potentially putting them at risk, and developing machines that could replace them.

But beyond economics and emotions, there’s a centrally important moral question with self-driving vehicles: Whom will they be programmed to protect? Two people have already been killed by self-driving cars during road testing, and there will certainly be more fatalities. Even if we assume self-driving vehicles will be more predictable and reliable than humans, that predictability makes them seem, well, insensitively cold. In circumstances where an accident is unavoidable, the computer has to “choose” between putting its passengers at risk and risking other drivers or even pedestrians. And by “choose” we mean calculate. So how do programmers decide who becomes a casualty?
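
To see why this is a programming question and not only a philosophical one, consider a deliberately simplified sketch, written here in Python. Nothing below reflects how Waymo or any other manufacturer actually builds its software; the maneuvers, risk estimates and weighting scheme are invented purely for illustration. The point is only that once possible outcomes are scored, “whom to protect” collapses into numbers that someone must choose in advance.

# A toy illustration only: the maneuvers, risk numbers and weights are invented,
# and no real vehicle is programmed this way.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    passenger_risk: float   # estimated chance of serious harm to the car's occupants
    bystander_risk: float   # estimated chance of serious harm to people outside the car


def choose_maneuver(options, passenger_weight=1.0, bystander_weight=1.0):
    """Return the maneuver with the lowest weighted expected harm.

    The weights are where the ethics hide: they encode whose risk counts
    for how much, and someone has to set them before the car ever moves.
    """
    return min(
        options,
        key=lambda m: passenger_weight * m.passenger_risk
        + bystander_weight * m.bystander_risk,
    )


options = [
    Maneuver("brake hard and stay in lane", passenger_risk=0.2, bystander_risk=0.5),
    Maneuver("swerve into the barrier", passenger_risk=0.6, bystander_risk=0.0),
]

# Weigh everyone equally and the car sacrifices its passenger to spare the crowd.
print(choose_maneuver(options).name)                         # swerve into the barrier

# Triple the passenger's weight and the "decision" flips.
print(choose_maneuver(options, passenger_weight=3.0).name)   # brake hard and stay in lane

Change a single weight and the car’s behavior in the very same situation flips, which is exactly why the people setting those numbers, and the values behind them, deserve public scrutiny.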

The ethical dilemma of self-driving cars represents what philosophers know as a Trolley Problem. These problems have endless variation, but the gist is something like this: Imagine a trolley carrying five people on a track heading toward a gorge, but the bridge is out. There is a switch that can redirect the trolley safely onto a second track. Unfortunately, a person is tied to the second track. Pulling the lever to switch the tracks will save five people from certain death, but kill the person tied to the second track.

What would you do? These trolley cases are problems because they set up conditions that force an agent to choose between what seem like two bad options. Either the agent allows several people to die (which seems immoral), or the agent intentionally causes someone to die (which seems differently but equally immoral).

This thought experiment is powerful because of its flexibility. If you tweak the problem a little, the answers change. For example, people are less likely to switch the trolley if you say the person tied to the track is young and vibrant, whereas the five on the trolley are very old and terminally ill. Or if you say the person on the track is a close relative, people are much more reluctant to pull the lever.

The technology of self-driving vehicles shifts the Trolley Problem from the abstract to the eerily real. How should a self-driving vehicle respond in a situation where rapidly swerving to avoid a crowd would save many lives, but kill the passenger?

Writing in Science in 2016, psychologist Joshua Greene discusses exactly this tension, which he calls “our driverless dilemma.”

So how will self-driving vehicles be programmed to handle accidents? Who decides how they are programmed? Is it ethical for a company to offer two versions of the software – say, a gold package that saves the most lives, or a platinum package that saves the passenger? That is a moral dilemma for both the manufacturer and the purchaser.

Some will argue the Trolley Problem is moot because self-driving vehicles could communicate with one another and avoid no-win situations. But for that to work, we have to share personal data about where we are, when we travel, and where we are going. Advancing technologies like self-driving vehicles and DNA testing set up unexpected trade-offs between public safety and privacy rights. These issues are complex, but also rich in their potential to force us to reflect on and define our social values.

Ethics and moral philosophy provide ways to navigate the murky waters churned by advancing technology. And many companies and organizations do look to ethicists for answers to these questions. But our firm belief is that the public needs to participate in the discussion.

We have our say when we elect politicians who legislate public policy, and when we purchase or do not purchase products with new technologies. But we need to be more proactive, thinking about and weighing in on the ethics of new technologies before they hit showroom floors. We need to be engaged stakeholders in a technology-driven society, and not just consumers awaiting the next version of a phone.

As a society, we need to cultivate ethical literacy and be proactive in deciding how technologies are implemented–before they run us over.

Stephen M. Kuebler is an associate professor of chemistry and optics in the University of Central Florida’s Department of Chemistry and the College of Optics and Photonics. He can be reached at Stephen.Kuebler@ucf.edu.

Jonathan Beever is an assistant professor of ethics and digital culture in the University of Central Florida’s Department of Philosophy and the Texts & Technology doctoral program. He can be reached at Jonathan.Beever@ucf.edu.