Can Self-Driving Cars Make It in the Real World?

For some people, it’s the future: a car you can climb into, tell where you want to go, and trust to handle the rest. In fact, Google has invested a lot of time and money into building that car of the future: the video above lays out how Google is designing it, what technology is going into it, and what they’ve learned from having robo-cars drive 190,000 miles.

But can these self-driving cars ever actually work on our roads? To answer that, we need to answer a few more questions.

Question: If The Car Makes A Mistake, Are You At Fault?

Answer: First and foremost, if the car is the one driving, are you responsible? To some degree, yes. It’s your property, after all, and you’re responsible for what it does even when you’re not doing anything. It’s the same principle as when a friend wrecks your car: the claim still comes out of your insurance.

The legal arguments, though, turn on the degree to which you’re responsible. For example, if you can prove that the car’s systems failed, does your auto insurance pay out? Does the manufacturer? Does the programmer who designed the system? It’s a tricky issue.

Question: If Police Officers Want To Pull You Over, How Do They Do It?

Answer: Here’s the tricky thing about pulling a self-driving car over: do you do it, or does the officer do it for you?

In theory, it’s a simple thing to install a device that, say, lets an officer of the law send a code to a robotic car and have it pull over. In practice, however, that’s inviting abuse, and worse: malicious hackers could easily figure out a way to keep your robot car off the road entirely by, for example, convincing it that it was being pulled over every mile or so. And that’s assuming a corrupt officer doesn’t decide to start handing out tickets that you can’t avoid.
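To see why the naive version invites abuse, consider what even a slightly less naive one has to do. Here’s a minimal sketch in Python, assuming a single pre-shared key and made-up message fields (none of this reflects any real system): the car only obeys a pull-over command that is authentic, fresh, and never seen before, which is exactly what defeats the record-and-replay trick of pulling you over every mile.

```python
import hmac
import hashlib
import secrets
import time

# Hypothetical pre-shared key between one patrol car and one robot car.
# A real deployment would need per-officer credentials, key distribution,
# and revocation -- this only sketches the core idea.
SHARED_KEY = secrets.token_bytes(32)

seen_nonces = set()  # the car remembers nonces so recorded commands can't be replayed


def sign_pullover(key: bytes) -> dict:
    """Officer side: issue a pull-over command with a random nonce and a timestamp."""
    msg = {"cmd": "PULL_OVER", "nonce": secrets.token_hex(16), "ts": time.time()}
    payload = f"{msg['cmd']}|{msg['nonce']}|{msg['ts']}".encode()
    msg["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return msg


def car_accepts(msg: dict, key: bytes, max_age: float = 30.0) -> bool:
    """Car side: obey only authentic, fresh, never-before-seen commands."""
    payload = f"{msg['cmd']}|{msg['nonce']}|{msg['ts']}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg.get("mac", "")):
        return False  # forged: without the key, a hacker can't fake the MAC
    if time.time() - msg["ts"] > max_age:
        return False  # stale: a command recorded last week is useless
    if msg["nonce"] in seen_nonces:
        return False  # replayed: the "pull over every mile" attack dies here
    seen_nonces.add(msg["nonce"])
    return True


cmd = sign_pullover(SHARED_KEY)
print(car_accepts(cmd, SHARED_KEY))  # True: legitimate, fresh command
print(car_accepts(cmd, SHARED_KEY))  # False: replaying the same command is rejected
```

Note what this does and doesn’t fix: it stops random hackers and replay attacks, but a corrupt officer still holds a valid key, so the unavoidable-ticket scenario survives any amount of cryptography.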

There’s also the civil liberties question: you’re ceding a certain amount of control to the police, and not everybody is comfortable doing that. It might not even be constitutional: it could qualify as an unreasonable search and seizure under the Fourth Amendment.

Question: How Do I Defend Myself Against Somebody Being an Idiot?

Answer: This is arguably the biggest wild card of all: how, precisely, robotic cars will deal with human drivers who, well, are enormous jerks. This alone might keep self-driving cars off the market for a long, long time.

The old joke about computers is absolutely true: they do precisely what you tell them to do. But how will they deal with people who tailgate, cut them off, take a left turn out of a right turn lane, and otherwise make driving such a misery that some of us would rather just hand the chore off to robots?

The big problem is that in order to avoid a wreck, a lot of us sometimes need to break the law: swing into the breakdown lane, brake suddenly on the highway, swerve into the next lane, and so on. That’s precisely the kind of behavior a robotic car probably won’t be allowed to perform.
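To see how awkward that is to encode, here’s a deliberately toy sketch in Python (hypothetical maneuvers, invented risk numbers, nothing like a real planner): if traffic law is a hard constraint, the planner simply filters the safest option out of existence.

```python
# Hypothetical maneuvers for a car about to be rear-ended by a truck.
# The risk numbers are invented purely for illustration.
MANEUVERS = [
    # (name, is_legal, collision_risk from 0.0 to 1.0)
    ("hold lane, brake normally",  True,  0.90),  # truck can't stop in time
    ("brake hard",                 True,  0.60),
    ("swerve into breakdown lane", False, 0.05),  # safest option is illegal
]


def pick_maneuver(maneuvers, laws_are_hard_constraints: bool):
    """Pick the lowest-risk maneuver, optionally discarding illegal ones first."""
    if laws_are_hard_constraints:
        maneuvers = [m for m in maneuvers if m[1]]  # illegal moves never considered
    return min(maneuvers, key=lambda m: m[2])


print(pick_maneuver(MANEUVERS, laws_are_hard_constraints=True))
# ('brake hard', True, 0.6): the law-abiding car accepts a 60% chance of a wreck
print(pick_maneuver(MANEUVERS, laws_are_hard_constraints=False))
# ('swerve into breakdown lane', False, 0.05): breaking the law is twelve times safer
```

The code is trivial; the question of who gets to flip that flag, and who pays when it’s flipped, is not.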

In other words, can a robot break the rules to save your neck? And if it does so, does that mean the owner is liable for any laws the robot may break?

Nobody knows: there hasn’t yet been a legal case that establishes it. One thing we do know is that getting robots to improvise is shockingly hard: ironically, they’re not that good at thinking flexibly. And since robots can negotiate with other robots far more predictably than with unpredictable humans, it might be an all-or-nothing proposition: everybody gets a robotic car, or nobody does.

For now, it’s unlikely you’ll see self-driving cars outside of controlled environments. In suburban tract developments, for example, with their low speed limits and clearly laid-out roads, robotic cars might work as a form of public transit: call a car and it’ll take you to another part of the development, or to a specific point like a grocery store or a bus station. But the days of climbing in, naming your destination, and opening a book are a long way off, at least until the lawyers hash everything out.

If you need car insurance, check out SafeAuto.com.
