Legal Infrastructure For Driverless Cars, And Comparisons Between The Law And Ethics Of Self-Driving Cars And Autonomous Weapon Systems
CIS Lecturer Bryant Walker Smith is quoted by The Volokh Conspiracy's Kenneth Anderson on the speed at which driverless cars are developing and how these cars will be compatible with existing technologies.
Driverless cars are coming faster than most observers would have thought. One big reason, according to Bryant Walker Smith in a recent article in Slate, is that people predicting the driverless car future assumed the cars would have to be part of centrally run systems, with corresponding changes to physical infrastructure, such as special roads embedded with magnets. Or for that matter, we can add, centralized computers to take control of all the vehicles in the system. The changeover would have to be centralized and take place for a given area all at once; it wouldn't scale incrementally. That was the thought, anyway, and Smith (who is a fellow at Stanford's Center for the Internet and Society) says that as a consequence, ever "since the 1930s, self-driving cars have been just 20 years away."
Smith’s real point, however, is to go on from physical infrastructure to include the rules of the road. Infrastructure also includes, he says,
laws that govern motor vehicles: driver licensing requirements, rules of the road, and principles of product liability, to name but a few. One major question remains, though. Will tomorrow’s cars and trucks have to adapt to today’s legal infrastructure, or will that infrastructure adapt to them?
Smith takes up the most basic of these questions – are self-driving vehicles legal in the US? They probably can be, he says – and he should know, as the author of a Stanford CIS White Paper that is the leading analysis of the topic. Self-driving vehicles
must have drivers, and drivers must be able to control their vehicles—these are international requirements that date back to 1926, when horses and cattle were far more likely to be “driverless” than cars. Regardless, these rules, and many others that assume a human presence, do not necessarily prohibit vehicles from steering, braking, and accelerating by themselves. Indeed, three states—Nevada, Florida, and most recently California—have passed laws to make that conclusion explicit, at least to a point.
Still unclear, even with these early adopters, is the precise responsibility of the human user, assuming one exists. Must the “drivers” remain vigilant, their hands on the wheel and their eyes on the road? If not, what are they allowed to do inside, or outside, the vehicle? Under Nevada law, the person who tells a self-driving vehicle to drive becomes its driver. Unlike the driver of an ordinary vehicle, that person may send text messages. However, they may not “drive” drunk—even if sitting in a bar while the car is self-parking. Broadening the practical and economic appeal of self-driving vehicles may require releasing their human users from many of the current legal duties of driving.
For now, however, the appropriate role of a self-driving vehicle’s human operator is not merely a legal question; it is also a technical one. At least at normal speeds, early generations of such vehicles are likely to be joint human-computer systems; the computer may be able to direct the vehicle on certain kinds of roads in certain kinds of traffic and weather, but its human partner may need to be ready to take over in some situations, such as unexpected road works. A great deal of research will be done on how these transitions should be managed. Consider, for example, how much time you would need to stop reading this article, look up at the road, figure out where you are and resume steering and braking. And consider how far your car would travel in that time. (Note: Do not attempt this while driving your own car.)
Technical questions like this mean it will be a while before your children are delivered to school by taxis automatically dispatched and driven by computers, or your latest online purchases arrive in a driverless delivery truck. That also means we have time to figure out some of the truly futuristic legal questions: How do you ticket a robot? Who should pay? And can it play (or drive) by different rules of the road?
In any case, the lesson of self-driving cars is that these technologies advance incrementally, and the proper regulatory response is to regulate them incrementally. Returning to Smith’s Slate article and White Paper: today, he says, that means allowing someone to text while the car is under automated control, but not to be drunk in the car, because one still might have to take the wheel. But tomorrow the technology might be much improved, and the tradeoff between the risk of a drunk person driving and the risks of machine control might come out very differently. Regulation has to take into account the technological state of automation; it is a matter of degree, not an on-off switch between autonomous and not. The same will be true in the case of autonomous weapon systems.