Honk if you're not driving

UofSC law professor discusses emerging technologies and the law



As an internationally recognized expert on the law of self-driving vehicles, Bryant Walker Smith is frequently asked to weigh in on legal issues related to automated driving. But the UofSC law professor’s expertise isn’t limited to cars and the people not driving them. His insights into tort law and product liability, and his broader interest in what he terms “the law of the newly possible,” are helping prepare UofSC law students for an evolving legal landscape.

Broadly speaking, what areas of law and technology are on your radar today?

Everyone is thinking about data and automation, and how the two combine. Specifically, I’m thinking about questions of trust and trustworthiness — earning trust and evaluating trustworthiness. I used to think that simply providing information was sufficient. That is, if a government agency released data, or a court released a judgment, or a company released information about the performance of a product, that was the extent of the obligation, and that in itself was sufficient for whatever public policy goals exist. I’ve realized that is not enough. Everybody needs to think about the way that information is used and received — and we need to understand that information as part of a narrative.

What specific challenges does your interdisciplinary approach present?

There’s a field of engineering called human factors that is concerned with how humans actually use the things that engineers design. So you’ve made a toaster, but are people going to electrocute themselves with it? You’ve made a road, but are people going to feel so comfortable on it that they drive too fast? For years, human factors experts have warned us, “You need to see the human in the system, you need to think about how what you do is going to work in the real world…”

So much of the dialogue between engineering and law, between technology and society, is incremental. We don’t have a chance to step back and say, “Well, what should be?” rather than just, “What is?” It’s useful to set out benchmarks, to say, “This is what we want the world to look like,” and then to evaluate those benchmarks in a year, five years, ten years, and say either, “Our vision of the world has changed, and that’s OK,” or, “We’ve gone astray.”

The human factor is a pretty big variable, though.

It is, and it’s one that tort law deals with all the time — think about consumer misuse and abuse, warnings and instructions — but human factors have not been fully appreciated in either the engineering or the judicial realms. I think we’re coming to a greater appreciation for the point these specialists have been making for a half century — that the same expertise needs to be applied in the domain of data and information.

But humans are not all the same. How do you adjust for such a squishy variable?

Everyone from designers to regulators is puzzling over how to treat non-deterministic systems, when you have machines where you’re not sure what the outputs will be, even when you know the input. With most machines, if you know the inputs, you know the outputs. Press on the gas, and you know how fast you’re going to go. Increasingly complex systems are effectively non-deterministic: there are so many inputs that you can’t understand them, or the interaction between inputs within the system can’t be fully understood. So you say, “How do we possibly regulate these non-deterministic systems?” Well, the law has been doing that for millennia. The human is the ultimate non-deterministic system — and that may actually offer a useful analogy for how we regulate these new, complex non-human systems. This is where law and engineering so desperately need the insights of psychology and neurology and social science. This is one reason why the humanities are so important. We should really be pulling from those domains that don’t have perfect answers but probably have better answers.

You’re well known for your work relating to law and self-driving cars. We’ve already started down that road — we have cars that help us park, for example. At what point do we even call them self-driving?

Thank you for that question. People talk about the self-driving car as a technology, singular, but, really, it’s a diverse set of technologies, applications of those technologies and business cases for those applications. When I dove deep into automated driving in 2011, I joined the Society of Automotive Engineers (SAE) and became very involved in their efforts to define levels of automation — six years later, those efforts still continue. But as a technical matter, not a legal matter, once you can take your eyes off the road, we say the system is highly automated. People imagine our cars driving us anywhere, everywhere, while we’re asleep in the back. That’s fully automated driving, and that’s a ways off, but there is a lot of lower-hanging fruit.

There are three paths toward this ideal of fully automated driving. The first is increasing driver assistance. Those systems will do more and more of the real-time driving task. The human can take a hand off the wheel, then their feet off the pedals, then both hands off the wheel — and then maybe they can take their eyes off for a little bit. That takes us to the mushy middle of automation that engineers and lawyers have to deal with — imperfect people misusing these systems in all kinds of ways. The second path is increasing safety systems, introducing intervention systems that won’t act until a crash is imminent. These are going to get more sophisticated and start intervening earlier, so people might still think they’re driving, but actually they won’t be, because these systems will be intervening so often. The third path is truly driverless systems that operate under limited conditions in certain communities and then expand their operating domain over time.

What fascinates you about the law as it relates to emerging technologies?

It’s an exciting way to examine all the ways that life has changed and is changing, and to be a part of that excitement. Think about all of the innovation, for good and bad, and all of the people who have had the opportunity to shepherd or navigate those changes. That’s a privilege. I’m both intellectually fascinated by how that happens, and also, practically, as a lawyer and an engineer, very interested in being part of that process. Also, I’ve always been interested in complex relationships and understanding systems, recognizing that everything is ultimately part of one system, even though we draw artificial boundaries. New technologies give me a chance to see those relationships, the way that new ones emerge, or are strengthened, or are challenged.