
Businesses Plan for Digital Age Regulation

It’s becoming increasingly clear that, despite Silicon Valley’s distaste for regulation, more regulation for the digital age will likely be required at some point. Not only that, but fairly sophisticated layers of regulation are likely in the offing, prompted by events as disparate as the political mining of Facebook data and the recent failure of a self-driving car to stop before hitting and killing an Arizona pedestrian.

Those injured by a recklessly driven vehicle may want to use the services of a car accident attorney in Houston in the aftermath of the injury.

But what would regulation in the digital age look like? How could it be developed?

More Data, More AI?

The Arizona case presents an occasion to see just how divergent the answers to these questions can be. The incident raises at least two related concerns: 1) why the car didn’t stop automatically when a pedestrian walked in front of it, and 2) why the human driver, who was meant to serve as a safety backup if the car’s systems failed, did not do so. Could regulation have helped avoid the fatality? And if so, how?

One potential answer is given by Zac Townsend, the former Chief Data Officer of California, in Slate.

His answer is that more data and more data testing could have helped and that artificial intelligence (AI) ought to be more empowered in the regulation process.

Townsend argues that AI-driven robots could test autonomous cars against the many events they are likely to encounter and could be developed to correct any problems found.

He feels a problem with current regulation is that it’s insufficiently data-driven. Currently, government regulators consider a rule, discuss it, solicit expert opinion, and open the proposed rule to public comment. But AI is fast enough, and sophisticated enough, that it could open the door to much more data.

A problem with this argument, though, is that real-world driving has to occur at some point. There is no guarantee that robotic testing could have covered every eventuality. And there’s no guarantee that every eventuality testing did cover would exactly replicate real-world conditions, either.

Does more AI testing need to be done, or more human testing?

Humans All the Time?

An article in Nature embraces a view somewhat the opposite of Townsend’s. To quote the authors: “Some form of human intervention will always be required” in autonomous vehicles.

They believe that laws regulating self-driving cars should be somewhat like those regulating planes. Planes are largely flown via technology, but no one ever thinks of doing away with the pilot entirely, or of letting AI alone govern a flight.

If one uses the Arizona autonomous vehicle accident as a template for thinking about regulation of digital technology, Nature’s argument seems like the better path. While the results of the investigation into the accident are still pending, it’s clear that the vehicle neither stopped nor swerved.

But the human driver may be somewhat at fault as well. Authorities in the wake of the accident seemed to suggest that the human driver may face charges. Footage from an interior dashcam shows that the driver was not looking at the road until immediately before the accident.

Certainly, it’s possible that even a more alert human driver could not have prevented the accident. But the inattentiveness suggests that the driver was relying on the technology alone to get it right.

So how will regulation be developed in the digital age? It’s possible that more data does need to be added to the existing method. But more humans, and more human discussion, need to be added as well.