Fragile regulatory processes may prove as much cause for concern as fragile software systems in autonomous vehicle development, two Duke professors argued at a recent conference on the future of artificial intelligence. As Americans hear more about the coming artificial intelligence revolution, transportation is the field that seems most prone to rapid change.
The half-day conference for congressional and agency staff on May 31 focused on policy considerations for human-A.I. collaboration in transportation.
Mary "Missy" Cummings, professor in the Department of Electrical and Computer Engineering and director of the Humans and Autonomy Lab (HAL) at Duke, led the program with a quick presentation on the A.I. and transportation landscape and then joined back in for a panel afterward.
As one of the U.S. Navy’s first female fighter pilots and one of the world’s leading researchers in artificial intelligence and human-autonomous system collaboration, Cummings is well-positioned to lead a discussion on the safety, military and civilian uses of new technologies.
The difference between autonomy and automation, according to Cummings, is that autonomy involves probabilistic reasoning. Automation operates within a fixed set of parameters and rules and assumes its situation does not change; an autonomous system faces changing situations and must make decisions for itself.
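The distinction can be sketched in a few lines of code. This is a toy illustration, not anything presented at the conference: the function names, thresholds and "sensor" values are all made up. An automated controller applies a fixed rule to fixed parameters, while an autonomous one reasons over an uncertain, changing estimate of the world.

```python
def automated_controller(speed_kmh):
    """Automation: a fixed rule over fixed parameters.
    The same input always yields the same action."""
    SPEED_LIMIT = 50  # fixed parameter; the rule never changes
    return "brake" if speed_kmh > SPEED_LIMIT else "maintain"

def autonomous_controller(sensor_readings):
    """Autonomy (toy version): probabilistic reasoning over noisy,
    changing input. The system forms a belief about whether an
    obstacle is present and decides for itself under uncertainty."""
    # Fuse noisy readings into a belief (a simple average here).
    p_obstacle = sum(sensor_readings) / len(sensor_readings)
    if p_obstacle > 0.8:
        return "brake"
    elif p_obstacle > 0.4:
        return "slow"
    return "maintain"

print(automated_controller(60))                  # "brake"
print(autonomous_controller([0.9, 0.85, 0.95]))  # "brake": strong belief in an obstacle
print(autonomous_controller([0.1, 0.2, 0.05]))   # "maintain": weak belief
```

The point of the contrast is that the automated controller's behavior is fully enumerable in advance, while the autonomous controller's behavior depends on a belief it computes at runtime, which is what makes such systems harder to test and certify.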
Even makers of autonomous vehicles such as Waymo (Google) and Tesla, though more advanced than their competitors, have been unable to make the leap to full autonomy. They remain stuck in the automation stage, Cummings said.
Miroslav Pajic, Nortel Networks Assistant Professor in the Pratt School of Engineering, joined Cummings in presenting at the conference. Pajic’s research focuses on the design and analysis of cyber-physical systems.
Autonomous vehicles have a variety of weaknesses, according to Pajic and Cummings. Passive hacking, such as strategically placing a few physical stickers on a "STOP" sign, can cause an autonomous vehicle to 'see' a speed limit sign instead, making an accident more likely. Autonomous vehicles also routinely mistake trees for roads and fences for people.
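Why a few stickers can flip a classification is easiest to see in a deliberately simplified model. The sketch below is not any real perception system: it uses a toy linear classifier with made-up weights and features, standing in for the deep networks real vehicles use. A small, targeted nudge to the input, the software equivalent of the stickers, is enough to push the score across the decision boundary.

```python
# Toy linear "sign classifier": score > 0 means "stop sign",
# score <= 0 means "speed limit sign". All numbers are illustrative.
WEIGHTS = [0.9, -0.5, 0.3, 0.7]

def classify(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return "stop sign" if score > 0 else "speed limit sign"

clean = [1.0, 0.2, 0.5, 0.1]  # the sign as the camera normally sees it

# A small, targeted perturbation (the "stickers"): nudge each feature
# slightly in whichever direction lowers the score.
EPS = 0.5
stickered = [x - EPS * (1 if w > 0 else -1)
             for x, w in zip(clean, WEIGHTS)]

print(classify(clean))      # "stop sign"
print(classify(stickered))  # "speed limit sign"
```

The perturbation is small per feature, yet because it is aligned against the model's weights rather than random, its effects accumulate, which is why such attacks can succeed while looking like harmless graffiti to a human.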
When asked what kind of artificially intelligent vehicles one might see in the near future, Cummings offered two examples: airport shuttles and platooning trucks. Any vehicle that operates in a limited set of situations under strictly defined rules will likely see artificially intelligent software applications.
Cummings also noted the potential for autonomous airline co-pilots. The world faces a serious dearth of commercial aviation pilots at the moment, she said, and there has been promising research and prototyping on robotic autopilot controls.
Research and development is also shaped by the regulatory environment. Cummings explained the differences in how the Food and Drug Administration (FDA), the Federal Aviation Administration (FAA) and the National Highway Traffic Safety Administration (NHTSA) interact with manufacturers. The FAA traditionally connects with aeronautics companies at the earliest point of design, whereas NHTSA, which theoretically governs the world of ground-based transportation, involves itself well after a product has been developed, tested and deployed.
In a post-event interview, Pajic noted the need “to be more proactive in addressing safety and security concerns in modern vehicles compared to the more reactive approach that the government is doing today.”
Cummings added, "We need to review how regulatory agencies look at equivalence [the fast-track process of approving new vehicles] when they approve technologies going to market, because autonomous vehicle technologies really have no equivalent."