AI systems for safety-critical tasks

David Perez

2018-06-08 09:30:00
Reading Time: 2 minutes

After accomplishing difficult tasks like translation, playing Go or, amazingly, calling a hair salon for an appointment, the artificial intelligence industry is ready to take on safety-critical tasks.

For instance, in car autopilot systems, progress is sound, but the path and pace towards implementation are not yet clearly established. General Motors, Audi and others are reportedly delaying their systems, while accidents on Tesla and Uber platforms do not seem to impact their deployment. Drive.ai will start operations in July and Andrew Ng has announced that driverless cars are coming, to name a few data points.

Some of the challenges seem to be related to systems that are not fully automated. Although humans have proven to have difficulty monitoring partially automated systems, some of the tools provided are only “safe” when humans constantly supervise them. Consumer groups are complaining about how some autopilot features are marketed, and even Tesla is studying how to implement systems that monitor how humans actually interact with the autopilot, which seems a bit odd and somewhat defeats the purpose of autopilot features. Additionally, the systems are not very good at enforcing human monitoring: in one accident investigation in 2017, federal safety investigators revealed that the driver had put his hands on the wheel for a total of 25 seconds during the 37 minutes in which the autopilot was engaged. Although the autopilot was on and sent the driver 13 warnings to keep his hands on the wheel, he ignored them and the autopilot never deactivated.

Of course, as one of the first wide-impact safety-critical applications, automobile autopilot programs face many challenges that still need to be solved. In some ways, these visionaries deserve applause for continuing to pursue applicable artificial intelligence despite its challenges. The industry is making a great effort in exploring ways for adoption and in using vast amounts of data to analyse the effectiveness of its tools. It continues to explore new paradigms of human involvement in automated systems, as well as new liability models.

The aviation community needs to start considering a roadmap for introducing AI into certain critical functions, as well as performance assessment requirements for tools that assist safety functions, such as TCT or STCA. Currently, certain air traffic control tools operate without any performance monitoring and are known to be frequently inaccurate. For those tools to evolve in the world of AI, strong accountability needs to be put in place through analytic frameworks that track how often said tools perform well and when they should be trusted.
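As a rough illustration of what such an analytic framework could track, the sketch below computes a hit rate and a false-alert rate for an STCA-like alerting tool from labelled encounter records. The record fields, thresholds and example values are hypothetical, not part of any real STCA implementation; a real framework would work from recorded radar tracks and validated conflict labels.

```python
# Minimal sketch (hypothetical data model): summarising how often an
# STCA-like alerting tool performs well, so its output can be trusted
# on evidence rather than on controller intuition alone.
from dataclasses import dataclass


@dataclass
class AlertRecord:
    alert_raised: bool       # did the tool raise an alert for this encounter?
    conflict_occurred: bool  # did a genuine separation risk actually materialise?


def alert_performance(records: list[AlertRecord]) -> dict:
    """Return hit rate and false-alert rate over a batch of labelled encounters."""
    hits = sum(r.alert_raised and r.conflict_occurred for r in records)
    false_alerts = sum(r.alert_raised and not r.conflict_occurred for r in records)
    missed = sum(not r.alert_raised and r.conflict_occurred for r in records)
    raised = hits + false_alerts
    actual = hits + missed
    return {
        # fraction of real conflicts that the tool alerted on
        "hit_rate": hits / actual if actual else None,
        # fraction of raised alerts that turned out to be nuisance alerts
        "false_alert_rate": false_alerts / raised if raised else None,
        "alerts_raised": raised,
        "conflicts_observed": actual,
    }


# Example batch of encounters (values are made up for illustration)
history = [
    AlertRecord(True, True),
    AlertRecord(True, False),
    AlertRecord(False, False),
    AlertRecord(True, True),
    AlertRecord(False, True),
]
print(alert_performance(history))
```

Even a simple running tally like this, updated continuously, would give controllers and regulators a concrete basis for deciding when an alerting tool should be trusted.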

Progress in clarifying liability is also needed. Currently, for instance, the “operational use of STCA will depend on the controller’s trust in the system”, which leaves controllers unsupported in their decisions on whether to use or trust said tools. While this ambiguity has in the past helped introduce tools of unclear reliability and accuracy, the age of data will be unforgiving to such adoption approaches.

© datascience.aero