While most aviation guidelines on applicability reference some kind of “human-centricity” or human interaction, these guidelines lack specific references to techniques that would actually implement that human-machine relationship. These potential systems aim to support and assist humans (e.g. air traffic controllers) in making decisions alongside current paradigms until said systems learn enough to become autonomous. At that point, most guidelines require these systems to be “explainable” in a way that allows humans to supervise their operation, manage the risk of the machines making incorrect decisions, and take over (sometimes called “exception management”) if the solution fails. If these systems are black boxes and difficult to understand, humans might be incapable of following parts of the decision-making process, as famously happened when DeepMind’s AlphaGo defeated Go master Lee Sedol with moves that even its creators could not fully explain.
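As one concrete, hedged illustration of what “explainable” could mean in practice, the sketch below uses permutation feature importance to show a human supervisor which inputs a black-box model actually leans on. The feature names and the intervene/don’t-intervene framing are invented for the example and are not drawn from any aviation guideline.

```python
# Minimal sketch: quantifying which inputs drive a black-box model's decisions,
# one simple way to give human supervisors a handle on "explainability".
# All feature names here are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # e.g. speed deviation, altitude error, heading change
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "intervene / don't intervene" label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["speed_dev", "alt_err", "hdg_change"], result.importances_mean):
    print(f"{name}: {score:.3f}")                # higher score = the model leans on this input more
```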
One of the key challenges with several of these roadmaps is the difficulty of building “a robust dataset”. This is the ETL (extract, transform, load) process: not only cleaning and preparing the data, but also labelling data samples for supervised machine learning tasks, which can be extremely expensive. Collecting and labelling a good dataset is always an enormous task; for instance, Andrew Ng reminds us that training a “smart speaker” might require 5 full years of annotated audio data, which is entirely out of scope for most companies. In our experience, correctly training a predictor to forecast unstable aircraft approaches might require 4 years of Flight Data Monitoring (FDM) data, given the rarity of the event. Training this model would require labelling all past unstable approaches according to each airline’s criteria, which would consume significant resources, and that is even with cluster computing to parallelise the data-processing pipelines over huge volumes of data.
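To make the labelling effort concrete, here is a hedged sketch of rule-based labelling for unstable approaches in FDM data. The thresholds and column names are hypothetical stand-ins for an airline’s actual stabilised-approach criteria, not an established standard.

```python
# Hedged sketch of rule-based labelling of "unstable approach" events in FDM data.
# Thresholds and column names are hypothetical; real criteria are airline-specific.
import pandas as pd

def label_unstable(df: pd.DataFrame) -> pd.Series:
    """Flag samples that violate any stabilisation criterion below 1000 ft AGL."""
    below_gate = df["altitude_agl_ft"] < 1000
    too_fast   = df["airspeed_kt"] > df["vref_kt"] + 20   # speed excursion
    high_sink  = df["vertical_speed_fpm"] < -1000         # excessive sink rate
    off_path   = df["glideslope_dev_dots"].abs() > 1.0    # glideslope deviation
    return below_gate & (too_fast | high_sink | off_path)

# flights = pd.read_parquet("fdm_sample.parquet")  # hypothetical FDM export
# flights["unstable"] = label_unstable(flights)
```

Even a simple rule set like this has to be run over years of flight records, which is where the parallelised data-processing pipelines mentioned above come in.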
In the case of machine learning methods taking over from human operators (e.g. automation of driving or of air traffic control), collecting and labelling a dataset for training such an automated system comes with additional difficulties, since high-level behaviours might not be immediately translatable to actuator commands. Interpreting a scenario and then understanding how a human reacts (or not), and why, will be one of the biggest challenges in the introduction of AI in ATC.
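A first step is simply recording what the human did in each situation. The sketch below shows one hypothetical way to log (observation, action) pairs; the fields are invented, and a real ATC recording would carry far richer state (radar tracks, flight plans, clearances).

```python
# Hedged sketch: logging expert demonstrations as (observation, action) pairs.
# All field names are hypothetical illustrations.
from dataclasses import dataclass
from typing import List

@dataclass
class Demonstration:
    observation: List[float]   # encoded traffic situation at decision time
    action: int                # index of the high-level instruction issued
    timestamp: float

log: List[Demonstration] = []

def record(observation: List[float], action: int, timestamp: float) -> None:
    """Append one expert decision. Note the action is high-level (e.g. "descend
    FL240") and still needs a separate mapping to low-level commands."""
    log.append(Demonstration(observation, action, timestamp))
```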
Collecting expert demonstrations is, however, doable through an approach called imitation learning (IL). Tesla famously uses IL, as Andrej Karpathy, Director of Artificial Intelligence and Autopilot Vision at Tesla, explains in this talk. While we may not be able to duplicate this exactly, an IL-based approach deploys a massive data-collection infrastructure that can automatically label certain scenarios of interest. Starting with the most critical scenarios makes sense, and Tesla started with scenarios such as cut-ins. By focusing on concrete scenarios, an automated infrastructure can label the moments the predictive model flags and test afterwards whether the labelling was correct. Additionally, we can collect data on how humans actually react in those scenarios. By deploying such an automated system, capable of comparing human reactions with the automatically generated labels, we can better understand whether the system correctly predicts a scenario.
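In its simplest form, IL via behavioural cloning is just supervised learning on (state, expert action) pairs like those logged above. The sketch below uses synthetic data and an invented stand-in expert policy; it illustrates the technique, not Tesla’s actual system.

```python
# Minimal behavioural-cloning sketch: treat imitation learning as supervised
# learning from (state, expert_action) pairs. Data is synthetic; in practice
# states would encode the scenario of interest (e.g. a cut-in).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
states = rng.normal(size=(2000, 8))               # encoded scenario features
expert_actions = (states[:, 0] > 0).astype(int)   # stand-in for the expert policy

policy = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
policy.fit(states, expert_actions)                # clone the expert's mapping

new_state = rng.normal(size=(1, 8))
print(policy.predict(new_state))                  # the imitated action
```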
IL will be key to ATC automation. Starting with similarly simple scenarios (e.g. ATC in en-route sectors), we will be able to automatically label situations where human operators are expected to issue an ATC instruction. A shadow-mode system can then test whether the model predicts both the situation and the reaction the ATCO would provide, compare the prediction with the controller’s actual input, and collect data in the process.
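A shadow-mode loop could look like the hedged sketch below: the model runs alongside the ATCO, never acting, only comparing. The `policy` object builds on the behavioural-cloning sketch above, and the function names are assumptions for illustration.

```python
# Hedged shadow-mode sketch: predict, compare with the controller's actual
# instruction, and keep every mismatch as new training data. `policy` is the
# hypothetical cloned model from the previous sketch.
disagreements = []

def shadow_step(state, atco_action, policy) -> bool:
    """Return True if the model agrees with the ATCO; log disagreements."""
    predicted = policy.predict([state])[0]
    if predicted != atco_action:
        disagreements.append((state, atco_action, predicted))
    return predicted == atco_action

# The agreement rate over many shifts could then serve as a readiness metric
# before the system is ever allowed to act on its own.
```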