The curse of IROPs (A gentle overview, part 1)

Damir Valput

2020-01-14 09:29:13
Reading Time: 4 minutes

What are IROPs?

IROPs, short for IRregular OPerations, refer to extraordinary situations in which a flight does not operate as scheduled. There is no clear-cut definition, however, and the umbrella term generally covers delays, cancellations, diversions and similar events. According to some analyses, irregular operations can cost airlines as much as 8% of their revenue. Yet, up until this decade, relatively little research and development effort had been allocated to mitigating the effects of IROPs. One major obstacle to implementing efficient systems for solving IROPs is their complexity.

What makes IROP solving such a complex problem?

Assessing the costs of IROPs is a much harder problem than it might seem. Analysts usually attempt to quantify them via two categories: hard and soft costs. Hard costs are generally straightforward to calculate, albeit computationally expensive when one tries to adequately capture all the (potential) effects of a disruption, such as a flight cancellation's impact on the whole network.

On the other hand, soft costs, including customer behaviour and reactions, such as shifts in loyalty after experiencing a disruption, are very difficult to assess. What complicates this task further is the airlines' traditional way of measuring disruptions in terms of flight-centred metrics. To assess soft costs more adequately, a shift to passenger-centred metrics observed over the whole door-to-door travel chain is necessary. After all, passengers' trips don't end right as they disembark.
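To give a feel for the gap between the two viewpoints, here is a toy sketch in Python. All flight numbers, passenger counts and delays are invented for illustration: a modest arrival delay on a feeder flight looks harmless in a flight-centred metric, but once a missed connection is counted per passenger, the door-to-door picture changes considerably.

```python
# Toy example: flight-centred vs passenger-centred delay metrics.
# All flights, passenger counts and delays are invented for illustration.

flights = {
    "XY100": {"arrival_delay_min": 35, "passengers": 150},  # feeder flight, slightly late
    "XY200": {"arrival_delay_min": 0,  "passengers": 180},  # long-haul, on time
}

# 40 passengers on XY100 were connecting to XY200; the 35-minute delay makes
# them miss the connection, and the next rebooking option leaves 5 hours later.
missed_connection = {"passengers": 40, "extra_door_to_door_delay_min": 300}

# Flight-centred view: average arrival delay per flight.
avg_flight_delay = sum(f["arrival_delay_min"] for f in flights.values()) / len(flights)

# Passenger-centred view: average delay experienced per passenger,
# including the knock-on delay of the missed connection.
total_pax = sum(f["passengers"] for f in flights.values())
pax_delay_min = sum(f["arrival_delay_min"] * f["passengers"] for f in flights.values())
pax_delay_min += missed_connection["passengers"] * missed_connection["extra_door_to_door_delay_min"]
avg_pax_delay = pax_delay_min / total_pax

print(f"Average flight delay:    {avg_flight_delay:.1f} min")  # 17.5 min
print(f"Average passenger delay: {avg_pax_delay:.1f} min")     # ~52.3 min
```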

The complexity of the problem stems largely from the air traffic system's intricacies and inherent uncertainties. Traditionally, disruptions are solved in parts, with operations managers focusing separately on finding solutions for the aircraft, the crew and, lastly, the passengers. The effects of a recovery solution on the network as a whole are rarely taken into account. Understandably, humans cannot process such large quantities of data all at once, and this process results, in all but the simplest cases, in suboptimal solutions. Ideally, in order to deliver an optimal solution, we would need an omniscient agent who “knows the state of the whole network at all times” and can thus deliver an overall cost-minimising solution at every time step. Finding such a solution is a challenging task from both mathematical and technological points of view.
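The "solving in parts" issue can be made concrete with a minimal sketch. The recovery options and costs below are entirely hypothetical; the point is only that picking the cheapest option for each resource in sequence is not guaranteed to minimise the total cost once the interactions between resources and passengers are counted.

```python
from itertools import product

# Hypothetical recovery options and costs (all numbers invented for illustration).
aircraft_options = {
    "swap_tail": {"cost": 10},
    "delay_2h":  {"cost": 5},
}
crew_options = {
    "standby_crew": {"cost": 8},
    "extend_duty":  {"cost": 3},
}

def passenger_cost(aircraft, crew):
    """Passenger re-accommodation cost depends on the *combination* chosen."""
    # A 2-hour delay pushes many passengers past their connections -> expensive;
    # a tail swap keeps connections intact.
    table = {
        ("swap_tail", "standby_crew"): 4,
        ("swap_tail", "extend_duty"):  6,
        ("delay_2h",  "standby_crew"): 20,
        ("delay_2h",  "extend_duty"):  40,
    }
    return table[(aircraft, crew)]

def total_cost(aircraft, crew):
    return (aircraft_options[aircraft]["cost"]
            + crew_options[crew]["cost"]
            + passenger_cost(aircraft, crew))

# Sequential ("solve in parts"): cheapest aircraft option, then cheapest crew
# option, and only then look at what that means for passengers.
seq_aircraft = min(aircraft_options, key=lambda a: aircraft_options[a]["cost"])  # delay_2h
seq_crew = min(crew_options, key=lambda c: crew_options[c]["cost"])              # extend_duty
print("sequential:", seq_aircraft, seq_crew, total_cost(seq_aircraft, seq_crew))  # cost 48

# Joint: evaluate every combination and keep the overall cheapest one.
best = min(product(aircraft_options, crew_options), key=lambda ac: total_cost(*ac))
print("joint:     ", *best, total_cost(*best))  # swap_tail + extend_duty -> cost 19
```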

The trouble called the “knock-on effect”

To further illustrate the complexity of the problem, imagine introducing an artificial agent into the system that can suggest recovery actions for dealing with IROPs. Let us assume the agent is debating cancelling a flight (one of the actions available in its action set). If it bases its decision only on the current state of the network, it needs to analyse two worlds in its decision-making process: in one world, events unfold as if the flight were cancelled; in the other, as if it were not. In other words, the agent benefits from being able to predict the future outcomes of its choices at the moment of decision-making, as if asking: what effect will cancelling this flight have?

However, this is quite a simplification of the real-world problem, as in most cases it is unsound to assume that present conditions will not change in the future. In fact, by the time the agent arrives at a decision that yields an optimal solution, some conditions of the environment may already have changed (e.g., another flight may have been cancelled, a passenger may have changed their flight, etc.). The scenario therefore becomes much more complicated once we account for even some of the uncertainties in the network, as this leads to exponential growth of the state space and quickly renders the problem intractable.
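A back-of-the-envelope sketch of those two ideas together: comparing the expected cost of the two "worlds" (cancel vs. keep the flight) once a handful of independent uncertain events is layered on top, and noting that the number of scenarios to enumerate grows as 2^n. The events, probabilities and costs below are invented purely for illustration.

```python
from itertools import product

# Hypothetical uncertain events that may materialise while the agent deliberates.
# Each entry is (probability, extra cost if it happens and we KEEP the flight,
# extra cost if it happens and we CANCEL). All numbers are invented.
uncertain_events = [
    (0.30, 50, 10),   # e.g. a connecting flight also gets delayed
    (0.10, 80, 5),    # e.g. the crew runs out of duty time
    (0.20, 30, 40),   # e.g. a surge of rebooking requests
]

BASE_COST = {"keep": 20, "cancel": 60}  # immediate, deterministic cost of each action

def expected_cost(action):
    """Enumerate every combination of events (2^n scenarios), weighted by probability."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(uncertain_events)):
        prob, extra = 1.0, 0.0
        for happened, (p, cost_keep, cost_cancel) in zip(outcome, uncertain_events):
            prob *= p if happened else (1 - p)
            if happened:
                extra += cost_keep if action == "keep" else cost_cancel
        total += prob * (BASE_COST[action] + extra)
    return total

print("scenarios to enumerate:", 2 ** len(uncertain_events))  # 8 here, 2^n in general
print("expected cost, keep:  ", round(expected_cost("keep"), 1))    # 49.0
print("expected cost, cancel:", round(expected_cost("cancel"), 1))  # 71.5
```

With three binary uncertainties the enumeration is trivial; with a few dozen interacting ones across a whole network, the brute-force approach above is exactly what becomes intractable.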

The world that each decision creates can be naturally mapped to a cost (the cost of the choice “creating” that world), which is again composed of hard and soft costs. Cost modelling in itself is, as we already established, a difficult task. Adding to the difficulty is the fact that we are dealing with a dynamic system: the cost calculated at the moment a decision is made may change as soon as it has been computed. Nevertheless, a good IROP-mitigation tool should have a predictive module able to forecast knock-on effects fairly accurately. How far away are we from a tool that satisfyingly addresses these issues?
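As a minimal sketch of what mapping a decision to a cost could look like in code (the decomposition and the figures are hypothetical, not an actual airline cost model), one might split the cost of a candidate recovery action into direct hard costs, a forecast knock-on term and a soft-cost estimate, and re-forecast the knock-on term as the network state evolves:

```python
from dataclasses import dataclass

@dataclass
class RecoveryCost:
    """Hypothetical decomposition of the cost of one candidate recovery action."""
    hard_direct: float    # crew overtime, fuel, handling, compensation owed
    hard_knock_on: float  # forecast downstream cost on the rest of the network
    soft: float           # estimated loss of future revenue from unhappy passengers

    @property
    def total(self) -> float:
        return self.hard_direct + self.hard_knock_on + self.soft

# The knock-on term is a forecast and should be refreshed as the network state
# evolves: the same action can look very different a few minutes later.
cost_at_t0 = RecoveryCost(hard_direct=12_000, hard_knock_on=30_000, soft=9_000)
cost_at_t1 = RecoveryCost(hard_direct=12_000, hard_knock_on=55_000, soft=9_000)  # another flight slipped

print(cost_at_t0.total, "->", cost_at_t1.total)  # 51000 -> 76000
```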

(Why) Should we pay more attention to IROPs?

For many years, airlines boosted their revenues mostly via ancillary packages. Investing in better IROP-solving systems was not a priority, given the above-mentioned complexity of the problem and an estimated negligible return on investment; traditional, manual responses to disruptions were good enough. However, with businesses moving towards more data-driven environments and with immense technological advances, it may be time to rethink the approach to solving IROPs. In the last several years, a number of solutions have emerged, but none has proved clearly superior in paving the way towards the next generation of operations management. Are any airlines already developing a solution that may soon change the game and give them a huge competitive advantage on the market? Or does senior management need more convincing before investing in the development of such a system? If so, how might one approach this issue?

Many of those questions will only be answered in the coming years. But if you are curious now, join me in the sequel to this post, in which I delve further into the tangled world of IROPs!

Did you like it? Then read part 2 of this post:

The curse of IROPs (part 2)
