A sideways look at economics

“Prediction is very difficult, especially if it’s about the future”, Niels Bohr[1]

The future is hard to predict, and the sensible thing would simply be to live in the present. Even so, I often cannot help wishing for an omniscient crystal ball. If I go left, will I collide with someone and spill coffee down my front? Will property prices be low enough for a Lifetime ISA by the time I have saved a deposit? And is the weather going to be splendid during that week I've booked for my holiday?

Uncertainty is ever present in our lives and decisions – this cannot be helped. We can only seek to go with the flow, and either adapt as need be or plan to mitigate any consequences. But might we be starting to get a better grip on uncertainty? Developments in computing have enabled us to analyse data deeply and model different scenarios. Consequently, we can garner some insight into potential outcomes, and make informed decisions that account for the uncertainty inherently attached to the future.

Last year I completed a Master's in Economics for Business Intelligence and Systems, culminating this summer in a project for a mobile health app start-up. The aim was to investigate and construct machine learning (ML) algorithms to predict the risk of injury, in order to identify those app users in need of preventative plans.

The value of injury risk prediction is undeniable. More than a fifth of surveyed[2] British Olympians said that they retired early due to injury, while as long ago as 2016-17, injuries were estimated[3] to cost each Premier League team an average of £45 million per season. I'm sure many wish they had been able to foresee that they were especially vulnerable before they sustained an ankle sprain or tendon tear, sparing them hours of physiotherapy and possible surgery.

Admittedly, some injuries are unpredictable, but in many other cases the risk has been heightened by an individual's decisions, equipment, actions and inherent attributes. Injury aetiology is widely recognised as multi-faceted and highly complex; the variables tend to be interdependent, making it harder to understand and quantify the magnitude of any single variable's influence on injury risk, and the path through which it acts.
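
To make that concrete, here is a minimal sketch in Python of the kind of model such a project might explore. It is illustrative rather than my project's actual pipeline: the features (training load, acute:chronic workload ratio, previous injury, sleep) and the simulated data are assumptions, chosen only to show how a tree-based classifier can pick up interacting risk factors.

```python
# A minimal, illustrative sketch: a tree-based classifier on hypothetical,
# interdependent risk factors. Feature names and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical inputs: weekly training load, acute:chronic workload ratio,
# a previous-injury flag, and average sleep hours.
training_load = rng.normal(300, 60, n)   # arbitrary units
acwr = rng.normal(1.0, 0.25, n)          # acute:chronic workload ratio
previous_injury = rng.integers(0, 2, n)
sleep_hours = rng.normal(7.5, 1.0, n)

# Toy stand-in for interdependent aetiology: risk rises when high load
# coincides with a workload spike, a prior injury, or short sleep.
logit = (0.01 * (training_load - 300) * (acwr > 1.3)
         + 1.2 * previous_injury
         - 0.4 * (sleep_hours - 7.5)
         - 2.0)
injured = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([training_load, acwr, previous_injury, sleep_hours])
X_train, X_test, y_train, y_test = train_test_split(X, injured, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```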

This problem is not too dissimilar from forecasting the economy, which also has a wealth of metrics and numerous unseen connections and interactions, alongside some inherent ‘biomarkers’. Decisions based on the predictive models constructed from these characteristics have real-world effects, whether that is preventing an athlete’s career from ending prematurely, or providing a positive environment for investment and confidence.

Researchers have begun to investigate the quality of ML algorithms for prediction, and to compare them with incumbent statistical methods. The deeper pattern recognition and more dynamic nature they offer are attractive, allowing greater consideration of the interdependence of inputs and their indirect effects, whether on injury risk, GDP, inflation or something else.

A relatively early comparison[4] against one of the more popular forecasts, the IMF’s World Economic Outlook, found that ML algorithms offered improved forecast accuracy both one quarter and one year ahead, even for the period of the global financial crisis. Other papers report similar findings, with different algorithms performing better over different forecast horizons.
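
For readers curious what such a horse race looks like in practice, the sketch below mimics the evaluation design rather than the paper itself: a random forest and an ordinary-least-squares autoregressive baseline are fitted to the same lags of a simulated growth series and compared out of sample at one-quarter and four-quarter horizons. The data and parameter choices are invented for illustration.

```python
# A hedged sketch of an ML-vs-AR forecast comparison on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
T = 200  # quarters of simulated "GDP growth" with some persistence
growth = np.zeros(T)
for t in range(1, T):
    growth[t] = 0.6 * growth[t - 1] + rng.normal(0, 0.5)

def lagged_matrix(y, n_lags, horizon):
    """Rows of [y_{t-1}, ..., y_{t-n_lags}] paired with target y_{t+horizon-1}."""
    X, target = [], []
    for t in range(n_lags, len(y) - horizon + 1):
        X.append(y[t - n_lags:t][::-1])
        target.append(y[t + horizon - 1])
    return np.array(X), np.array(target)

for horizon in (1, 4):  # one quarter and one year ahead
    X, y = lagged_matrix(growth, n_lags=4, horizon=horizon)
    split = int(0.75 * len(y))
    # AR-style baseline: ordinary least squares on the same lags
    beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    rmse_ar = np.sqrt(np.mean((X[split:] @ beta - y[split:]) ** 2))
    # Random forest on an identical information set
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
    rmse_rf = np.sqrt(np.mean((rf.predict(X[split:]) - y[split:]) ** 2))
    print(f"h={horizon}: AR RMSE={rmse_ar:.3f}, RF RMSE={rmse_rf:.3f}")
```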

Yet, as ever, there is a trade-off. Greater accuracy from ML algorithms is accompanied by a loss of interpretability, commonly known as the black box problem.

Is the loss of interpretability important?

Yes and no. It depends on the intended audience and use of the ML predictions. Policymakers and investors tend to have differing needs when it comes to precision, understanding, and the value they place on different forecast horizons.

For policymakers, it is important to understand the drivers of forecasted change, in order to identify if and where preventative or supportive measures should be introduced to provide optimal outcomes. If models are forecasting a recession, policymakers will want to know why, to determine how best to mitigate damage and to return to growth targets. Thus, knowledge of the drivers is needed. With the direction of forecasted change probably consistent between informed statistical methods and ML algorithms, extra numerical precision does not necessarily justify losing the understanding of why.
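
One partial remedy, sketched below, is permutation importance: shuffle each input in turn and see how much out-of-sample accuracy deteriorates. The variables here (credit growth, unemployment, the oil price) and the data are invented for the example; the technique can rank drivers, but it does not supply the structural story a policymaker would ultimately want.

```python
# A minimal illustration of permutation importance on invented data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 500
credit_growth = rng.normal(0, 1, n)
unemployment = rng.normal(0, 1, n)
oil_price = rng.normal(0, 1, n)
# Toy "GDP growth" driven mainly by credit growth and unemployment
gdp_growth = 0.8 * credit_growth - 0.5 * unemployment + rng.normal(0, 0.3, n)

X = np.column_stack([credit_growth, unemployment, oil_price])
names = ["credit_growth", "unemployment", "oil_price"]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:400], gdp_growth[:400])
result = permutation_importance(model, X[400:], gdp_growth[400:], n_repeats=20, random_state=0)

# Larger scores indicate inputs whose shuffling hurts accuracy most
for name, score in sorted(zip(names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```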

By contrast, for some investors numerical precision may matter more than understanding the aetiology of a forecasted event: it allows them to assess better the profitability of futures contracts, or to capture more of the information needed to exploit market inefficiencies and potential arbitrage opportunities. Although it is beneficial to understand which industries are driving the forecast most, investors can still trade equities and bonds for profit without that knowledge.

The case for precision over understanding is even stronger in life insurance, where the accuracy of risk-of-death predictions matters more for pricing than understanding the underlying causes of mortality.

ML models have been trialled for prediction across a vast range of areas: stock markets, medical diagnosis, natural disasters, financial stress and more. These are still early days, yet they are already delivering improvements, such as the superior performance of the ECMWF’s Artificial Intelligence Forecasting System (AIFS) in predicting[5] Hurricane Milton’s landfall. We can only ask where ML models will be applied next, and how great the benefits might prove to be.

Methods and theories often build on their predecessors and on routes already explored in order to evolve towards new optima; so I put forward two questions along the lines of those most often asked about AI – will it replace what has gone before, or be used in tandem to enhance it?

If we come to rely on algorithms for pattern recognition and investigation, could there be a loss of modelling and critical-thinking skills, much as sat nav eroded map-reading? Or are ML algorithms simply another tool to complement incumbent statistical models, providing better predictions and a means of quality assurance?

In my opinion, ML algorithms greatly increase the accessibility and ease of constructing good predictive models. Yet I believe the real added value will probably continue to come from those who understand the subject-specific theory and can consequently tailor and improve the models further – maintaining the value of econometricians as they seek to combine decades of theory with extra computational power.

Forecasting – best done by humans or machines?

[1] https://blogs.cranfield.ac.uk/cbp/forecasting-prediction-is-very-difficult-especially-if-its-about-the-future/#:~:text=Niels%20Bohr%2C%20the%20Nobel%20laureate,model%20out%2Dof%2Dsample

[2] https://bjsm.bmj.com/content/bjsports/55/Suppl_1/A72.3.full.pdf

[3] https://pubmed.ncbi.nlm.nih.gov/32537241/

[4] https://www.imf.org/en/Publications/WP/Issues/2018/11/01/An-Algorithmic-Crystal-Ball-Forecasts-based-on-Machine-Learning-46288

[5] https://www.newsweek.com/hurricane-miltons-path-predicted-florida-unbelievable-accuracy-heres-why-1968429

 
