In a discussion we had at a conference a little over a year ago, a data scientist lamented that their predictive model had identified a clear shift in the loss trend for a particular risk category, a shift that ultimately cost the insurer a material amount of money.
Unfortunately for them, their business executives had trouble interpreting the analysis, and the derived output that ended up on decision makers' desks was mostly out of date by the time a serious discussion could occur. This is a clear example of failed machine intelligence (MI), which our recent SRI sigma on enterprise-scale MI explores in detail. The example illustrates several root causes of failure, including the lack of a robust, dynamic data-engineering architecture, poor data visualisation, and the absence of systematic education of business leaders and of processes for executive consumption of output.
Most insurers already use some kind of conventional MI, such as generalised linear models (GLMs). Many companies have also been experimenting with newer MI such as machine learning (ML) and artificial intelligence (AI). Yet despite these investments of time and effort, almost all MI-enabled system deployments are currently failing to profitably transform insurance companies.
However, the SRI sigma research noted above argues that insurers should not walk away from MI-related investment or abandon future MI development plans as a result of these lacklustre initial results. Rather, we concluded that the focus should shift to what the industry can do to realise the potential of MI-enabled transformation. Learning from the few successful MI-enabled system deployments within the insurance industry and the many successes in other sectors such as Big Tech, we found that, first, investment should shift from the current focus on models and algorithms to data engineering. In parallel, companies deploying MI-enabled systems should also spend time redesigning their organisation and processes to leverage MI, matching use cases to particular MI categories, and finding ways to retain better-prepared talent.
In almost all cases, addressing the under-investment and misdirected investment in data engineering is the first necessary step to materially improve the productivity of MI-enabled systems. Moreover, every link in the insurance value chain (product development, marketing, underwriting, pricing, client servicing, claims management, portfolio analysis, asset-liability matching, and capital and liquidity management) is increasingly data driven. The growing use of ML models, which have a greater parameter capacity than their linear counterparts and require ever greater data volumes to function, compounds this issue. We estimate that successfully implementing enterprise-scale MI systems has the potential to improve insurers' profit margins by between 200 and 400 basis points, leading to combined ratio improvements in the range of 6-9% in 2-3 years' time, according to Galytix Analysis. Data needs to be seen as a strategically important asset that is managed across the organisation. Regardless of where an insurer is today on its data engineering path, the time is rapidly coming when data backbones will be recognised as necessary to retain a basic level of competitiveness.
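For readers comparing the two yardsticks in that estimate, it may help to recall the standard definitional relationship between them; the formula below is basic insurance arithmetic and is not a description of the Galytix methodology.

```latex
\[
\text{combined ratio} \;=\; \frac{\text{incurred claims} + \text{expenses}}{\text{earned premium}},
\qquad
\text{underwriting margin} \;=\; 1 - \text{combined ratio}.
\]
```

A one-percentage-point reduction in the combined ratio is therefore roughly 100 basis points of underwriting margin on the affected book; reported profit margins typically move by less, because they also absorb investment results, tax, and business that the MI systems do not touch.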
As we highlight in our sigma, insurers are already investing heavily in data-related systems and projects. But due to poor or missing enterprise-wide data strategies and insufficient numbers of qualified data-system architects, data value chain management (i.e., identify, ingest, curate, credential, process, transform, analyse, visualise, use, and store for future processing) is still woefully inefficient. As a result, we have identified specific inefficiencies, and thus opportunities for improvement.
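To make the data value chain concrete, the sketch below strings a few of those stages (ingest, curate, transform, analyse, store) into one small pipeline. The file paths, column names, validation rules, and alert threshold are hypothetical stand-ins for whatever an insurer's actual systems provide, not a reference to any particular implementation.

```python
# Hypothetical sketch of a minimal data value chain:
# ingest -> curate -> transform -> analyse -> store.
# Paths, columns, and rules are illustrative assumptions only.
from pathlib import Path
import pandas as pd

RAW = Path("raw/claims_feed.csv")         # hypothetical upstream extract
CURATED = Path("curated/claims.parquet")  # persisted for future processing

def ingest(path: Path) -> pd.DataFrame:
    """Read the raw feed exactly as delivered by the source system."""
    return pd.read_csv(path, parse_dates=["loss_date"])

def curate(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic curation/credentialing rules: de-duplicate, enforce
    simple sanity checks, normalise categorical codes."""
    df = df.drop_duplicates(subset="claim_id")
    df = df[df["paid_amount"] >= 0]  # negative payments handled upstream
    df["line_of_business"] = df["line_of_business"].str.upper()
    return df

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Derive an analysis-ready view: monthly paid losses by line of business."""
    return (
        df.assign(loss_month=df["loss_date"].dt.to_period("M"))
          .groupby(["line_of_business", "loss_month"])["paid_amount"]
          .sum()
          .reset_index(name="paid_losses")
    )

def analyse(monthly: pd.DataFrame) -> pd.DataFrame:
    """Flag lines whose latest month deviates sharply from their trailing average."""
    monthly["trailing_avg"] = (
        monthly.groupby("line_of_business")["paid_losses"]
               .transform(lambda s: s.shift(1).rolling(6, min_periods=3).mean())
    )
    monthly["alert"] = monthly["paid_losses"] > 1.5 * monthly["trailing_avg"]
    return monthly

if __name__ == "__main__":
    curated = curate(ingest(RAW))
    result = analyse(transform(curated))
    curated.to_parquet(CURATED)         # stored for future processing
    print(result[result["alert"]])      # candidate loss-trend shifts for human review
```

The point of the sketch is not the individual functions but the fact that each stage is an explicit, repeatable step; inefficiency tends to creep in wherever one of these hand-offs is manual or undocumented.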
Peering into the future, we see competitively successful insurance companies shifting their data management paradigm from one of disconnected and stagnating data lakes to networked data rivers. Such automated flows can dynamically update data-communication tools that enable understanding and action far better than static PowerPoint slides.
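As a purely illustrative contrast with static slide decks, the fragment below shows one way such an automated flow might keep a decision-maker-facing view current: a small job watches for new raw extracts, re-runs the pipeline sketched above, and republishes the dataset behind a live dashboard. The `claims_pipeline` module, file locations, and polling interval are assumptions for the sake of the example.

```python
# Hypothetical "data river" refresh loop: when a new raw extract lands,
# re-run the pipeline and republish the dataset behind a live dashboard.
# The claims_pipeline module, paths, and interval are assumptions.
import time
from pathlib import Path

from claims_pipeline import ingest, curate, transform, analyse  # hypothetical module (see sketch above)

RAW = Path("raw/claims_feed.csv")
PUBLISHED = Path("dashboard/monthly_losses.parquet")

def publish_if_new(last_seen: float) -> float:
    """Republish the dashboard dataset whenever the raw feed has been updated."""
    modified = RAW.stat().st_mtime
    if modified > last_seen:
        PUBLISHED.parent.mkdir(parents=True, exist_ok=True)
        analyse(transform(curate(ingest(RAW)))).to_parquet(PUBLISHED)
    return max(modified, last_seen)

if __name__ == "__main__":
    last_seen = 0.0
    while True:  # in production an orchestrator or scheduler would own this loop
        last_seen = publish_if_new(last_seen)
        time.sleep(15 * 60)  # poll every 15 minutes
```

However modest, a loop like this means the numbers in front of executives are at most minutes old rather than frozen at the date a slide deck was exported.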
In the past few years, myriad powerful tools, systems, open-source algorithms, and vendors have appeared to address many (if not all) of these challenges. Even so, any organisation that haphazardly attempts to implement a mix of new capabilities runs the risk of becoming lost in the thicket of choices, leading to Frankenstein systems that only breed more inefficiencies. What can well-informed institutions do?
Following all these recommendations is a tall order and impractical for many institutions in the near term. However, some of them can be quickly realised. For example, almost all insurers already invest heavily in "conventional" MI such as generalised linear models. This conventional MI informs risk selection, risk pricing, capital allocation, and risk management, albeit at too high a cost due to poor data-system architectures. Thus, well-placed data engineering investment will almost immediately improve returns on the conventional MI investment that many firms have already made. Even if comprehensive re-engineering of data architectures is not possible now, targeted efforts to improve data ingestion and curation using cost-effective, end-to-end systems (a few particularly good ones are now available) and investment in well-designed data visualisation are two practical recommendations almost all institutions can follow today. Comprehensive data strategy development and data engineering efforts will likely require more time and budget.
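For readers less familiar with what this "conventional" MI looks like in practice, the sketch below fits a simple Poisson GLM for claims frequency, the kind of model that feeds risk selection and pricing. The synthetic data, column names, and choice of the statsmodels library are illustrative assumptions rather than a depiction of any insurer's actual workflow.

```python
# Minimal sketch of "conventional MI": a Poisson GLM for claims frequency.
# Data, column names, and factor levels are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 10_000
policies = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "vehicle_group": rng.choice(["A", "B", "C"], n),
    "exposure": rng.uniform(0.25, 1.0, n),  # policy-years in force
})

# Synthetic claim counts whose true rate depends on age and vehicle group.
base_rate = 0.08 + 0.002 * (50 - policies["driver_age"]).clip(lower=0)
group_mult = policies["vehicle_group"].map({"A": 1.0, "B": 1.3, "C": 1.7})
policies["claim_count"] = rng.poisson(base_rate * group_mult * policies["exposure"])

# Frequency model: Poisson GLM with log link and exposure as an offset.
model = smf.glm(
    "claim_count ~ driver_age + C(vehicle_group)",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()
print(model.summary())  # fitted relativities feed risk selection and pricing
```

The model itself is cheap to fit; what is expensive today is assembling, cleaning, and refreshing the data that goes into it, which is precisely where the data engineering investment pays off.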
As previously noted, and perhaps counter-intuitively, the path to MI success in the risk-transfer industry does not centre on the algorithms and models that ultimately generate value-adding insights and predictions. Rather, today's challenge is efficiently fuelling that growing analytical capability and enabling business leaders to digest its expanding output. This introductory blog will be followed by a three-part series offering our more detailed views on tackling those engineering and consumption challenges.