Artificial intelligence (AI) drives a growing share of decisions that affect every aspect of our lives, from where to take a vacation to healthcare recommendations that could affect our life expectancy. As AI’s influence grows, market research firm IDC expects spending on it to reach $98 billion in 2023, up from $38 billion in 2019. But in most applications, AI performs its magic with very little explanation of how it reaches its recommendations. It’s like a student who displays an answer to a school math problem, but when asked to show their work, simply shrugs.
This “black box” approach is one thing on fifth-grade math homework but quite another in the high-impact world of commercial insurance claims, where adjusters make weighty decisions affecting millions of dollars in claims each year. The stakes involved make it critical for adjusters and the carriers they work for to see the AI’s reasoning both before big decisions are made and afterward, so they can effectively audit its performance and optimize business operations.
Concerns over increasingly complex AI models have fired up interest in “explainable AI” (sometimes referred to as XAI), a growing field that asks AI to show its work. Definitions of explainable AI vary, and it’s a rapidly growing niche, as well as a frequent subject of conversation with our clients. (Read: AI's Got Some Explaining to Do.)
At a basic level, explainable AI describes how the algorithm arrived at its recommendation, often as a list of the factors it considered and percentages describing the degree to which each factor contributed to the decision. The user can then evaluate the inputs that drive the output and decide how much to trust it.
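To make that concrete, here is a minimal sketch of how factor contributions might be computed and expressed as percentages, assuming a simple additive scoring model; the factor names, weights, and values are hypothetical:

```python
# Minimal sketch of factor-level attribution for an additive scoring model.
# Factor names, weights, and values below are hypothetical.

def explain_score(weights: dict[str, float], features: dict[str, float]) -> dict[str, float]:
    """Return each factor's share of the total score as a percentage."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    total = sum(abs(c) for c in contributions.values())
    return {name: round(100 * abs(c) / total, 1) for name, c in contributions.items()}

weights = {"injury_severity": 2.0, "claimant_age": 0.5, "comorbidity_count": 1.5}
features = {"injury_severity": 3.0, "claimant_age": 1.2, "comorbidity_count": 2.0}

print(explain_score(weights, features))
# roughly: injury_severity ~62%, claimant_age ~6%, comorbidity_count ~31%
```

More sophisticated attribution methods, such as Shapley-value-based approaches, follow the same idea: decompose a single prediction into per-factor contributions a human can inspect.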
Transparency and Accountability
This "show your work" approach has three basic benefits. For starters, it creates accountability for those managing the model. Transparency encourages the model’s creators to consider how users will react to its recommendation, think more deeply about them, and prepare for eventual feedback. The result is often a better model.
Greater Follow-Through
The second benefit is that the AI recommendation is acted on more often. Explained results tend to give the user confidence to follow through on the model’s recommendation. Greater follow-through drives higher impact, which can lead to increased investment in new models.
Encourages Human Input
The third positive outcome is that explainable AI welcomes human engagement. Operators who understand the factors leading to the recommendation can contribute their own expertise to the final decision — for example, upweighting a factor that their own experience indicates is critical in the particular case.
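As a rough illustration of that kind of override, here is a sketch assuming scores are built from additive factor contributions; the function, factor names, and multiplier are hypothetical:

```python
# Hypothetical sketch: an adjuster upweights a factor their experience
# says is critical, and the score is recomputed with that override.

def score(contributions: dict[str, float], overrides: dict[str, float] | None = None) -> float:
    """Sum factor contributions, scaling any factor the user has overridden."""
    overrides = overrides or {}
    return sum(value * overrides.get(name, 1.0) for name, value in contributions.items())

contributions = {"injury_severity": 6.0, "claimant_age": 0.6, "comorbidity": 3.0}

baseline = score(contributions)                        # 9.6
adjusted = score(contributions, {"comorbidity": 1.5})  # 11.1: comorbidity boosted 1.5x
```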
How Explainable AI Works in Workers Comp Claims
Now let’s look at how explainable AI can dramatically change the game in workers compensation claims.
Workers comp injuries and the resulting medical, legal, and administrative expenses cost insurers over $70 billion each year and employers well over $100 billion, and they affect the lives of millions of workers who file claims. Yet a dedicated crew of fewer than 40,000 adjusters across the industry handles upwards of 3 million workers comp claims in the U.S., often armed with surprisingly basic workflow software.
Enter AI, which can take the growing sea of data in workers comp claims and generate increasingly accurate predictions about things such as the likely cost of the claim, the effectiveness of providers treating the injury, and the likelihood of litigation. (Read: INFOGRAPHIC: 6 InsureTech Trends to Know.)
Critical to the application of AI to any claim is that the adjuster managing the claim see it, believe it, and act on it — and do so early enough in the claim to have an impact on its trajectory.
Adjusters can now monitor claim dashboards that show them the projected cost and medical severity of a claim, and the weighted factors that drive those predictions, based on:
- the attributes of the claimant,
- the injury, and
- the path of similar claims in the past.
Adjusters can also see the likelihood that the claimant will engage an attorney, an event that can increase the cost of the claim by 4x or more in catastrophic cases.
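What might the data behind such a dashboard look like? Here is one hypothetical shape, a sketch rather than any vendor's actual schema:

```python
# Hypothetical shape of a claim-dashboard row: the predictions plus the
# weighted factors driving them. Field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class FactorWeight:
    name: str     # e.g. "injury type" or "claimant age"
    share: float  # percentage contribution to the prediction

@dataclass
class ClaimPrediction:
    claim_id: str
    projected_cost: float   # predicted total cost of the claim, in dollars
    medical_severity: str   # e.g. "low" / "moderate" / "high"
    attorney_risk: float    # predicted probability the claimant engages an attorney
    drivers: list[FactorWeight] = field(default_factory=list)

row = ClaimPrediction(
    claim_id="WC-0412",  # hypothetical identifier
    projected_cost=48_500.0,
    medical_severity="moderate",
    attorney_risk=0.12,
    drivers=[
        FactorWeight("claimant attributes", 28.0),
        FactorWeight("injury type", 41.0),
        FactorWeight("path of similar past claims", 31.0),
    ],
)
```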
Let’s see how this drives better decisions with an example. Suppose a claimant has injured their knee but also suffers from rheumatoid arthritis, a condition that merits a specific regimen of medication and physical therapy.
If adjusters saw an overall cost estimate that took this into account but didn’t call it out specifically, they might think the score too high and simply discount it, or spend time generating their own estimates.
But by looking at the score components, they can see this complicating factor clearly, know to focus more time on the case, and potentially engage a trained nurse to advise them. They can also use AI to help locate a healthcare provider with expertise in rheumatoid arthritis, so the claimant can get more targeted treatment for their condition.
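One way such a system might surface the complicating factor and suggest follow-ups, sketched here with an assumed threshold, factor names, and action wording:

```python
# Sketch of turning an explained score into suggested follow-ups. The
# 25% threshold, factor names, and action wording are assumptions.

NURSE_REVIEW_THRESHOLD = 25.0  # flag any factor contributing more than 25%

def recommended_actions(score_components: dict[str, float]) -> list[str]:
    """score_components maps factor name -> % contribution to projected cost."""
    actions = []
    for name, share in score_components.items():
        if name.startswith("comorbidity") and share > NURSE_REVIEW_THRESHOLD:
            actions.append(f"Engage a nurse case manager ({name}, {share:.0f}% of score)")
            actions.append(f"Search the provider network for expertise in {name.split(': ')[-1]}")
    return actions

components = {
    "injury type: knee": 40.0,
    "comorbidity: rheumatoid arthritis": 35.0,
    "claimant age": 25.0,
}
for action in recommended_actions(components):
    print(action)
```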
The result is likely to be:
- more effective care,
- a faster recovery time, and
- cost savings for the insurer, the claimant, and the employer.
Explainable AI can also show what might be missing from a prediction. One score may indicate that the risk of attorney involvement is low. Based on the listed factors, including location, age, and injury type, this could be a reasonable conclusion.
But the adjuster might notice something missing. They might have picked up on a concern from the claimant about being let go at work. Knowing that fear of termination can lead to attorney engagement, the adjuster can invest more time with this particular claimant, allay some of that concern, and thus lower the risk they’ll engage an attorney.
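Capturing that human judgment is what closes the loop. A minimal sketch of a feedback record, with assumed field names, might look like this:

```python
# Hypothetical feedback record: the adjuster flags a signal the model
# missed so it can inform this claim and future training data.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScoreFeedback:
    claim_id: str
    score_name: str      # which prediction the feedback concerns
    model_value: float   # what the model predicted
    adjuster_note: str   # the factor the adjuster believes is missing
    logged_at: datetime

feedback = ScoreFeedback(
    claim_id="WC-0412",  # hypothetical identifier
    score_name="attorney_engagement_risk",
    model_value=0.12,
    adjuster_note="Claimant fears termination; attorney risk likely higher than scored.",
    logged_at=datetime.now(timezone.utc),
)
```

Records like this can feed the improvement cycle described in the next section.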
Driving Outcomes Across the Company
Beyond enhancing outcomes on a specific case, these examples show how explainable AI can help the organization optimize outcomes across all claims. (Read: How AI and IoT are Affecting the Insurance Industry.) Risk managers, for example, can evaluate how the team generally follows up on cases where risk of attorney engagement is high and put in place new practices and training to address the risk more effectively in the future. Care network managers can ensure they bring in new providers that help address emerging trends in care.
By monitoring follow-up actions and enabling adjusters to provide feedback on specific scores and recommendations, carriers accelerate a cycle of improvement: better models generate more feedback, which drives still more fine-tuning, creating an ongoing conversation between AI and adjusters that ultimately transforms workers compensation.
Workers comp, though, is just one area poised to benefit from explainable AI. Models that show their work are being adopted across the finance, healthcare, and technology sectors, and beyond.
Explainable AI can be the next step that increases user confidence, accelerates adoption, and helps turn the vision of AI into real breakthroughs for businesses, consumers, and society.