I recently had the opportunity to hear Sam Altman, CEO of OpenAI, talk about the future of artificial intelligence. The one-hour discussion in Tel Aviv, which was so sought after that tickets were gone in under three minutes, focused significantly on regulation - unsurprising, considering he’s on a world tour to shape AI’s regulatory environment while the technology is in its infancy.
As a chief technology officer of Faye, a young, modern insurtech for travel, this got me thinking about regulation in the insurance space and what we can learn from how OpenAI is thinking about the risks and rewards of AI in a moment of explosive growth and adoption, as well as what AI regulation might mean for insurance.
The genie’s out of the bottle
There's no going back with AI. You can't make a technology this powerful available to almost every consumer on earth and expect them to willingly surrender it. In some ways it would be like asking people to agree to hand back their smartphones or disconnect their internet.
Thinking of AI and the internet in the same vein is a helpful way of understanding a technology that has the power to transform lives and improve economies but that simultaneously affects jobs and makes the work of criminals easier. You simply have to accept that things are changing and mitigate the risks.
Remember, when ChatGPT was blocked in Italy, the number of consumers purchasing VPNs went through the roof. When we couldn't travel for two years, we all crammed onto planes the second restrictions were lifted. And despite the threat of a TikTok ban across America, the app remains one of the most downloaded, with its user base still growing.
We don't like giving back things we've already experienced and that we know enrich our lives. The same is true of AI. During his talk, Altman shared dozens of examples of people using ChatGPT - in education, in health, in human connections between parents and kids - people whose lives have improved thanks to access to this technology, so much so that losing it would actually lower their quality of life.
Insurance companies are already experimenting with AI, with Zurich recently announcing that it’s testing ChatGPT on claims data. Not to mention, insurance companies have a long history of trying to make chatbots work. The first company to really crack this is going to win over consumers and kick off an arms race as everyone else scrambles to catch up.
It’s only up from here
During his talk, Altman shared that he thinks AI won't necessarily improve in leaps and bounds from this point onward, but along an ever-steepening curve, with the technology getting better year after year in a way that will eventually make today's ever-popular ChatGPT feel like an iPod next to an iPhone.
Any company not at least experimenting with this technology is going to be left in the dust as smaller competitors find themselves able to do more and more with less and less.
Even Altman, when talking about OpenAI’s current technology, speaks about it in the past tense, as if today’s version were already as dated as a typewriter.
These technological innovations are going to come to legacy-dominated spaces like travel insurance one way or another, bringing improvements in customer experience and changes to how insurance is offered and sold, how policies are underwritten and how claims are handled.
For example, imagine a world where most insurance claims are processed in hours instead of days (well, weeks), because a computer system can go out, collect all the documentation for you and prepare the paperwork perfectly, ready for a human to make a quick decision on whether to approve the claim.
Or where the perfect policy, tailored just for you, is prepared in seconds because you can tell a computer in your own words what coverage you need and have it translated into “insurance paperwork speak” for you.
To regulate or not to regulate
Well, even Altman admits that AI is on a path that will eventually lead to superintelligence - something with the power to transform humanity for the better or to pose an existential threat. There's no shortage of recent commentary and open letters signed by well-regarded scientists and technologists confirming how plausible that risk is.
This is an excellent argument for regulation. We regulate many things that have the capacity to harm people.
For insurance, regulation is typically focused on protecting consumers in three ways: that they won't be sold a policy from a company that doesn't have the means to pay out a claim; that they won't be ripped off or otherwise tricked when making a purchase; and that they won't be unfairly discriminated against when trying to buy insurance.
That last point can be particularly challenging for both AI and insurance, because right now it’s unclear what data AI models were trained on and whether hidden biases lurk within. Those biases could mean some groups of people have a different experience, or get a different result, when using AI than others. In insurance, this can lead to poor customer relationships, potential lawsuits and a loss of brand credibility.
Additionally, consumers often need to share sensitive information with insurance companies, like health or financial details, and they have a right to trust that the information isn’t about to be inadvertently packaged up and sent to Microsoft and Google, where it will be shared and monetized without their knowledge or consent.
To be clear, Altman mentioned that he is in favor of regulation, and the current proposal OpenAI is championing is to create an IAEA (International Atomic Energy Agency) of sorts for AI, so countries keep each other honest in how AI is developed and tested moving forward. We’re nowhere near the point where the best AI can be built in a basement; for now, it takes a mega-corporation’s or a government’s level of computing power to build the best models.
Regulation in both insurance and AI is a good idea to safeguard both consumers and businesses. That said, it's an incredibly slow process in an age of rapid change. Case in point: Many insurance companies are working with years-old rate files as they strive to get newer, more accurate rates approved.
In fact, the news over the past couple of weeks has been full of companies refusing to sell home insurance in California because they can't get rates approved that correctly account for risks like wildfires.
And travel insurance providers can't sell travel insurance for a post-COVID world using pre-COVID data, and though new ideas and products are being dreamed up every day, it could be years before some of them get the green light.
Now imagine a world where a machine can update rates for current risks - accurately, fairly and reasonably - in around two seconds, and then imagine the insurance company that spends so long getting those rates approved that they're already out of date by the time the green light comes.
Change is here to elevate substandard industries and products. It can be scary, but if we can't adapt, consumers will be stuck with those substandard products across industries.
Altman admitted that most people are rightfully concerned about AI causing economic displacement, but that this fear can’t come at the expense of embracing economic opportunity. He’s hyper-focused on what he calls “taste” - figuring out what people want most and focusing the next version of his AI on that, not just on speed or size. If AI can give us what we want, then it’s mostly going to be to our benefit.
As we look ahead, we must admit that large language models like ChatGPT and other forms of AI are going to change the world. Consumers should want the benefits (of which there are many), and insurance companies should want to provide them rather than resisting them.
Working with the regulators is going to be a challenge, but not an insurmountable one, so long as we start from the assumption that we want things to improve for both businesses and consumers alike, and that this isn’t going to happen unless we make peace with the fact that AI is here to stay, and it’s time for us to catch up.
About the author ...
Daniel Green is co-founder and CTO of Faye.