Big Data is a hot topic in insurance these days. The ability to use Big Data could even be a fundamental anchor for future profitability. But early forays suggest there is a real risk of getting lost in the process.
Executives need to find a way to put the horse back in front of the cart.
Where are we now?
It’s hard to gauge the penetration and impact of Big Data and analytics in the P&C insurance industry; survey responses tend to be ambiguous. Insurance 2020 & Beyond is a recent PwC report based on a worldwide survey of insurance practitioners and consumers. On the topic of Big Data and analytics, the authors noted:
Both traditional and big data availability is exploding, with the resulting insights providing a valuable aid to greater customer-centricity and associated revenue growth. Yet many insurers are still finding it difficult to turn this data into actionable insights.
But that doesn’t mean there aren’t Big Data projects underway …
To get a leg up on competitors, organizations undertake pilot initiatives and assign some resources. Big Data is a key component, and data sources are relatively easy to find. The problem is what to do with the data.
A recent Fortune article reviewed Big Data usage, commenting that “the process of analyzing data doesn’t come cheap, and companies of all sizes need to invest in the software that does the grunt work of crunching numbers. That’s why the market for big data software is expected to grow by 50% by 2019, according to new research issued by [technology adviser] Ovum.”
Organizations that are not prepared for this investment find themselves in a conundrum: if we can’t buy the software to get the results we want, what can we do?
That’s where things turn ugly…
Novarica’s Jeff Goldberg recently posted summary results of Novarica’s Analytics and Big Data at Insurers report on LinkedIn. Of the organizations that indicated they are using Big Data sources, Goldberg reports that seventy percent are “using traditional computing, storage, database, and analytics technology, meaning they’re working with SQL databases in their existing environment.”
Regarding specialized Big Data technologies – such as Hadoop and NoSQL – Goldberg says “only a small percentage of insurers use them and almost no insurer uses them extensively.”
What does that mean? According to Goldberg, “Most likely, a few key elements are pulled from the data and the rest is ignored, allowing the so-called big data to be treated like ‘small data,’ processed along with all the policy, claims, and customer data already being stored.”
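To make the pattern Goldberg describes concrete, here is a minimal sketch of what “treating big data like small data” looks like in practice. The record structure and field names below are hypothetical, purely for illustration: a rich, semi-structured telematics-style record arrives, a handful of key elements are pulled out, and only those land in a conventional SQL table; everything else is discarded.

```python
import json
import sqlite3

# A hypothetical telematics record carrying far more detail than is kept.
raw_record = json.dumps({
    "policy_id": "P-1001",
    "trip": {
        "distance_km": 42.3,
        "hard_brakes": 2,
        "gps_trace": [[43.65, -79.38], [43.66, -79.39]],  # ignored below
        "accelerometer": [0.1, 0.3, 0.2],                 # ignored below
    },
})

def to_small_data(record_json):
    """Pull a few key elements; the rest of the record is thrown away."""
    rec = json.loads(record_json)
    return (rec["policy_id"],
            rec["trip"]["distance_km"],
            rec["trip"]["hard_brakes"])

# Store the extracted elements alongside existing policy/claims data
# in an ordinary relational table (in-memory SQLite for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips "
             "(policy_id TEXT, distance_km REAL, hard_brakes INTEGER)")
conn.execute("INSERT INTO trips VALUES (?, ?, ?)", to_small_data(raw_record))
row = conn.execute("SELECT * FROM trips").fetchone()
print(row)  # ('P-1001', 42.3, 2)
```

The GPS trace and accelerometer stream – the parts that made the data “big” – never make it into the database, which is exactly the lost opportunity the next analogy points at.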
This is like buying a Lamborghini and engineering it so it never goes beyond second gear.
There is a patch, but it comes with conditions
Goldberg notes that third-party vendors should be prepared to step into the breach. He suggests these vendors would do well to offer a full turnkey solution – “big data in a box”, as Goldberg calls it. He also suggests that core admin systems vendors could adopt similar strategies.
I agree, to a point. And that point is the fine line between information and knowledge. I would be happy to outsource most of the management of data and maintenance of tools, but I would want to keep everything relating to the queries I need to run, the benchmarks that I would set, the conclusions/recommendations that emerge, and the linkages to operational systems.
In other words, I had better be able to understand what the data contain, the information and insights the data provide, the caveats that apply to the data, and the automated processes that could be triggered.
The executive mandate
This is critical stuff, and executives need to have the best information possible to create and execute clear strategy. At the 2015 Insurance-Canada.ca Executive Forum, on August 31, 2015, we will have several sessions that will help, including one specifically on Distinctive Analytics Supporting Data-First Enterprises, with Cindy Maike from Hortonworks.