By Philippe Torres
Montreal, QC (June 11, 2014) – With increasingly vast amounts of data coming online from sources more diverse and granular than ever, insurers are naturally keeping tabs on big data. The era of steadily improving data management capabilities, bringing legacy systems up to spec, and bridging data silos seems almost quaint in comparison to what is now on the horizon.
But as much as the dawning of the era of big data brings with it technological challenges of an altogether different magnitude, it is worth keeping an eye on the ball. Extracting knowledge from these new data sources and taking advantage of larger-than-life datasets represents an industry-wide inflection point, and one to be embraced.
For example, Usage-Based Insurance (UBI) – with its classic micro-segmentation of markets – stands to be among the biggest beneficiaries of larger and more granular data volumes. In a similar vein, big data presents big potential for improvement in loss-prevention programs. From the analysis of social media streams to video feeds from commercial site locations and “hard” data from devices connected to the Internet of Things, big data stands to greatly assist in loss prevention.
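To illustrate how this kind of granular usage data could feed micro-segmented pricing, here is a minimal Python sketch of a premium adjustment driven by telematics summary features. The feature names, weights, and caps are hypothetical assumptions for illustration, not any carrier’s actual rating plan.

    # Hypothetical UBI sketch: scale a base premium by usage-based risk factors.
    # Feature names and weights are illustrative assumptions, not a real rating plan.
    def ubi_premium(base_premium, annual_km, night_km_pct, hard_brakes_per_100km):
        mileage_factor = min(annual_km / 15000.0, 2.0)        # exposure, capped at 2x
        night_factor = 1.0 + 0.5 * night_km_pct              # surcharge for night driving
        braking_factor = 1.0 + 0.05 * hard_brakes_per_100km  # surcharge for harsh braking
        return base_premium * mileage_factor * night_factor * braking_factor

    # Example: a low-mileage, mostly daytime driver pays well under the base premium.
    print(ubi_premium(1000.0, annual_km=8000, night_km_pct=0.10, hard_brakes_per_100km=1.2))

In practice a rating plan would be actuarially derived and regulator-approved; the point is simply that per-driver usage variables slot naturally into the premium calculation.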
In the realm of “industrial” big data, there will be opportunities to develop new commercial insurance products by leveraging high-speed communication infrastructures and aggregating the data streams produced by industrial machinery. Developments such as these are worth monitoring, as they could help insurers stay ahead of the equipment failures that generate claims.
Big data can also provide additional “lift” to fraud detection models. Social media feeds may enhance the situational and time contexts of a claim event, revealing patterns and pointing to potentially willful misrepresentations. Police forces are already finding this kind of data helpful in their own investigations. Anecdotal evidence suggests social media can nudge claims investigations in the right direction, too.
Customer service portals can also take advantage of big data sources. For example, a personal property insurance carrier could incorporate publicly available data on public services by geographical area to help clients select a neighborhood for their next home. Combined with crime rates and the carrier’s own claims data, this offers clients a perspective that creates a positive impression of the carrier, driving retention and attracting new clients in a very natural manner.
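As a minimal sketch of how such a neighborhood view might be assembled, the following joins three hypothetical per-area datasets; the area codes, column names, and figures are invented for illustration.

    import pandas as pd

    # Hypothetical inputs, each keyed by postal area code.
    services = pd.DataFrame({"area": ["H2X", "H3B"], "schools": [4, 2], "fire_stations": [1, 2]})
    crime = pd.DataFrame({"area": ["H2X", "H3B"], "crimes_per_1000": [32.1, 18.4]})
    claims = pd.DataFrame({"area": ["H2X", "H3B"], "avg_claim_cost": [4200.0, 2900.0]})

    # Join the three views into one per-area profile for the client portal.
    profile = services.merge(crime, on="area").merge(claims, on="area")
    print(profile.sort_values("crimes_per_1000"))

The same join pattern extends naturally to additional public datasets as they come online.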
But big data brings big questions. Which of these seemingly endless streams of new data are pertinent and worthy of analysis? How long does an insurer need to retain tweets, posts, and audio and video streams? Given that perceptions can change dramatically in a very short time, the long-term value of big data really needs to be examined.
Consider telematics devices that interface directly with onboard vehicle control systems. It is not uncommon to find hundreds of variables in these “raw” data sources, captured at a very fine level of granularity. Determining what is pertinent requires careful aggregation that does not obscure the most important data relationships. Suffice it to say that assigning enterprise value to every big data element is a nearly impossible task.
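To make the aggregation point concrete, here is a small sketch that rolls hypothetical second-by-second speed readings up to per-trip features, keeping percentiles alongside the mean so that brief high-speed episodes – often the risk-relevant signal – are not averaged away. The signal names and trip structure are assumptions.

    import pandas as pd

    # Hypothetical second-by-second telematics readings, tagged by trip.
    raw = pd.DataFrame({
        "trip_id": [1, 1, 1, 2, 2, 2],
        "speed_kmh": [42.0, 88.0, 61.0, 30.0, 35.0, 33.0],
    })

    # Aggregate per trip; retaining the 95th percentile and max preserves
    # relationships that a mean alone would obscure.
    features = raw.groupby("trip_id")["speed_kmh"].agg(
        mean_speed="mean",
        p95_speed=lambda s: s.quantile(0.95),
        max_speed="max",
    )
    print(features)

The design choice is the crux: which summaries to keep, and at what granularity, determines whether downstream models ever see the relationships that matter.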
As is well known, predictive models can suffer from degradation if the underlying data sources change too rapidly. It remains to be seen what “lift” models built on rapidly aging data can provide. Big data sources have a greater dynamic range and volatility than legacy insurance data sources, and thus present greater challenges for modeling accuracy. Strong predictions become questionable.
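One way to watch for this kind of degradation is to freeze a model and score it on successive time windows; here is a hedged sketch using scikit-learn on synthetic data, with AUC standing in for a lift measure. The drift mechanism, window sizes, and coefficients are assumptions made purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def make_window(n, shift):
        """Synthetic window whose feature/target relationship drifts with `shift`."""
        X = rng.normal(size=(n, 3))
        logits = X @ np.array([1.0 - shift, 0.5, 0.2 + shift])
        y = (logits + rng.normal(size=n) > 0).astype(int)
        return X, y

    # Train once on the earliest window, then score the frozen model on later ones.
    X0, y0 = make_window(5000, shift=0.0)
    model = LogisticRegression().fit(X0, y0)

    for t in range(4):
        Xt, yt = make_window(5000, shift=0.3 * t)
        print(f"window {t}: AUC = {roc_auc_score(yt, model.predict_proba(Xt)[:, 1]):.3f}")

The declining scores on later windows mirror the concern above: as the data ages relative to the model, predictive lift erodes.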
Privacy issues, already at the fore, are further compounded by big data. Previously, most data was created internally as a byproduct of business processes, and ensuring privacy and confidentiality was under the direct control of the carrier. With big data, data flows into the carrier from outside sources, and guaranteeing privacy and security under these circumstances is a whole new kettle of fish. Irrespective of data volume, velocity, or source, carriers seek innovation alongside assurance with regard to security and privacy.
It is clear that the onset of a data deluge in the form of new, nimble, but massive big data sources cannot be ignored by the insurance industry. It is also understood that processes and applications with a more sharply defined ROI will take precedence for now. User analytics, policy administration system replacement, and underwriting platforms remain areas where the ROI is clear.
To be of interest, investing in, harvesting, and processing big data sources must yield a significant ROI. It is noteworthy that as big data enters its growth stage in the insurance industry, it has yet to cause any major disruptions. But clearly, big data means big changes. Consequently – and even as the industry still searches for its first big data “killer app” – the opportunities for positive ROI in big data abound.
For more information, read the White Paper.
About the author
Philippe Torres is a founding partner of InEdge, a consultancy specializing in Analytics for the insurance industry. He has worked in the industry for close to 25 years. Prior to co-founding InEdge, Philippe was employed by Sybase and then Sun Microsystems as a Solutions Architect. He has unique expertise in Analytical Solutions for the Property & Casualty and Life insurance industries, as well as in R&D in the fields of Data Warehousing and Business Intelligence.
About InEdge
InEdge is a leader in Insurance Analytics solutions. Expert at quickly leveraging data, InEdge creates powerful business advantage for its clients. Since its creation in 1994, InEdge has designed and implemented some of the most sophisticated analytical applications available today. Our clients form an impressive roster of Property & Casualty and Life insurance companies. Our Analytics solutions improve and simplify decision-making at all levels for our clients.
Source: InEdge