Insurance-Canada.ca - Where Insurance & Technology Meet

Big Data Secrets, Including How to ‘Fail Correctly’

Big Data is a huge topic for insurers these days. A number of projects are being contemplated, with high expectations. However, Big Data projects differ from other IT projects in several respects. Some useful advice is emerging from several sources, including the value of scoped failure. The Insurance 2023 Forum will welcome several experts to its program in October to discuss Big Data with insurers and brokers.

What Is Big Data?

First, let’s define what we mean by Big Data. These data are not only much bigger than regular data; they are complex (dealing with many elements), unstructured (no predefined ordering), continually being updated, and really important to decision makers. Practitioners like to refer to the four ‘Vs’ of Big Data: Volume, Variety, Velocity, and Value.

Data produced by social media is a common example. Think of looking at the full Twitter feed or all Facebook updates and asking, ‘What does that mean?’

So, how can an insurance executive start to think about Big Data?  Some advice is emerging that sounds like real common sense.

Failure Can Be a ‘Good Start’ for Big Data Projects

This is complex stuff with lots of promise. We can’t approach Big Data in the same way as we do other insurance technology projects. Writing in the GigaOM blog, Ron Bodkin, CEO at Think Big Analytics, notes that failure can be an important starting point for projects:

“Big data, in its raw form, allows for a ‘test and learn’ approach. Companies have to create many small “failures” by developing hypotheses and examining them against the data. ….

“These ‘failures,’ part of the process of uncovering good unbiased analysis, create tremendous opportunities for companies in a number of areas: customer recommendations, risk measurements, device failure predictions and streamlining logistics to name a few.”
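Bodkin’s ‘test and learn’ approach amounts to proposing many small hypotheses and letting the data discard most of them. As a hedged illustration only (the segments, claim counts, and significance threshold below are hypothetical, not from the article), here is a minimal Python sketch using a two-proportion z-test:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: does segment A's rate differ from B's?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical hypotheses: each proposes that a customer segment has a
# claim rate different from the baseline. Unsupported hypotheses are
# cheap 'failures'; survivors warrant deeper analysis.
baseline = (120, 10_000)  # 120 claims in 10,000 policies
hypotheses = {
    "urban drivers claim more": (90, 5_000),
    "telematics users claim less": (45, 5_000),
}
for name, (claims, policies) in hypotheses.items():
    z, p = two_proportion_z(claims, policies, *baseline)
    verdict = "keep exploring" if p < 0.05 else "discard (a useful 'failure')"
    print(f"{name}: z={z:.2f}, p={p:.3f} -> {verdict}")
```

The point is not the particular test but the workflow: many cheap, scoped experiments, most of which are expected to fail.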

Lessons From the Past for Insurers

While Big Data projects are different animals, we can take lessons from past initiatives. Writing in Insurance Networking News, Joe McKendrick notes that there are people and technology challenges that we have encountered before, but not with the same risk, import, and urgency:

“In big data scenarios, you have managers not trained in statistics making bet-the-business decisions based on data of unknown quality originating from unvetted sources. You have MBAs that think they can push a button to have insights auto-generated for them. What could possibly go wrong with that?”

McKendrick suggests common sense steps that can mitigate the risks while allowing expeditious progress, including:

  • Start with small pilot projects to measure and attempt to capture some of this data.
  • Work closely with business units to determine what kind of data analysis or access will solve particular problems.
  • Build business analysis capabilities.
  • Encourage critical thinking among business users of the data.
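A small pilot along these lines often starts by simply profiling the quality of an incoming feed before anyone bets a decision on it. The sketch below is a minimal, hypothetical example (the field names and sample records are invented for illustration):

```python
from collections import Counter

def profile(records, required_fields):
    """Minimal data-quality profile for a pilot: per-field missing
    counts and the fraction of records with all required fields."""
    missing = Counter()
    clean = 0
    for rec in records:
        gaps = [f for f in required_fields
                if rec.get(f) in (None, "", "N/A")]
        missing.update(gaps)
        if not gaps:
            clean += 1
    total = len(records)
    return {
        "records": total,
        "clean_fraction": clean / total if total else 0.0,
        "missing_by_field": dict(missing),
    }

# Hypothetical policy records from an unvetted source
feed = [
    {"policy_id": "P1", "postal_code": "M5V 1A1", "premium": 950},
    {"policy_id": "P2", "postal_code": "", "premium": 1200},
    {"policy_id": "P3", "postal_code": "K1A 0B1", "premium": None},
]
print(profile(feed, ["policy_id", "postal_code", "premium"]))
```

Even a report this crude gives business users something concrete to question, which is exactly the critical thinking McKendrick calls for.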

More Common Sense on Big Data Available @ Insurance 2023

Both McKendrick and Bodkin note there are a lot of organizations offering help with Big Data, but that users should be judicious, as there is a fair amount of misinformation going around. There is a unique opportunity coming up at the Insurance 2023 Forum in Toronto on October 3, 2013 to have a conversation with two of the recognized Big Data experts in the Canadian P&C community.

Greg Purdy, Managing Partner at getClarity, will be presenting on the importance and correct use of data in the P&C industry in Canada. Colin Smith, Senior Vice President at OPTA, will be discussing the use of a variety of tools for measurement, monitoring, and prediction.

Details on the forum and registration information can be found at the Insurance 2023 Agenda Page.

Meanwhile, we’d like your thoughts on and experiences with Big Data and Insurance.

