The Cost of Ignoring the Metadata Opportunity

By Roy Pollack, X by 2

Toronto, ON (Nov. 3, 2017) – Let’s get one thing out of the way up front: the point of this article is not to sell anyone’s Metadata Tool Suite, nor anyone’s Metadata Consulting Services. It is, however, meant to sell something, and that something is an idea: that some of the costs associated with maintaining metadata are not only justifiable but in fact constitute a worthy investment.

So let’s clarify what we mean by “Metadata.” The classic definition says it is “data about data.” Not very helpful, is it? This, however, may sound more familiar: “Get me a report on Gross Sales by Fiscal Quarter, and also include Net Sales by Calendar Quarter.” The expert knowledge needed to produce these reporting metrics accurately can be maintained as metadata.

Notice the part of the statement indicating that it “can be maintained.” If that’s the case (and it is), why don’t more companies invest in doing just that? Why do they choose to reinvent the wheel each time someone asks for these reports?

Perhaps the reaction is that everyone involved knows what is meant here, and there is no need to explain how to go about building this report. In a two-person company that might be true, and any effort to maintain metadata would be a waste of time and resources. But most companies have hundreds or thousands of people in different roles with varied responsibilities. There is employee turnover, along with new hires and transfers of duties. There are likely various permutations of the old way, the new way, and everything in between regarding how that data is maintained across multiple systems. And there are the two, three, or more synonyms that all mean the same thing when it comes to this data. Or do they? Nobody is quite sure.

Perhaps the pain of an inaccurate report that had to be redone, or worse yet, one that was shared and used to make inappropriate business decisions, hasn’t yet been severe enough. But this scenario has probably already happened in your company and will happen again. The cost to the business can vary greatly: it may be only the effort to rework the report numbers so they are correct, or it could be a very public embarrassment and an associated financial hit in the millions.

On the other hand, maybe your company is cautious and thorough in testing report numbers, and the above almost never happens. What does inevitably occur, though, is that the business grows and the systems evolve with new technologies. Mergers and acquisitions happen, and the resulting new data is integrated with the company’s existing data. And as mentioned, new employees need to learn and understand all of this data, while IT teams need to build new system integrations and convert old systems to new ones. All of these data activities require the resident expert knowledge to be transferred to the people doing the conversion work, and that transfer of knowledge carries a considerable cost.

But what is that cost, and how might it be quantified? The numbers shared below are a way to begin the metadata justification and quantification process, though in the real world any company’s actual numbers may differ. That’s why any company with a PMO or project managers who can provide more representative figures should do so and apply them to the sample framework and ratios that follow.

These numbers are representative of a typical IT project. What is typical? There are IT industry standards we can use as average ratios for the cost and effort of implementing data integration and reporting system projects. These ratios can be pulled from numerous sources, including Toolbox.com, whose figures I have used in the following sample tables.

For example, the $700 cost-per-day rate (also known as the burn rate) is somewhat arbitrary, and any individual company’s mileage may vary; this is where a particular company can substitute its own daily project expense per project type. Extending the estimated person-days by the projected daily rate provides a ballpark cost for each phase.

Table 1: Sample project phases with estimated person-days and ballpark costs at the $700 daily rate

The particular focus here is on the “Analysis” phase. This is where companies tend to reinvent the wheel over and over again. Many data and analytics projects spend fifty percent or more of their effort and budget in this phase researching, identifying, clarifying, and otherwise normalizing standard business terms from existing processes and implementations. With an investment in a metadata program, however, the vast majority of these costs could be avoided in perpetuity. In the above example, a metadata program would produce an estimated cost savings of $87,500, or half the budgeted cost for this phase.
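To make the Table 1 arithmetic concrete, here is a minimal sketch in Python. The $700 daily rate and the fifty-percent savings figure come from the example above; the 250 person-days for the Analysis phase is an assumption inferred from those figures ($87,500 being half of a $175,000 phase budget), so treat all inputs as illustrative.

```python
# Minimal sketch of the Table 1 arithmetic; all inputs are illustrative.
DAILY_RATE = 700  # sample burn rate ($ per person-day) from the example

# Assumption: 250 person-days, inferred from the article's figures
# ($87,500 savings = 50% of the phase budget, i.e., $175,000 / $700).
analysis_person_days = 250

# Ballpark phase cost: estimated person-days extended by the daily rate.
analysis_budget = analysis_person_days * DAILY_RATE  # $175,000

# A metadata program avoids roughly half of the Analysis-phase effort.
metadata_savings = 0.5 * analysis_budget  # $87,500

print(f"Analysis phase budget:      ${analysis_budget:,.0f}")
print(f"Estimated metadata savings: ${metadata_savings:,.0f}")
```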

Implementing a metadata program, and the costs of doing so, will of course vary from one company to the next. However, assuming an average cost of $50,000 for a metadata tool/repository enterprise license, plus another $50,000 to configure and initialize it, and extrapolating over multiple projects across multiple years, the cost justification is straightforward: the program pays for itself after just a few projects, maybe sooner.
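Under those sample assumptions, the break-even point is easy to compute. The sketch below simply combines the $100,000 program cost with the per-project Analysis-phase savings estimated earlier; it is a back-of-the-envelope illustration, not a pricing model.

```python
import math

# Assumed one-time program costs from the example above.
license_cost = 50_000  # metadata tool/repository enterprise license
setup_cost = 50_000    # configuration and initialization
program_cost = license_cost + setup_cost  # $100,000

# Per-project Analysis-phase savings from the Table 1 example.
savings_per_project = 87_500

# Projects needed before cumulative savings cover the program cost.
break_even = math.ceil(program_cost / savings_per_project)
print(f"Program pays for itself after {break_even} projects")  # -> 2
```

At the sample savings rate, the $100,000 outlay is recovered partway through the second project, which is what makes the “few projects, maybe sooner” claim credible.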

Additionally, for those interested in softer cost justifications, a metadata program that provides consistent and reliable business data definitions will improve the quality of every project going forward. Even acknowledging that any measurement of quality improvement is subjective, most would agree that such a program provides value. To that end, the following table offers some industry-standard justifications borne out by aggregated industry data:

Table 2: Industry-standard justifications for a metadata program

And this is just one small example of the kind of experiential data the industry could leverage to justify a metadata approach more effectively and efficiently. It could be applied to almost any program or solution for which a decent amount of analytic data exists; for the insurance industry, that is nearly every program an insurer might conceivably need. So the next time your business or financial groups push back on a request for a necessary technology program, don’t overreact and don’t get drawn into a subjective, emotional back-and-forth about its benefits and value. Instead, take a deep breath and start working the data.

About the Author

Roy Pollack is a Data Solution Architect at X by 2 who assesses and implements complex data integration, conversion, and optimization solutions that improve the understanding, analysis, and visualization of enterprise information. Pollack understands the uniqueness of each organization and is adept at using best-practice design patterns to deliver quality solutions through the most pragmatic implementation approach. Over the past 25 years, he has provided data solutions across many industries, including insurance, finance, retail, and healthcare, and has delivered numerous challenging system conversions.

About X by 2

Established in 1998 and based in Metro Detroit, Michigan, X by 2 is a technology consultancy focused on the practice of architecture in the insurance and healthcare industries. Whether P&C, Life, or Health, X by 2 knows the insurance business and has proven experience planning and delivering core insurance systems, business applications, and enterprise integrations. For more information, visit xby2.com.

Source: X by 2
