Three Key Misconceptions About Data Quality

More and more corporate boards and executives understand the importance of data for improved business performance. However, the majority of enterprise data is of poor quality. According to a report in Harvard Business Review, just 3% of the data in a business enterprise meets quality standards. Joint research by IBM and Carnegie Mellon University found that 90% of data in an organization is never successfully used for any strategic purpose.

But what is the impact of improved data quality on business? At its core, quality data translates into improved business performance. Research has found that digitally mature firms are 26% more profitable than their peers. McKinsey found that insight-driven companies report above-market growth and EBITDA (earnings before interest, taxes, depreciation and amortization) increases of up to 25%.
So, what exactly is data quality? Data is considered to be of high quality if it is fit for use in operations, compliance and decision-making. However, many myths surround data quality, and they have created lasting misconceptions. Against this backdrop, this article looks at three important data quality myths and their corresponding realities.
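Before turning to the myths, it may help to make "fit for use" concrete. Below is a minimal sketch of rule-based quality checks in Python. The table, column names and thresholds are hypothetical illustrations, not taken from any particular standard; the point is that quality is measured per dimension (completeness, validity, accuracy) and judged against the tolerance of the consuming process.

```python
import pandas as pd

# Hypothetical customer data; in practice this would come from a system of record.
customers = pd.DataFrame({
    "customer_id": [101, 102, 103, None],
    "email": ["a@example.com", "b@example", "c@example.com", "d@example.com"],
    "order_total": [120.0, -5.0, 89.5, 40.0],
})

# Each check maps a quality dimension to a boolean mask of passing rows.
checks = {
    "completeness: customer_id present": customers["customer_id"].notna(),
    "validity: email looks well-formed": customers["email"].str.contains(
        r"^[^@\s]+@[^@\s]+\.[^@\s]+$", regex=True, na=False
    ),
    "accuracy: order_total non-negative": customers["order_total"] >= 0,
}

# Report the pass rate per check. "Fit for use" means each rate clears the
# bar set by the consuming process, not a universal 100%.
for name, passed in checks.items():
    print(f"{name}: {passed.mean():.0%} of rows pass")
```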

Myth 1: Data is a business asset.

Data is a business asset only if it is managed well; otherwise, it is a liability. Data has the potential to improve a company's revenue, reduce expenses, mitigate risk and become a valuable business asset. But it has some serious limitations.
There are four common ways in which excessive data volume can turn data into a liability for the business: complexity, an increased carbon footprint, data security exposure and data privacy exposure. Overall, data is a valuable, measurable and monetizable business asset only when it is managed and processed well. Basically, data must be refined or processed to achieve value for consumers, much as crude oil is refined into products such as gasoline and diesel.

Myth 2: 100% data quality is essential for analytics.

In analytics, perfection is the enemy of progress. The truth is that 100% quality data simply doesn't exist for analytics. Data is typically originated and captured for operations and compliance in a defined, deterministic manner. But when data is used in analytics to derive insights for decision-making, the focus shifts from operations and compliance to improvement, innovation, experimentation, productivity and more.
All these initiatives are based on hypotheses, and the required data is not always available. In other words, the more forward-looking (predictive) or prescriptive (what-if) your questions are, the more likely it is that the data does not exist, given that data is always a record or evidence of a historical event. For this reason, it is often said in analytics programs that perfection impairs progress.

Myth 3: All data is important and valuable.

Data value depends on the data type and the data lifecycle (DLC) stage. In other words, the value or purpose of data is contextual and depends on time and location. First, data value depends on the data type. While data can be classified from various perspectives, one common classification divides it into reference data, master data and transactional data. It is the transactional data that carries tangible business value.
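To illustrate the three-way classification, here is a small sketch. Only the three categories come from the text; the record shapes and field names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    REFERENCE = "reference"          # e.g., country codes, currency codes
    MASTER = "master"                # e.g., customers, products, suppliers
    TRANSACTIONAL = "transactional"  # e.g., orders, payments, shipments

@dataclass
class Record:
    data_type: DataType
    payload: dict

records = [
    Record(DataType.REFERENCE, {"country_code": "CA", "name": "Canada"}),
    Record(DataType.MASTER, {"customer_id": 101, "name": "Acme Corp"}),
    Record(DataType.TRANSACTIONAL, {"order_id": 9001, "customer_id": 101, "total": 120.0}),
]

# Transactional records reference master data, which in turn leans on
# reference data; the business events (orders, payments) are where
# tangible value is realized.
transactions = [r for r in records if r.data_type is DataType.TRANSACTIONAL]
print(f"{len(transactions)} transactional record(s) out of {len(records)}")
```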
Second, data value depends on the DLC stage. The DLC in a business enterprise typically involves four stages: data capture, data integration, data science and decision science. When data is captured, it has little value, as the purpose is mainly operations and compliance. As the data moves across the DLC, its purpose and consumption expand to analytics and decision-making. Forrester found that organizations using data to derive insights for decision-making are almost three times more likely to achieve double-digit growth. The more value added, the greater the chance that data will create a lasting competitive edge.

Conclusion:
The practices for improving data quality vary from one company to the next, as the factors that drive data quality depend on a host of variables: industry, company size, operating characteristics, competitive landscape, associated risks, stakeholder groups and more. However, some best practices will go a long way toward leveraging data for improved business performance: creating and managing a data catalog, maintaining critical data in the system of record (SoR) for standard business processes, implementing robust controls over spreadsheets and other unstructured data, maintaining sound data integration solutions, carrying out regular data literacy training programs and instituting a data governance program with the right roles and responsibilities.

About the author:

Prashanth Southekal

Founder and Managing Principal of DBP Institute. I help companies transform technology and data into a valuable business asset. Read Prashanth Southekal’s full executive profile here.
