Bad data costs your organisation money. Last year, GS1, the organisation responsible for barcoding and product identification standards, released a report that found bad data costs grocery retailers $675 million in lost sales every year.
GS1 was only looking at data housed in barcodes – static images that simply convey purchase information to the retailer. The human customers that most businesses have relationships with are more complex, and more valuable, than any barcode.
When dealing with human beings, there are many more opportunities for errors to arise in the processes of collecting, collating and analysing data, and the resulting costs to the business are far more severe.
A lost sale isn’t the only negative outcome of bad customer data. There’s also the risk of losing a loyal customer, or worse, reputational damage, as customers angered by a poor experience resulting from bad customer data take to social networks. Most estimates put the cost of bad data at between 10 and 25 per cent of an organisation’s total revenue.
Bad data also occupies an increasing amount of an IT team’s time and budget. With corporate data growing at about 40 per cent each year, IT teams already find themselves spending up to 50 per cent of their available budget on information scrap and rework projects, according to Software AG research.
In eDM campaigns alone, the average company wastes $180,000 each year on mail that doesn’t reach the right person, all thanks to inaccurate data, the report adds.
Managing data quality does not happen by accident. Nor should it be the exclusive role of the IT team. The best way to minimise the bad data that exists in an organisation is to understand the lifecycle of customer data and identify the critical points where quality issues can arise.
The typical business has three key elements of data management. The first includes business applications such as CRM, enterprise resource planning and customer information systems. The second is data processing where data is extracted, transformed and loaded, and the third is the storage where data is warehoused, integrated and analysed.
Organisations often struggle at the intersection of these elements, particularly when human interaction with data occurs, and that’s where bad data can emerge. For instance, while most organisations we speak to have invested heavily in systems that deal with data, in almost every one of those organisations we observe users manually manipulating data in Excel before it is used in business-critical applications. The capacity for human error, combined with a lack of understanding of the lifecycle of data, can create significant quality issues that often go unnoticed until it’s far too late.
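One way to reduce that reliance on manual spreadsheet clean-ups is to put simple, repeatable validation checks in front of business-critical applications. The sketch below is illustrative only – the field names (`name`, `email`, `postcode`) and rules are assumptions, not a reference to any specific system mentioned above – but it shows the kind of automated audit that catches quality issues before they propagate.

```python
import re

def validate_customer(record):
    """Return a list of quality issues found in one customer record.

    Field names and rules here are hypothetical examples of the checks
    an organisation might run before loading data into a CRM or
    campaign tool.
    """
    issues = []

    # A deliberately loose email shape check: something@something.tld
    email = (record.get("email") or "").strip()
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        issues.append("invalid email")

    # Example rule: four-digit (Australian-style) postcode
    postcode = str(record.get("postcode") or "").strip()
    if not (postcode.isdigit() and len(postcode) == 4):
        issues.append("invalid postcode")

    if not (record.get("name") or "").strip():
        issues.append("missing name")

    return issues

def audit(records):
    """Split records into clean ones and ones flagged for review."""
    clean, flagged = [], []
    for record in records:
        problems = validate_customer(record)
        if problems:
            flagged.append((record, problems))
        else:
            clean.append(record)
    return clean, flagged
```

Running an audit like this at each hand-off point in the data lifecycle, rather than fixing records by hand in a spreadsheet, makes quality issues visible early and keeps the fix repeatable.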
Once bad data is discovered within an organisation, the immediate next step tends to be implementing a point data quality solution. These point solutions often simply kick the proverbial can down the road: a fix for any one application is a temporary measure that will, in an absolute best case, clean up all the existing bad data, but will not save the organisation from future waves of it. It’s often the only data quality initiative an organisation will undertake, because there is a persistent attitude within business that managing data is the role of the IT team.
How can an organisation prevent this massive cost wreaking havoc through the business? The answer is to tackle data integrity end-to-end, and to ensure that all parts of the business that touch data through its lifecycle are responsible for maintaining data quality.
If you’re managing people who deal with data, such as analysts and campaign managers, a useful starting point is to understand how much time they’re spending manipulating data versus generating insights from that data. If a disproportionate amount of time is being spent manually transforming data, the chances are you have a data quality issue.
With data increasingly being exploited as an asset to drive competitive advantage, we shouldn’t be leaving its management to the IT team. Marketing efforts can be harmed significantly as a result of bad data – from poor campaign response rates to brand and reputational damage.
So it’s the marketing team that needs to be the driver behind understanding what causes bad data to work its way into the organisation, and take a lead in seeking to minimise its impact wherever possible.
James Forbes is the head of marketing and digital for Infoready. The pure-play information management and business analytics consultancy specialises in helping organisations transform data into actionable intelligence.