Data Quality – Garbage in Garbage out

While reviewing the content of this course, “Data Management and Analytics”, and considering my next report topic, it occurred to me that there is a very strong central theme throughout the course – “Data”.  Ok, so this is stating the blindingly obvious, but it does underpin nearly everything in the business world.  But it’s not just data though, is it?  Data is simply a series of characters, a mixture of alphanumeric digits, until we put some context to it.  Ultimately, it’s what we do with data, and how and where we do it, that gives it any form of realistic meaning.  Buzzwords of the decade such as Big Data, Data Analytics and Business Intelligence are all reliant on data – but more importantly, on meaningful data.

The quality of the data we use determines and underpins the success, or lack thereof, of our daily decisions.  It is for this reason I believe data quality should be front and centre of the buzzwords for the decade.

Data Quality

There are well recognised papers by industry experts that advocate four core dimensions of Data Quality.  Nancy Couture, in her paper “Implementing an Enterprise Data Quality Strategy” (2013), suggested “fitness for use” as a broad definition when considering a data quality assessment programme.  In that paper, it is suggested that, rather than trying to focus on every dimension, one should start with the basics of completeness and timeliness and then move on to validity and consistency.


Below is a simple illustration of the dimensions of data quality.

As illustrated above, there are six core dimensions to data quality.

Completeness can be described as the expected comprehensiveness of the data.  Data can be complete even if optional data is missing.  For example, customer contact information should hold name, address and phone number as mandatory fields, but could hold the customer’s middle initial as an optional field.  Remember, though, that data can be complete but not accurate.
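A completeness check like this is easy to operationalise: test each record against a list of mandatory fields and ignore the optional ones. A minimal sketch in Python, with illustrative field names ("name", "address", "phone") rather than a real schema:

```python
# Mandatory fields for the (hypothetical) customer contact record.
MANDATORY = ("name", "address", "phone")

def completeness_issues(record):
    """Return the mandatory fields that are absent or empty."""
    return [f for f in MANDATORY if not record.get(f)]

# The optional middle initial is allowed to be empty; the missing phone is not.
customer = {"name": "J. Murphy", "address": "1 Main St", "middle_initial": ""}
print(completeness_issues(customer))  # ['phone']
```

Note that this only tests presence, not accuracy – a record can pass this check and still hold the wrong phone number.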


Timeliness – “Delayed data is data denied”.  Timeliness is really about having the right information at the right time.  User expectation drives timeliness.  For example, income tax returns are due on a certain date, and filing late returns incurs a penalty.  In the good old days, we went to a travel agent to book a holiday.  Nowadays, the user expectation is to be able to see real-time availability and price, and we suffer real frustration in decision making when we occasionally come across a system where real-time information is not available.  According to Jim Harris of Information Management, due to the increasing demand for real-time data-driven decisions, timeliness is the most important dimension of data quality.

Consistency of data refers to data across the organisation being in sync: identical information available across all processes and departments.  This can be difficult to achieve where multiple processing systems take information from potentially different sources; a Master Data Management (MDM) strategy seeks to address inconsistency.  In database parlance, consistency problems may arise during database recovery situations, in which case it is essential to understand the back-up methodologies and how the primary data is created and accessed.
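One simple way to surface this kind of cross-system drift is to compare the same attribute as held by two systems and report the keys that disagree. A sketch, with made-up system names and values:

```python
# Hypothetical: the same customer address as held by two different systems.
crm     = {"C001": "10 High St", "C002": "5 Quay Rd"}
billing = {"C001": "10 High St", "C002": "5 Quays Road"}

def inconsistent_keys(a, b):
    """Keys present in both sources whose values disagree."""
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])

print(inconsistent_keys(crm, billing))  # ['C002']
```

In practice an MDM hub plays this referee role: it holds the master copy and the comparison runs against it rather than pairwise between systems.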

Validity – Is the data itself valid?  Validation rules are required to ensure that data is captured in a particular manner and that the detail is valid, and that the same fields are used consistently for the same information.  Nancy Couture describes validity as “correctness” of the actual data content.  This is the concept most data consumers think about when they envision data quality.
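A validation rule is often just a format constraint applied at the point of capture. As a sketch, here is a rule expressed as a regular expression; the Eircode-like pattern is assumed purely for illustration, not taken from the official specification:

```python
import re

# Illustrative format rule: one letter, two digits, optional space,
# then four alphanumeric characters.
RULE = re.compile(r"^[A-Z]\d{2}\s?[A-Z0-9]{4}$")

def is_valid(code):
    """True if the (normalised) code matches the capture rule."""
    return bool(RULE.fullmatch(code.strip().upper()))

print(is_valid("D02 AF30"))  # True
print(is_valid("12345"))     # False
```

Applying the same rule to the same field everywhere it is captured is what keeps the field usable downstream.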

Integrity refers to data that has a complete or whole structure, i.e. overall completeness, accuracy and consistency.  Business rules define how pieces of data relate to each other, and thereby define the integrity of the data.  Data integrity is usually built into database design with the use of entity and referential integrity rules.
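The referential integrity rules mentioned above can be seen in action with an in-memory SQLite database: a foreign key constraint stops an order being recorded against a customer who does not exist. The table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
                  id INTEGER PRIMARY KEY,
                  customer_id INTEGER NOT NULL REFERENCES customer(id))""")

conn.execute("INSERT INTO customer VALUES (1, 'J. Murphy')")
conn.execute("INSERT INTO orders VALUES (100, 1)")        # fine: parent row exists
try:
    conn.execute("INSERT INTO orders VALUES (101, 99)")   # no customer 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The database itself enforces the business rule “every order belongs to a real customer”, so the application cannot quietly create orphaned rows.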

Accuracy.  Data values stored for an object are the correct values.  It may seem an obvious component of data quality, but the data that is captured needs to be correct, i.e. accurate.  There are two aspects: first, that the information is correctly recorded, without typos or data entry errors; second, that data is represented in a consistent and unambiguous form.  Consider, for example, the manner in which a date of birth is recorded – US style 12/10/1972 or European style 10/12/1972.  So when is the birthday?  Good database design should resolve issues of this nature.
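One standard resolution of the date ambiguity is to store dates in ISO 8601 form (YYYY-MM-DD), which has only one reading. A small sketch:

```python
from datetime import date

# 10 December 1972, stored unambiguously as ISO 8601.
dob = date(1972, 12, 10)
stored = dob.isoformat()
print(stored)  # 1972-12-10

# Round-trips cleanly, with no US/European guesswork.
restored = date.fromisoformat(stored)
assert restored == dob
```

“12/10/1972” could mean either December or October; “1972-12-10” can only mean 10 December 1972.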


Business Benefits

Data Quality, as a subset of Data Management, is aligned with Master Data Management (MDM) and Data Governance.  They all focus on data as an asset to the business.  Modern businesses seek a Return on Investment (ROI) from their Data Management strategies.

Data Analytics: With quality data, we can undertake sound analysis of the business and improve the quality of decision making which in turn improves business performance.  The business can investigate potentially new areas of revenue not previously considered.

Timeliness of good data and analytics affords new opportunities to reach the market with new offerings ahead of the competition.  Further competitive edge can be achieved with rapid decision turnaround, rapid reaction to market conditions.  Predictive analytics can lead to a proactive position in the marketplace.

Customer satisfaction ratings can be improved through more accurate interactions with the business.

Customer trust in the information and how it is stored is likely to be important in the future.

“Gartner predicts that 30 percent of businesses will have begun directly or indirectly monetizing information assets via bartering or selling them outright by 2016”.

Compliance: Knowing your organisational data – who, what, where, how, why and when – goes a long way towards achieving compliance, whether that is compliance with Data Protection requirements, financial regulations, Sarbanes-Oxley (SOX) or PCI Security (Payment Card Industry), or seeking to achieve ISO 8000, the International Standard for Data Quality.

This is by no means an exhaustive list of the business benefits of good Data Quality.  What about the cost to business of poor data quality?  It depends on the business.

Customers: Poor data, leading to poor marketing, sales, support or service experience will cost your business customers and revenue.

Shareholders: Data accuracy, auditability and transparency are crucial to shareholders’ trust.  Loss of trust will mean downgrading of shares and weak stock market performance.

Employee Productivity and Retention: Endless hours spent scrubbing data for report input reduces employee performance and leads to poor morale and ultimately staff churn.

The list of impacts on the business of poor quality data is endless.


Taking a step back, it is a matter of perspective.  Some aspects of Data Quality are critical to the business, others less so.  It is a matter of prioritisation and of understanding the impact, risk and/or advantage to the business of pursuing data quality.  But therein lies the Catch-22: if your data quality is not good enough, how can you make balanced, informed decisions?



Fusion Tables

Population Data of Ireland

Population Data by County Heatmap

The exercise initially calls for the creation of an Irish population heatmap.  A couple of pieces of information are required for this task.  Firstly, a dataset containing the population of Ireland broken down by county and secondly some geographical data providing the county boundaries in the form of a KML (Keyhole Mark-up Language) file.  “KML is a file format used to display geographic data in an Earth browser such as Google Earth.” (Google Developers, 2016).

The population of Ireland dataset can be found on the CSO website.  It should be pointed out that the Central Statistics Office (CSO) is a great resource with a wealth of information ready and waiting for interpretation.

The dataset was copied into a spreadsheet to prepare it for a fusion table.  It is important to familiarise oneself with the dataset to ensure a good understanding of the information being observed.  At this point, there is some scrubbing (data clean-up) to be done: the dataset contains not only county information but also provinces, a cities breakdown and a county breakdown (Tipperary – North/South).  A couple of other errors were also fixed in order to achieve the most accurate result.  Logging in to Google Fusion Tables, click Add New Table on the File menu.  This loads your data into a table, which identifies your counties on the map by way of a marker or dot.  We now need to provide some geographical information to create a visualisation of our population data.
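The scrubbing step above can be sketched in a few lines: drop the province and city rows, and merge the split county rows back together. The row labels and figures here are illustrative, not the actual CSO file layout:

```python
# Hypothetical raw rows: (label, population)
rows = [
    ("Leinster", 2_504_814),       # province - remove
    ("Dublin", 1_273_069),
    ("Dublin City", 527_612),      # city breakdown - remove
    ("Tipperary North", 70_322),   # split county - merge
    ("Tipperary South", 88_432),
]
PROVINCES = {"Leinster", "Munster", "Connacht", "Ulster"}

def scrub(rows):
    """Keep county rows only, merging North/South splits into one county."""
    counties = {}
    for name, pop in rows:
        if name in PROVINCES or name.endswith("City"):
            continue
        county = name.replace(" North", "").replace(" South", "")
        counties[county] = counties.get(county, 0) + pop
    return counties

print(scrub(rows))  # {'Dublin': 1273069, 'Tipperary': 158754}
```

In practice this clean-up was done by hand in the spreadsheet, but expressing the rules explicitly makes them repeatable when the CSO publishes the next census.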

The second dataset is the county boundary geographical information, downloaded from:   Open this file in a fusion table to examine the data.  Some errors in the data were identified and corrected, such as counties being labelled incorrectly.  From the population data table, select Merge Table and from here select the map table.  This dataset is then merged with the population data to create a population map by county.  To create the heatmap effect, the spread of population needs to be split into number ranges (buckets) – in this case six were chosen – with a lighter colour applied to the smaller population numbers and a darker colour to the highest population areas.
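The bucketing that drives the heatmap colours is a simple threshold lookup. The five boundary values below are illustrative (Fusion Tables lets you set them interactively); five boundaries give the six buckets used here:

```python
import bisect

# Five illustrative boundaries => six buckets, 0 (lightest) .. 5 (darkest).
BOUNDARIES = [50_000, 100_000, 150_000, 250_000, 500_000]

def bucket(population):
    """Return the heatmap bucket index for a county population."""
    return bisect.bisect_right(BOUNDARIES, population)

print(bucket(32_044))     # a small county lands in bucket 0
print(bucket(1_273_069))  # Dublin lands in the darkest bucket, 5
```

Choosing sensible boundaries matters: equal-width buckets would leave Dublin alone in the top bucket and almost everything else in the bottom one.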

Looks very pretty – So what?

Population by Density
Being familiar with the geography of Ireland, it may be interesting to see the population density by county.  Taking the population of each county and dividing by the county area (km²) gives the density of each county.  Merging this dataset with the county geographic information changes how our map looks.
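The density calculation is just population divided by area. The county figures below are rough illustrations rather than the CSO values, but they show the shape of the computation:

```python
# Hypothetical figures: county -> (population, area in km²)
counties = {
    "Dublin": (1_273_069, 921),
    "Cork":   (519_032, 7_500),
    "Louth":  (122_897, 826),
}

density = {name: pop / area for name, (pop, area) in counties.items()}
for name, d in sorted(density.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {d:.0f} people per km²")
```

Even with rough numbers the Cork effect is visible: a large population spread over a very large area yields a low density.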

Now we can see that the density of the Irish population appears to be at its highest along the east and south-east coast.  However, Cork, which has the second highest county population, has a relatively low density because it is the largest county by land mass.  A way to combat this anomaly could be to include city density figures, which would provide a more accurate picture of population density.

What else can we learn?

I’m curious to know more about the population of Ireland – say, for example, the index of disposable income across all counties.  The disposable income index is useful because one can see which counties are above and below the state average of 100.  The resulting visualisations of the counties above the state average and the counties below it paint a very interesting picture.  This analysis is supported by the CSO in their County Incomes and Regional GDP report for 2011, in which they find the following:

“Greater uncertainty at county level.  While the county figures involve uncertainty they do provide useful indication of the degree of variability at county level.  Dublin, Kildare, Limerick and Cork are the only counties where per capita disposable income exceeds the state average in 2011, similar to 2010.” (CSO, 2016)

But there are no real answers here so it may be useful to take the analysis a step further by examining the employment levels across the country at that time.

Disp Inc Index Above 100
Final Push for Answers

In order to examine employment levels across the country, the number of people in the labour force aged 15 or over (Census 2011) was examined.  The expectation was that there would be a wide differential in the number of people in the labour force as a percentage of county population.  However, the results are surprising, in that the labour force share per county ranges from 45% for Donegal to 51% for Dublin.
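The labour-force share is a simple ratio of labour force to total population. The figures below are illustrative, chosen only to reproduce the narrow 45%–51% range described above, not the exact census values:

```python
# Hypothetical: county -> (people in labour force, total population)
labour = {
    "Donegal": (72_000, 161_000),
    "Dublin":  (649_000, 1_273_000),
}

share = {c: in_lf / pop * 100 for c, (in_lf, pop) in labour.items()}
for county, pct in share.items():
    print(f"{county}: {pct:.0f}% of population in labour force")
```

A six-point spread between the extremes is far too narrow to explain the income gap on its own, which is what pushes the analysis towards other factors.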

% of Labour Force of Population

In many respects, no clear answer has emerged from the data as to why Dublin, Kildare, Limerick and Cork exceed the state average per capita disposable income index of 100.  One could theorise that these locations are well served by motorway networks, such as the M7 Dublin/Limerick route and the M7/M8 Dublin/Cork route, both of which pass through Kildare.

AA Motorway Routemap


This exercise as a whole has clearly demonstrated the power of Fusion Tables as a tool for data visualisation and analysis.  It’s free, relatively simple to use and yields quite powerful visual results.  Of course, a tool is only as good as the data that is put into it.  A strong understanding of the subject matter one is working with is essential in order to take account of the many contributing factors that shape the data.  A useful aphorism to bear in mind, usually attributed to Mark Twain or Benjamin Disraeli: “There are lies, damned lies and statistics”.

