7th February 2020

Data Management: Errors of Omission and the ‘Green Automatic’ Effect


Our Product Marketing Manager, Andrew Lowerson, discusses the data management process within our organisation. Drawing on his background in the automotive industry, Andrew also goes into detail about errors of omission and the 'Green Automatic' effect, and how these relate to data management within the Financial Services sector. 


It was the Eureka moment.  Charts had been studied, volumes analysed, patterns examined.  The data was clear, it was inarguable.  That periodic uptick in the sales chart – every few years, we sold hundreds more in a sudden rush – and we were not going to miss such an opportunity.  So the organisation swung into action: "Get some more ordered!" 

But all is not always as it seems.  We'll return to our example in a moment, a real-life example from my own background, which revealed to me that data in itself has no motivation, but is subject to outside stresses and interpretations.  In short, data will always tell a story; which story depends as much on the reader as it does on the narrator. 

It is, however, clear we are addicted to these stories.  It's hardly surprising; from the TED Talks that tell us that data drives decisions to the Amazon algorithm that suggests we might like a new laptop, the omnipresence of data collection is a facet of our lives. The challenge this creates is quite different from those of the past, when a broad direction could be defined by a hypothesis and supported by a few clearly relevant data sets. Today, any enterprise of almost any size is, by its very nature, collating and processing so much information that the real issue is in deciding what to define as relevant, prior to even taking an action.  Within our company, we talk about the three steps to successful data management: 

  1. Consolidation of data points 

When you are working at scale, there will inevitably be requirements to sync up data types, remove errors, understand formats and analyse size – these are the basics of what we need to do.  Once this is completed, we have something to work with. 
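As a minimal sketch of what this consolidation step can look like in practice (the source systems, record fields and date formats here are all hypothetical, for illustration only):

```python
from datetime import datetime

# Toy records from two hypothetical source systems, with inconsistent
# ID casing, different date formats and one duplicate customer.
source_a = [{"id": "C001", "opened": "07/02/2020"},
            {"id": "C002", "opened": "01/03/2020"}]
source_b = [{"id": "c001", "opened": "2020-02-07"},
            {"id": "C003", "opened": "2020-04-15"}]

def normalise(record, date_format):
    """Sync data types: upper-case IDs, parse dates into one type."""
    return {"id": record["id"].upper(),
            "opened": datetime.strptime(record["opened"], date_format).date()}

# Consolidate: normalise each feed, keyed by ID so duplicates collapse.
consolidated = {}
for rec in (normalise(r, "%d/%m/%Y") for r in source_a):
    consolidated[rec["id"]] = rec
for rec in (normalise(r, "%Y-%m-%d") for r in source_b):
    consolidated.setdefault(rec["id"], rec)  # first occurrence wins

print(sorted(consolidated))  # → ['C001', 'C002', 'C003']
```

Once formats are unified and duplicates removed, the assessment step that follows has a single, trustworthy set of records to work from.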

  2. Assessment of information 

Once the consolidation is completed, we have so much data that decisions often need to be deferred until we can clearly define the data sources that will give us the clearest picture of the situation.  In the financial services industry, the benefits of obtaining a fully centralised view based on all accessible data are clear – a hugely improved macro-level view of an organisation's operational efficiency. However, the sheer scale of touchpoints that a typical consumer now has with the digital landscape means that deciding what not to review is often more critical.  This in itself presents the difficulty of approaching a quantity of data, from diverse origins, that is not unified – or is "unstructured". 

  3. Unstructured data sets 

If there is a true competitive advantage that remains untapped in the Trust and Corporate Services marketplace, it is in mining the quantity of unstructured data that is collected and turning it into useful, relational objects.  Gartner have estimated that only 20% of data within an enterprise is actually linked, and as such able to provide operational outputs.  The remainder sits in unlinked files, databases and images, which require additional time to analyse and cost to store – and struggle to influence decisions.  By taking steps to migrate as much of this data as possible into a structured form, insights can be enhanced exponentially. 
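One illustrative way to lift facts out of unstructured text and into a relational shape (the note, field names and pattern below are invented for the example, not drawn from any real system):

```python
import re

# A hypothetical free-text note as it might sit in an unlinked file:
# the facts are present, but nothing can query or aggregate them.
note = "Meeting 2020-02-07: client ACME Ltd agreed fee of GBP 4,500 for trust review."

# Extract named fields so the note becomes a row a database could hold.
pattern = re.compile(
    r"Meeting (?P<date>\d{4}-\d{2}-\d{2}): client (?P<client>.+?) agreed "
    r"fee of (?P<currency>[A-Z]{3}) (?P<amount>[\d,]+)")
m = pattern.search(note)

# Convert the amount to a numeric type while structuring the record.
row = {**m.groupdict(), "amount": int(m.group("amount").replace(",", ""))}
print(row)  # → {'date': '2020-02-07', 'client': 'ACME Ltd', 'currency': 'GBP', 'amount': 4500}
```

Real documents are far messier than a single regular expression can handle, but the principle is the same: once the fee is a number and the date is a date, the record can be linked, stored cheaply and used to influence decisions.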

So how does this relate back to that eureka moment at the beginning of this blog?  You see, the item in question was cars.  Early in my career I was involved in forecasting factory production volumes, specifically vehicle volumes based on customer sales.  Every few years there was a clear spike in a particular combination of engine/gearbox/colour – or more precisely here, Green Automatics.  The ordering algorithm therefore continually suggested placing this order.  Digging deeper, however, it became clear that the data was incomplete. While we had successfully consolidated volume and sales data, and believed we had accurately assessed the information, there was a problem: we had been defeated by the data.  Not necessarily by its presence, but rather by our failure to accommodate it into a workflow.   

The demand for green automatics was not market led, instead it was driven by incentives placed on aged stock that had been unable to be sold.  This information was held in numerous locations but did not feed the ordering algorithm – hence every few years the “Green Automatic” effect would re-appear.  This issue prompted an active effort to identify and integrate key information in the decision-making process, and to introduce a single source of truth across the organisation. 
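The effect can be sketched with a few hypothetical figures (invented for illustration): the same sales numbers point to opposite conclusions depending on whether the incentive data is joined in.

```python
# Hypothetical monthly sales of one spec ("green automatic"), alongside
# units shifted under an aged-stock incentive in the same period.
sales      = {"Jan": 12, "Feb": 11, "Mar": 240, "Apr": 13}
incentived = {"Jan": 0,  "Feb": 0,  "Mar": 228, "Apr": 0}

# Naive signal: total units sold. This is all the ordering algorithm
# saw, so March looks like a genuine demand spike.
naive_peak = max(sales, key=sales.get)

# Adjusted signal: subtract incentive-driven units to estimate the
# organic, market-led demand the factory should actually build for.
organic = {month: sales[month] - incentived.get(month, 0) for month in sales}
adjusted_peak = max(organic, key=organic.get)

print(naive_peak, adjusted_peak)  # → Mar Apr
```

The join itself is trivial; the hard part, as the story above shows, is knowing that the incentive data exists, where it lives, and that it belongs in the ordering workflow at all.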

When talking about data, this would be my takeaway: data offers the ability to represent the truth, but that truth is not inherent.  Influencing factors must be identified, and interpretation from industry experts remains key – these are the factors that help to avoid the "Green Automatic" effect. 


Share your thoughts or email us for more information.

Follow Andrew on LinkedIn