Jul 11, 2017
The article below was published in the Sunday Business Post on the 25th of June.
Big data is everywhere and nowhere: despite the breathless reporting, most businesses are a long way away from making use of the data they already have – and without unified data sources, it will be expensive for them to even dip a toe in the water.
Edward Charvet, director of information insights at Logicalis, said that this need not be the case. Data virtualisation, he said, can be an important first step toward making decisions about how to use data – and to what end.
“Most of the organisations we have worked with – retail, banks, construction, mining – everyone has taken a data virtualisation layer from us,” said Charvet.
The point of data is not merely to collect it; it is to use it, and data virtualisation allows for data to be managed and manipulated without the need for the application to concern itself with that data’s particular format or its location.
Charvet said that this can result in a qualitative leap.
“We practically take on the conversation with the organisation. They often say: ‘This is all fine, but it’s complicated.’ And it is; there’s no integration, so even if they have a data warehouse, and even if that data warehouse is extremely well organised, they’re getting answers by tailoring the questions,” he said.
Data virtualisation allows for the first step to be taken: asking if the data can be put to practical use in a way that expands a business and allows for the discovery of new potential profit centres and modes of operation.
“Say HR data is kept separately from other sources and the business wants to experiment with combining them. With virtualisation the structure of the data is held within each repository and the business can very quickly create views to see if it can create any value, and whether or not to operationalise it,” said Charvet.
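The scenario Charvet describes – querying separately held repositories through a single view, without first copying everything into a warehouse – can be sketched in miniature. The following is a hypothetical illustration using SQLite's attached databases as stand-ins for two separate repositories; the table names, columns and figures are invented for the example.

```python
import sqlite3

# One connection acts as the "virtualisation layer"; each attached
# database plays the role of a separately held repository.
con = sqlite3.connect(":memory:")
con.execute("ATTACH ':memory:' AS hr")     # stand-in for the HR system
con.execute("ATTACH ':memory:' AS sales")  # stand-in for the sales system

# The structure of the data stays within each repository.
con.execute("CREATE TABLE hr.staff (emp_id INTEGER, department TEXT)")
con.execute("CREATE TABLE sales.orders (emp_id INTEGER, amount REAL)")
con.executemany("INSERT INTO hr.staff VALUES (?, ?)",
                [(1, "Retail"), (2, "Online")])
con.executemany("INSERT INTO sales.orders VALUES (?, ?)",
                [(1, 120.0), (1, 80.0), (2, 200.0)])

# A temporary view combines both sources on the fly: nothing is copied,
# restructured or operationalised until the view proves its value.
con.execute("""
    CREATE TEMP VIEW combined AS
    SELECT s.department, SUM(o.amount) AS revenue
    FROM hr.staff AS s
    JOIN sales.orders AS o ON s.emp_id = o.emp_id
    GROUP BY s.department
""")

for row in con.execute("SELECT * FROM combined ORDER BY department"):
    print(row)
```

If the combined view turns out not to create value, the business simply drops it; the underlying repositories are untouched either way.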
Without virtualisation, this task would be slow and cumbersome, and a major leap of faith besides: it would demand gut-level inspiration, a willingness to back that inspiration with significant investment in technology and processes, and the freedom to walk away if the newly combined data did not produce viable results.
Business needs must come first, Charvet said, and data virtualisation gathers data together for decisions made by humans, not merely by algorithms, which tend to produce efficient but limited results.
“Technology makes the data as available as possible, but the limitation is in how people interpret it,” he said.
“There are people who, historically, called themselves data analysts, but these days may call themselves data scientists, and they will have an active interest in business need; they’re expecting IT to be able to provide them with a data structure that they can use.”
This human element is what allows data to expand the business, rather than simply drive efficiencies along existing lines of operation.
It also means that failure can become part of the process as costs are lowered and the speed of recovery is vastly improved: dead ends become learning opportunities, rather than months- or years-long white elephants.
“Data has to be worked with and failure is part of the process. With the new technologies, you can get things wrong without it costing you very much. Working with data isn’t cheap, and you have to have enough of it, but if you do, the numbers become quite small if you get it right,” Charvet said.