We are now standing at the edge



by Rob Tribe, Regional SE Director, Western Europe, Nutanix

Phrases such as 'data gravity' and 'latency' are taking on increasing significance for businesses as they struggle to keep their competitive edge and, at the very least, maintain their levels of productivity. They matter because achieving those goals requires inverting one of the standard models of computer architecture.

Since the earliest days of computing, the standard approach has been to move the data to wherever the processors sat.

After all, the processors were the really expensive components. Now they are a dime a dozen, while the data is often generated thousands of miles from where it is processed. So it now makes both economic and operational sense to move the processing to where the data is.

That location is out at the edge of the network, the 'coalface' if you like. It is any place where the work is now being done: an office block, an oil well, a factory, a hospital, these days even a whole city.

These places not only involve serious physical distance; they are also the source of the ongoing exponential growth in data volumes.

Data files are now big: gigabyte files are run-of-the-mill and terabyte files increasingly the norm, and physics determines the rate at which they can be moved, in terms of both volume and raw speed, with the speed of light ultimately governing all.

That latency, the time between launching a file from a physically distant edge location and receiving all of it at the company data centre, can be significant, and it can weigh business processes down.

That is the effect of data gravity: overcoming it requires an increasingly serious effort, and an equally serious expenditure of energy to get the data moving at all. Perhaps worst of all, across all industry sectors, this situation is only ever going to get worse.
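To put rough numbers on that, a simple back-of-the-envelope calculation makes the point. The figures below are illustrative assumptions (a 1 TB data set, a 1 Gbit/s WAN link, an edge site 5,000 km away), not measurements of any particular network.

```python
# Back-of-the-envelope illustration of why distance and data volume hurt.
# All figures are illustrative assumptions, not measurements.

FILE_SIZE_BYTES = 1 * 10**12            # assume a 1 TB data set
LINK_BANDWIDTH_BPS = 1 * 10**9          # assume a 1 Gbit/s WAN link
DISTANCE_KM = 5_000                     # assume the edge site is 5,000 km away
LIGHT_SPEED_IN_FIBRE_KM_S = 200_000     # light travels at roughly 2/3 c in fibre

# Propagation delay imposed by physics, regardless of how fat the pipe is.
one_way_delay_s = DISTANCE_KM / LIGHT_SPEED_IN_FIBRE_KM_S

# Serialisation time: how long it takes to push every bit onto the wire.
transfer_time_s = (FILE_SIZE_BYTES * 8) / LINK_BANDWIDTH_BPS

print(f"One-way propagation delay: {one_way_delay_s * 1000:.0f} ms")
print(f"Time to move 1 TB at 1 Gbit/s: {transfer_time_s / 3600:.1f} hours")
```

Even before protocol overheads, retransmissions and contention are taken into account, moving a single terabyte over such a link takes a couple of hours, and the propagation delay alone adds tens of milliseconds to every round trip.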

Computing at the edge, therefore, is fast becoming the obvious alternative. Essentially, this revolves around that notion of moving processing to the data.

What users end up with is, in effect, a virtualised data centre distributed across their entire environment. The traditional data centre is still there, but it becomes the 'back office'.

Wherever work is being done, the required processing resources and the relevant applications are near at hand.

This has several advantages, not least that a large amount of the data traffic is increasingly concerned with the real-time management of the processes being run at that location – for example a production line in a factory, a housing estate in a smart city, or a regional office campus for a major enterprise.

This means that only the applications appropriate to those tasks need to be available, and much of the data can be stored locally. Only the final data set makes the journey to the central, back-office data centre.

A large proportion of that generated data, especially in real-time systems, has only a short life span, tied, for example, to a specific production process.


Once that process completes successfully, much of the data is no longer needed and need not be retained, so holding it local to both the production process and the compute systems makes far more sense than sending it to a central data centre some distance away.
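A minimal sketch of that pattern is shown below: short-lived production data is processed at the edge and only the final data set is forwarded to the central data centre. All of the names and figures here are illustrative, not part of any particular product.

```python
# Minimal sketch: process short-lived production data at the edge and
# forward only the final summary to the central data centre.
# All names and figures here are illustrative assumptions.

from statistics import mean


def summarise_batch(readings):
    """Reduce a batch of raw sensor readings to the 'final data set'."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }


def send_to_back_office(summary):
    """Placeholder for the hop to the central data centre (e.g. HTTPS or MQTT)."""
    print(f"shipping summary to back office: {summary}")


def run_edge_cycle(raw_readings):
    summary = summarise_batch(raw_readings)  # processed where it was produced
    send_to_back_office(summary)             # only the summary makes the journey
    # The raw readings are short-lived: once the cycle completes they can be
    # dropped locally rather than shipped to or archived in the data centre.


run_edge_cycle([20.1, 20.4, 19.8, 21.0, 20.7])
```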

Such an approach will, of course, put more responsibility on those local applications to operate as autonomously as possible.

This will open up new opportunities for the use of industrial-strength analytics tools working hand in glove with AI and machine-learning services.

These will collaborate with local implementations of core business management and ERP services, which will in effect exercise the policies set out for the individual edge systems. Autonomous and highly collaborative operation between those applications will therefore be required.
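One way to picture that autonomy is an edge node that pulls its policy from the central back office when it can, and falls back to a locally cached copy or safe defaults when it cannot. The sketch below is illustrative only; it is not a Nutanix or any vendor API, and every name, path and default value is hypothetical.

```python
# Illustrative sketch: an edge node periodically applies the policy set
# centrally, but keeps operating autonomously when the back office is
# unreachable. All names, paths and defaults are hypothetical.

import json
import time

LOCAL_POLICY_CACHE = "policy.json"   # hypothetical local cache of the policy


def fetch_central_policy():
    """Placeholder for a call to the central management plane.

    In practice this would be an authenticated request over the WAN; here
    we simulate an unreachable back office by returning None.
    """
    return None


def load_cached_policy():
    """Fall back to the last policy received, or to safe local defaults."""
    try:
        with open(LOCAL_POLICY_CACHE) as fh:
            return json.load(fh)
    except (FileNotFoundError, json.JSONDecodeError):
        return {"max_temp_c": 80, "sample_interval_s": 5}


def apply_policy(policy):
    """Enforce the policy locally, e.g. adjust thresholds on the production line."""
    print(f"enforcing locally: {policy}")


for _ in range(3):  # a few cycles for illustration; a real system loops indefinitely
    policy = fetch_central_policy() or load_cached_policy()
    apply_policy(policy)  # decisions stay local even if the WAN link is down
    time.sleep(policy["sample_interval_s"])
```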

Edge computing will become crucial in almost every application, from consumers to the heaviest of heavy industries.

But some obvious early-adopter markets are opening up. These include the energy utilities, the oil, gas and raw materials conglomerates, healthcare, autonomous systems, and every aspect of smart cities.

One of the biggest sectors across all of these, however, is the Internet of Things (IoT), that vast multitude of different devices that monitor and control any process – from turning on a light bulb through to running complete energy generation plants.

All of these devices will need to be managed, and their data outputs collected, collated, analysed and processed, with the results reported back to the back-office data centre. And the numbers involved are already huge.

The worldwide IoT spend is expected to reach $745 billion this year, with the number of connected IoT devices expected to total 30 billion.

At this point, businesses that have not yet made any move to build out their edge-computing environment will be contemplating the investment required. Given the dispersed nature of edge computing, the most economical and flexible answer will be to go the cloud route.

Its many advantages, such as flexible scaling up and down as required and the opportunity for tight operational and economic control, come with a wide and growing range of services and resources.

Smart businesses should look to harness platforms that come with a range of business applications and management services, from which they can select the tools needed to build the distributed, virtual data services that each edge point requires.

That depth of service provision, coupled with an ability to build and operate multi-cloud environments, will give businesses the flexibility to implement the precise distributed virtual data service environments they require.
