How to Build Infrastructure for Successful Enterprise AI


By Patrick Lastennet, director of enterprise, Interxion

Interxion director of enterprise Patrick Lastennet on the infrastructure businesses need to implement AI successfully and effectively.

In recent years, artificial intelligence (AI) has become a game-changer for enterprises in highly competitive markets such as robotics, financial services, and autonomous vehicles, thanks to its ability to automate repetitive processes, deliver new strategic insights, and accelerate innovation. Research from McKinsey shows that 63% of enterprises increased their revenue as a result of incorporating AI into their business processes. And as society continues its shift towards digital, adopting AI will only become more important for businesses across industries and geographies.

But even as AI adoption grows, building the infrastructure needed to support AI at scale remains a major challenge. In fact, 40% of enterprises cite lack of IT infrastructure as the primary barrier to AI implementation, and 45% say their current infrastructure cannot meet future demands for AI workloads.

Traditional machine learning methods don't necessarily require vast amounts of data. But with deep learning and the emergence of IoT and 5G, a massive and growing volume of data is being generated by factories, smart cities, driverless cars and other edge devices. Designing an infrastructure capable of leveraging that data for AI is complex. And it's important to get it right from the start, since rearchitecting or moving an AI deployment later carries significant costs in time, money and resources.

Let’s take a look at what the ideal infrastructure looks like for enterprises to enable AI at scale.

The Ideal Infrastructure for AI Workloads

In order to leverage the growing volume of data for AI, enterprises need two essential capabilities: access to the data, and the compute to process large volumes of it quickly, in near real time.

In terms of accessing data, enterprises need high connectivity to bring data from the edge into data centres, where models are analysed and built, and to send models and data back to the edge to optimise inference. This requires proximity to network nodes that aggregate data from devices in the field, offices and manufacturing facilities. Some AI workloads and use cases will be optimised for cloud, so direct cloud access must be managed in a secure, performant way. Geographic scalability is also important, allowing enterprises to support AI workloads in different locations and reduce latency for faster delivery.
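To see why proximity matters, consider that distance alone puts a hard floor under round-trip latency. A minimal back-of-envelope sketch, assuming signals travel through optical fibre at roughly two-thirds the speed of light (about 200 km per millisecond) and ignoring routing and queuing overhead:

```python
# Back-of-envelope fibre latency: distance alone sets a floor on round-trip time.
# Assumption: ~200 km/ms propagation speed in optical fibre (roughly 2/3 of c).

SPEED_IN_FIBRE_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip latency over fibre, before any processing delay."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

# A data centre 50 km from the edge vs one 2,000 km away:
print(f"{round_trip_ms(50):.1f} ms")    # 0.5 ms floor
print(f"{round_trip_ms(2000):.1f} ms")  # 20.0 ms floor
```

Real-world latency is higher once routing, queuing and processing are added, but the physics alone shows why latency-sensitive inference favours data centres close to where the data is generated.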

Once the data enters the data centre, enterprises need high-density power and cooling to support the computation involved in training models. Most enterprise data centres today cannot manage densities high enough for these workloads, and this challenge will only grow, as densities continue to rise alongside the increase in data creation. Perhaps just as important, the infrastructure needs to be highly scalable. Scalability is a key factor in the success of AI initiatives, since the ability to run GPU hardware at scale is what allows large-scale computation to deliver valuable insights.
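A quick arithmetic sketch shows how fast AI training pushes past typical enterprise rack densities. The server draw and packing figures below are illustrative assumptions, not specific product specs:

```python
# Rough rack power-density arithmetic for GPU training hardware.
# Assumptions (illustrative, not vendor specs):
#   - one 8-GPU training server draws ~6.5 kW
#   - four such servers are packed into a single rack

SERVER_KW = 6.5
SERVERS_PER_RACK = 4

rack_kw = SERVER_KW * SERVERS_PER_RACK
print(f"{rack_kw} kW per rack")  # 26.0 kW per rack
```

Many enterprise data centres were designed for single-digit kilowatts per rack, so a density in this range quickly exceeds what their power and cooling infrastructure can handle.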

Options for AI Infrastructure Deployment

Based on these requirements around connectivity, power density, and scalability, there are a few options for deployment, including an on-premises solution, public cloud or colocation.

Many companies today don’t have the resources to build this kind of environment on-premises, nor the expertise to manage it. And though an on-premises solution may look relatively inexpensive, most enterprise data centres simply aren’t capable of handling the scale and power density required for successful AI programmes.

The public cloud route also presents significant challenges when training models at scale or deploying them in production, because of latency issues and high costs. Training AI models at scale requires compute to process large data sets continuously over long periods. This kind of sustained high utilisation, coupled with data egress charges, is not an optimal use case for the cloud.
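The cost dynamic can be sketched with some rough arithmetic. All rates below are assumptions chosen for illustration, not quoted prices from any provider:

```python
# Illustrative monthly cost of sustained cloud GPU training plus data egress.
# Assumptions (not real provider prices):
#   - a multi-GPU instance costs $25/hour on demand
#   - egress is billed at $0.09/GB

GPU_INSTANCE_PER_HOUR = 25.0
EGRESS_PER_GB = 0.09

def monthly_cost(instances: int, hours: float, egress_gb: float) -> float:
    """Compute cost for instances running `hours` each, plus egress charges."""
    return instances * hours * GPU_INSTANCE_PER_HOUR + egress_gb * EGRESS_PER_GB

# Four instances running flat-out for a 30-day month, moving 50 TB out:
cost = monthly_cost(instances=4, hours=24 * 30, egress_gb=50_000)
print(f"${cost:,.0f}")  # $76,500
```

The point is not the exact figures but the shape of the bill: sustained near-100% utilisation removes the cloud's pay-for-what-you-use advantage, and egress charges grow with the very data movement that AI pipelines depend on.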

As a result, enterprises are turning to third-party colocation data centres for their AI applications and infrastructure. Colocation provides the scalable, flexible environments enterprises need to develop and scale their AI programmes, as well as support for high-performance computing, and the connectivity needed for AI to thrive at scale.

By investing in the necessary infrastructure upfront, enterprises will be able to focus on gaining insights from their data, providing value to customers and staying ahead of the competition.