Thursday, November 23, 2017


Fujitsu CTO: Don’t expect dedicated data centres for AI



Dr Joseph Reger, Chief Technology Officer EMEIA at Fujitsu, talks to João Marques Lima about the future of data centres powered by non-human intelligence.

The age of artificial intelligence (AI) is approaching the data centre at speed. Globally, investment in AI is predicted to top $36.8bn by 2025, up from $643.7m in 2016, according to intelligence firm Tractica.

Data centres are also set to jump on the multi-billion dollar bandwagon over the coming years as non-human intelligence enters data halls across the world.

The use of machine learning, automation, special algorithms such as genetic algorithms, neural networks, deep learning and other AI technologies is believed to make data centres run better, from both an infrastructure and a workload perspective.

However, “there will be no dedicated data centres for AI, but all data centres will run AI”, warns Dr. Reger.

“Currently, cloud data centres are essentially Intel processors running Windows and, largely, the Linux operating system (OS).

“There will, however, be special hardware coming, because the above is the standard hardware and the standard OS. There are applications today that are just becoming very important and need more compute power than the standard architectures can deliver, and these are the applications of AI.”

 

Putting machine learning algorithms to work

AI applications will need compute power beyond what is available on the market today. A common technology already deployed in data centres is machine learning, which is now being advanced through the use of neural networks: computing systems that imitate biological nervous systems, such as the human brain, in order to absorb and process data.

Dr. Reger said: “The way most of the machine learning algorithms work, using neural networks, is that there is a phase in the beginning where you do the training of the system yourself.

“When that is done, you just run it [it takes a bit of human work to train the network in the beginning, in some cases they are totally automated systems as well].”

In a second phase, the technology is put into service, in what some people refer to as inference, “in the sense that in that case you infer what the decision is or should be using the system”.

“Inference is actually not that demanding; it can be done on standard hardware. But the learning is very demanding, with a lot of computation, and as you make your system bigger or deeper [deep learning], the computation requirements grow exponentially, so the time has come to build special hardware.”
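To make the two phases concrete, here is a minimal, generic sketch in plain NumPy. It is not Fujitsu's stack or the accelerators discussed later, and all names and figures in it are illustrative: training loops over forward and backward passes many times, while inference is a single forward pass.

```python
# Minimal sketch (illustrative only, not Fujitsu's implementation): a tiny
# NumPy neural network showing why training costs far more compute than inference.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 256 samples, 32 features, binary labels (hypothetical).
X = rng.normal(size=(256, 32))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# One hidden layer with 64 units.
W1, b1 = rng.normal(scale=0.1, size=(32, 64)), np.zeros(64)
W2, b2 = rng.normal(scale=0.1, size=(64, 1)), np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, p

# Training: many full forward AND backward passes over the data.
lr = 0.1
for epoch in range(1000):
    h, p = forward(X)
    grad_out = (p - y) / len(X)               # gradient of the mean log loss
    gW2, gb2 = h.T @ grad_out, grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (h > 0)      # backprop through the ReLU
    gW1, gb1 = X.T @ grad_h, grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Inference: a single forward pass per query -- cheap enough for standard hardware.
_, prediction = forward(X[:1])
print(prediction.item())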

 

Future lies in gaming

Unsurprisingly, the pillars for the next wave of innovation are in gaming and 3D graphics, more precisely in graphics processing units (GPUs), which have been found to be a good tool for the training phase of AI.

“One of the reasons why that is possible, is because graphics are done by several units such as an Nvidia processor, with many execution units that can work in parallel,” explained Dr. Reger.

Parallel execution enables one task to be divided into multiple smaller tasks that are worked on at the same time, making the processing of data faster.
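The same principle can be sketched on a CPU with a process pool: one job is split into chunks that run at the same time and are then combined. This is only an illustration of the idea, not GPU code, and every name in it is hypothetical; GPU parallelism works at a far finer grain.

```python
# Minimal sketch of parallel execution in general (a CPU process pool, not GPU
# code): one job split into independent chunks processed at the same time.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker handles its own slice of the data independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    step = len(data) // n_workers
    # Split the single task into n_workers smaller tasks.
    chunks = [data[i * step:(i + 1) * step] for i in range(n_workers)]

    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        # The partial sums run concurrently; the final addition is serial.
        total = sum(pool.map(partial_sum, chunks))
    print(total)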

“In AI for neural networks you can do the same thing. They can run in parallel and therefore there is a movement to put large boxes filled with GPUs into data centres so that a cloud service can be established offering that kind of superior performance, for example, for AI purposes.”

Adding the cloud to this scenario will accelerate the training phase of these systems even more.

However, Dr. Reger advises that if such a process is conducted, people need to pay attention to the parallel execution of the whole learning process, and parallel processing comes with a problem.

“The problem is that nothing is perfectly parallel. If you have two units instead of one, you are likely to get double the performance; if you have four, you might get 3.8 times. If I give you 128, you are not getting more than 60.

“This is because the curve does not go in a linear fashion as there are always parts of any problem to be computed that cannot run in parallel and that will slow the system down.”
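The slowdown Dr. Reger describes is usually modelled with Amdahl's law. The short sketch below uses an assumed 1% serial fraction, a hypothetical value chosen only to roughly match the figures quoted above, to show how the speedup flattens out as units are added.

```python
# Amdahl's law sketch: speedup = 1 / (s + (1 - s) / N), where s is the share
# of the work that cannot be parallelised. The 1% serial fraction below is a
# hypothetical value chosen only to roughly match the figures quoted above.
def speedup(units, serial_fraction=0.01):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / units)

for n in (2, 4, 128):
    print(f"{n:>3} units -> roughly {speedup(n):.1f}x faster")
# 2 units give ~2.0x and 4 give ~3.9x, but 128 give only ~56x, because the
# serial part of the problem never speeds up.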

 

IT doesn’t stop here

The fact that the returns from adding units diminish as their number increases brings yet another issue to the table.

To mitigate this, Fujitsu has built libraries for AI that run on these many-processor systems. The company believes this will partly solve the scaling issue because the software exploits the parallel nature of the hardware.

“We call this the General-Purpose GPU (GP-GPU). These GP-GPUs are, however, not enough, and Fujitsu is developing software that runs them in a very parallel fashion.”

With that, Dr. Reger was referring to Deep Learning Units (DLUs), hardware accelerators for AI that the company claims outperform even parallel execution on GPUs.

DLUs were invented by Fujitsu, which has also trademarked the name. The solutions are set to launch in Q3 or Q4 2017.

“That will change the face of the data centre.”

 

This article originally appeared in the Data Economy magazine. To read more on data centres, cloud and data, visit here.