Wednesday, August 23, 2017


Build faster machine learning applications



If you already work with machine learning (ML) applications and deep learning (DL) models and find the size of your training data sets a challenge, there is a new processor from Intel that can help.

The second-generation Intel® Xeon Phi™ Processor x200, code-named Knights Landing, has several key features that make it well suited to ML workloads:

– The Intel Xeon Phi chip is a massively multicore processor available in a self-boot socket, which eliminates the need to run an OS on a separate host and to pass data across the PCIe bus.

– Up to 72 processor cores, each with two Intel® Advanced Vector Extensions 512 (Intel® AVX-512) vector processing units, deliver high per-core floating-point performance (a short intrinsics sketch follows this list).

– Up to 16 GB of high-bandwidth integrated MCDRAM feeds data to the cores quickly and supplements platform memory of up to 384 GB of commodity DDR4. Because the MCDRAM can be configured as a cache, as directly addressable memory, or as a mix of the two, programmers can control which data live in the fast memory and when (an allocation sketch also follows this list).
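To make the dual AVX-512 vector units concrete, here is a minimal sketch (not from the article) of a single-precision multiply-accumulate loop written with AVX-512 intrinsics. The function name and the SAXPY-style operation are illustrative only; building it requires an AVX-512-capable compiler flag such as -xMIC-AVX512 with the Intel compiler or -mavx512f with GCC.

```c
#include <immintrin.h>
#include <stddef.h>

/* y[i] = a * x[i] + y[i], processing 16 floats per iteration in one
 * 512-bit register; on Knights Landing each core has two vector units
 * that can execute such fused multiply-add operations. */
void saxpy_avx512(size_t n, float a, const float *x, float *y)
{
    const __m512 va = _mm512_set1_ps(a);
    size_t i = 0;

    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(&x[i]);
        __m512 vy = _mm512_loadu_ps(&y[i]);
        vy = _mm512_fmadd_ps(va, vx, vy);   /* vy = a*vx + vy */
        _mm512_storeu_ps(&y[i], vy);
    }

    for (; i < n; ++i)                      /* scalar remainder */
        y[i] = a * x[i] + y[i];
}
```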
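The MCDRAM placement described above is commonly reached through the memkind library's hbwmalloc interface. The sketch below is an illustration under assumptions rather than code from the article: the batch buffer and its size are hypothetical, and the program must be linked against memkind (-lmemkind).

```c
#include <hbwmalloc.h>   /* hbw_* allocator from the memkind library */
#include <stdio.h>

int main(void)
{
    size_t n = 1 << 24;  /* ~64 MB of floats; hypothetical batch size */

    /* hbw_check_available() returns 0 when high-bandwidth memory
     * (MCDRAM) is exposed on this node. */
    if (hbw_check_available() != 0)
        fprintf(stderr, "note: no MCDRAM detected, using DDR4\n");

    /* With the default HBW_POLICY_PREFERRED policy, hbw_malloc() places
     * the buffer in MCDRAM when possible and falls back to DDR4 otherwise. */
    float *batch = hbw_malloc(n * sizeof *batch);
    if (!batch) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* ... fill batch and hand it to the training kernel ... */

    hbw_free(batch);
    return 0;
}
```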

To find out more about the Intel Xeon Phi Processor x200, take a look at the article here.

You can also learn how to migrate applications from the previous-generation Knights Corner coprocessor to Knights Landing self-boot platforms.

To discover more resources to help you get the best performance for your data centre applications, visit the Intel® Modern Code part of Intel® Developer Zone.