Intel revealed new details of upcoming high-performance AI accelerators
Intel unveiled its latest processor, the company's first chipset designed for artificial intelligence and machine learning in large computing centers. The Intel Nervana NNP-I, code-named Spring Hill, is purpose-built for inference and designed to accelerate deep learning deployment at scale. It introduces specialized, leading-edge deep learning acceleration while leveraging Intel's 10nm process technology with Ice Lake cores to deliver strong performance per watt across all major data centers.
Intel confirmed that Facebook has already begun to use the processor.
“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources,” said Naveen Rao, Intel vice president and general manager of the Artificial Intelligence Products Group.
“Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications,” he added.
This is a milestone for Intel. Turning data into information, and then into knowledge, requires hardware architectures and complementary packaging, memory, storage, and interconnect technologies that can evolve to support emerging and increasingly complex use cases and AI techniques.
Dedicated accelerators like the Intel Nervana NNPs are built from the ground up with a focus on AI, to provide customers the right intelligence at the right time.