Today we continue the insideBIGDATA Executive Round Up, our annual feature showcasing the insights of thought leaders on the state of the big data industry, and where it is headed.
In today’s discussion, our panel of experienced big data executives – Ayush Parashar, Co-founder and Vice President of Engineering for Unifi Software; Robert Lee, Vice President & Chief Architect, Pure Storage, Inc.; and Oliver Schabenberger, COO & CTO at SAS – discusses how AI-optimized hardware solves important compute and storage requirements in support of AI, machine learning, and deep learning.
The conversation is moderated by Daniel D. Gutierrez, Managing Editor & Resident Data Scientist of insideBIGDATA.
insideBIGDATA: How will AI-optimized hardware solve important compute and storage requirements for AI, machine learning, and deep learning?

Ayush Parashar: Compute requirements are very important for any AI, ML, and deep learning task.
GPUs have made a huge impact on the compute side; however, intelligence is moving toward the edge, and edge intelligence is, by definition, AI. Building that intelligence directly into the chip will make rapid innovation possible.
As AI and ML move towards edge compute models, specialized hardware will soon disrupt and play a big role.
It’s easy to envision the use of AI specialized chips in smartphones and home consumer devices of every form including refrigerators, ovens and cars.
It’s exciting to see innovation around AI chips, including neural network processors, FPGAs, and neuromorphic chips.
From a storage perspective, the biggest impact has been made by use of SSDs.
Oliver Schabenberger: This is an exciting area.
For many years, analytics and data processing have followed behind advances in computing.
The importance of AI has now changed the equation.
Hardware is being designed and optimized for AI workloads.
One avenue is to increase the performance and throughput of the systems to speed up training and to enable the training of more complex models.
Graphics Processing Units (GPUs) are playing an important role to accelerate training and inference because of their high degree of parallelism and because they can be optimized for neural network operations.
So are FPGAs, ASICs, and chip designs optimized for tensor operations.
New persistent memory technology moves the data closer to the processor.
A second exciting route is the development of computing architectures that enable constant training with low power consumption, for example neuromorphic chips.
This is an important step to bring learning and adaptability to edge devices.
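The role of parallelism described above can be made concrete with a small sketch. The example below is our own illustration, not something from the panel; the layer sizes and function names are made up for the demo. It writes the same dense-layer forward pass two ways: as a scalar loop, one multiply-add at a time, and as a single tensor operation. The tensor form is what exposes the millions of independent multiply-adds that GPUs and tensor-optimized chips can execute in parallel.

```python
import time

import numpy as np

# Illustrative sketch (not from the article): the sizes below are arbitrary,
# chosen only to keep the demo quick.
batch, n_in, n_out = 256, 512, 512
rng = np.random.default_rng(0)
x = rng.random((batch, n_in), dtype=np.float32)   # activations
w = rng.random((n_in, n_out), dtype=np.float32)   # layer weights

def forward_loop(x, w):
    """One multiply-add at a time, the way a purely sequential core works."""
    out = np.zeros((x.shape[0], w.shape[1]), dtype=np.float32)
    for i in range(x.shape[0]):
        for j in range(w.shape[1]):
            acc = 0.0
            for k in range(x.shape[1]):
                acc += x[i, k] * w[k, j]
            out[i, j] = acc
    return out

def forward_tensor(x, w):
    """The same layer as one matrix multiply: millions of independent
    multiply-adds that parallel hardware can execute at once."""
    return x @ w

start = time.perf_counter()
y = forward_tensor(x, w)
elapsed = time.perf_counter() - start
print(f"{batch * n_in * n_out:,} multiply-adds as one tensor op: {elapsed:.4f}s")

# Sanity check on a tiny slice: both formulations compute the same values.
assert np.allclose(forward_loop(x[:2, :8], w[:8, :4]), x[:2, :8] @ w[:8, :4],
                   atol=1e-4)
```

Even on a CPU, the vectorized form runs orders of magnitude faster than the explicit loop, because the math is handed to hardware as one large, parallelizable operation rather than a sequence of scalar steps; specialized AI chips push that same idea much further.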
Robert Lee: AI is highly performance-oriented, and any performance system is only as fast as its slowest link.
For AI, more performance = more iterations = ability to train and refine on more data = better results faster.
The nature of deep-learning benefits greatly from specialized compute hardware (GPUs) that can drive incredible parallelism for these specific types of calculations.
Modern GPUs (led by NVIDIA) have broken through previous CPU performance limitations, enabling data to be processed far faster.
This creates a need for optimized storage (and networking) to be able to feed data quickly to those GPUs and keep them busy and extract all of the performance they are capable of.
Without optimized solutions that address each of these pillars (compute, storage and networking), you are potentially left with an unbalanced and underperforming system – like putting an F1 engine in a Toyota without changing the gearbox, tires, etc.