Accelerating Training for AI Deep Learning Networks with “Chunking”

This technique could speed up training of deep neural networks by two to four times over today’s 16-bit systems.

In IBM Research’s new paper, “Accumulation Bit-Width Scaling For Ultralow Precision Training of Deep Networks,” the researchers explain in greater depth how chunk-based accumulation makes it possible to lower the precision of accumulation from 32 bits down to 16 bits.

“Chunking” divides a long sequence of products into smaller groups, accumulates each group separately, and then adds the partial results of these groups together, yielding a significantly more accurate sum than ordinary running accumulation.
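As a rough illustration only, the sketch below mimics this idea in NumPy, using float16 as a stand-in for a reduced-precision accumulator; the chunk size, data, and function names are arbitrary choices for this example rather than anything taken from the IBM paper.

```python
import numpy as np

def naive_accumulate(products, dtype=np.float16):
    # One long running sum kept entirely in low precision: once the total
    # grows large, small incoming terms are rounded away ("swamping").
    total = dtype(0)
    for p in products:
        total = dtype(total + dtype(p))
    return float(total)

def chunked_accumulate(products, chunk_size=64, dtype=np.float16):
    # Accumulate fixed-size chunks separately, then add the chunk results.
    partial_sums = []
    for start in range(0, len(products), chunk_size):
        s = dtype(0)
        for p in products[start:start + chunk_size]:
            s = dtype(s + dtype(p))      # short low-precision sum within a chunk
        partial_sums.append(s)
    total = dtype(0)
    for s in partial_sums:               # add the per-chunk partial sums together
        total = dtype(total + s)
    return float(total)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 10,000 synthetic "products" in [0, 1); their true sum is roughly 5,000.
    products = rng.uniform(0.0, 1.0, size=10_000)
    print(f"float64 reference : {products.sum():.1f}")
    print(f"naive fp16 sum    : {naive_accumulate(products):.1f}")
    print(f"chunked fp16 sum  : {chunked_accumulate(products):.1f}")
```

Because each partial sum stays small, small terms are not swamped by a large running total, which is why the chunked result tracks the float64 reference far more closely than the single long low-precision sum.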

This allows researchers to study new networks and improve the overall efficiency of deep learning hardware.

Although further reducing the precision used for training was previously considered infeasible, IBM expects this 8-bit training platform to become a widely adopted industry standard in the coming years.

