With deep learning software appearing in more and more applications across the globe, IBM has decided to offer a hardware option for deep learning as well.
The chip in question is IBM's TrueNorth, which the company claims can run a wide range of deep learning programs at an accelerated pace.
The motivation for a chip like this is that, however advanced today's deep learning programs are, most of them still run on standard, general-purpose hardware.
This mismatch between software and hardware keeps deep learning programs from reaching their full potential; even processors with a thousand cores struggle to keep up with these specialized workloads.
To understand why this divide occurs, we first have to understand the different processes involved. Most deep learning systems are built on Convolutional Neural Networks (CNNs), algorithms made up of layers of nodes, better known as neurons.
These neurons can churn through immense amounts of data, which lets them perform all sorts of tasks, such as recognizing faces or translating languages.
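At the heart of a CNN is the convolution operation: a small filter slides across the input and responds strongly wherever it finds the pattern it encodes. A minimal sketch of that core operation (a toy vertical-edge detector, written from scratch with NumPy for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and sum the element-wise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny image whose right half is bright, and a vertical-edge kernel.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

# The output peaks only at the dark-to-bright boundary in the middle.
print(conv2d(image, kernel))
```

Real networks stack many such filters in layers, with the filter values learned from data rather than hand-picked as here.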
In contrast, the initial version of the TrueNorth chip relied on Spiking Neural Networks (SNNs), chosen because these networks closely mimic the way real neurons fire in biological brains.
That fit the chip's brain-inspired design, but the clash between the two approaches is why, until now, the chip wasn't considered a viable hardware platform for mainstream deep learning programs.
Thankfully, IBM has now reworked the design: the new version of the chip runs an algorithm that allows convolutional neural networks to execute on its neuromorphic computing hardware.
By combining the two approaches, the new chip can tap far more of deep learning's potential, achieving high accuracy in most tests.
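One common way to bridge conventional networks and constrained neuromorphic hardware is to train or convert the network so its weights take only a few discrete values that the chip can represent. The sketch below shows that general idea; the function name `ternarize` and the threshold `delta` are illustrative choices of ours, not details of IBM's actual mapping procedure:

```python
import numpy as np

def ternarize(weights, delta=0.5):
    """Quantize continuous weights to the set {-1, 0, +1}:
    values within +/-delta become 0, the rest keep only their sign.
    The threshold delta is an illustrative choice, not IBM's."""
    w = np.asarray(weights, dtype=float)
    return np.where(np.abs(w) < delta, 0.0, np.sign(w))

# Small weights are pruned to zero; the rest collapse to +/-1.
print(ternarize([-1.2, -0.3, 0.1, 0.7, 2.0]))
```

Quantization like this trades a little accuracy for weights that fit the chip's simple, low-power synapses, which is the essence of running a CNN on neuromorphic silicon.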
We don’t yet know which applications or companies will adopt this new chip first, but whenever and in whatever form that happens, it may well mark the beginning of a new era in machine learning.