Adesto's CBRAM used for in-memory computing

December 20, 2018 // By Peter Clarke
A team from the Jacobs School of Engineering at the University of California, San Diego (UCSD), has used a 512kbit non-volatile memory from Adesto Technologies Corp. (Santa Clara, Calif.) to perform in-memory machine learning using analog synapses.

The memory is a Conductive Bridging RAM (CBRAM), which normally operates by forming and rupturing filaments of copper ions to represent 1s and 0s (see Adesto tips CBRAM as automotive embedded memory).

The research team has shown that the CBRAM filament can be programmed into multiple analog states and can thereby emulate biological synapses in the human brain. Such a synaptic device can perform in-memory computing for neural network training faster, and at considerably higher energy efficiency, than digital implementations that repeatedly move data between memory and processors.
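The idea can be illustrated with a small numerical sketch: weights are stored as a limited set of discrete analog conductance states, and a neuron's weighted sum is computed where the weights sit, as currents summing along the columns of a crossbar. The number of levels and all values below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical sketch: a synaptic array modeled as a matrix of analog
# conductance states. The 8-level resolution is an assumption for
# illustration, not a parameter reported by the UCSD team.
N_LEVELS = 8

def quantize(weights, n_levels=N_LEVELS):
    """Map continuous weights in [0, 1] onto discrete analog states."""
    levels = np.linspace(0.0, 1.0, n_levels)
    idx = np.round(weights * (n_levels - 1)).astype(int)
    return levels[idx]

rng = np.random.default_rng(0)
weights = rng.random((4, 3))          # 4 inputs x 3 output neurons
conductances = quantize(weights)      # stored as analog states in the array

# In-memory multiply-accumulate: input voltages applied to the rows
# produce currents that sum along each column (Ohm's and Kirchhoff's
# laws), so the weighted sum is computed without moving the weights.
inputs = np.array([1.0, 0.0, 1.0, 1.0])
output_currents = inputs @ conductances
```

The point of the sketch is that the matrix-vector product, the dominant cost of neural network training, never requires shuttling `conductances` out of the array.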

The work is described in a paper published in Nature Communications entitled "Neuro-inspired Unsupervised Learning and Pruning with Sub-quantum CBRAM arrays."

The approach uses a spiking neural network to implement unsupervised learning in hardware. On top of that, the team applied software-based pruning to make neural network training more energy efficient without sacrificing much accuracy.
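Unsupervised learning in spiking networks is commonly driven by a spike-timing-dependent plasticity (STDP) rule, in which the sign and size of a weight update depend on the relative timing of pre- and post-synaptic spikes. The article does not spell out the rule, so the following is a generic textbook STDP sketch with illustrative time constants and amplitudes, not the paper's exact algorithm.

```python
import numpy as np

# Generic STDP update for one pre/post spike pair (times in ms).
# a_plus, a_minus and tau are illustrative assumptions.
def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post: potentiate the synapse
        return a_plus * np.exp(-dt / tau)
    else:        # post fires before (or with) pre: depress it
        return -a_minus * np.exp(dt / tau)

# A causal pair strengthens the synapse; an anti-causal pair weakens it.
dw_pot = stdp_dw(t_pre=10.0, t_post=15.0)
dw_dep = stdp_dw(t_pre=15.0, t_post=10.0)
```

In an analog-synapse array, each such update would nudge a device's conductance up or down by one state rather than modify a digital weight.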

Soft-pruning is a method that identifies weights that have already matured, asymptoting to a particular value during training, and sets them to a constant non-zero value. This stops them from being updated for the remainder of training, which reduces the computation required.
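The soft-pruning step described above can be sketched as follows. The maturity test (an update falling below a tolerance) and the update rule are illustrative assumptions standing in for the paper's exact criterion.

```python
import numpy as np

# Hedged sketch of soft-pruning: weights whose updates have become
# negligible are treated as matured, frozen at their current non-zero
# value, and excluded from all further updates.
def soft_prune_step(weights, grads, frozen, lr=0.1, tol=1e-3):
    """One training step with soft-pruning.

    weights: current weight array
    grads:   per-weight update signal (e.g. a plasticity-rule output)
    frozen:  boolean mask of weights already frozen
    """
    update = lr * grads
    # A negligible update marks the weight as matured.
    frozen = frozen | (np.abs(update) < tol)
    # Frozen weights keep their constant value; the rest are updated.
    weights = np.where(frozen, weights, weights - update)
    return weights, frozen

w = np.array([0.5, -0.2, 0.8])
g = np.array([1e-5, 0.3, 1e-6])   # first and last weights have matured
w2, mask = soft_prune_step(w, g, frozen=np.zeros(3, dtype=bool))
```

Once a weight is in the `frozen` mask, its update need never be computed again, which is where the energy saving comes from.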

The CBRAM was used as a synaptic array and trained to classify handwritten digits from the MNIST (Modified National Institute of Standards and Technology) database.

The network achieved 93 percent accuracy when up to 75 percent of the weights were soft-pruned. In terms of energy savings, the team estimates that their neuro-inspired hardware-software co-design approach can eventually cut energy use during neural network training by two to three orders of magnitude compared to the state of the art.

Team leader Professor Duygu Kuzum said that she and her team plan to work with multiple memory technology companies to advance the work.

Related links and articles:

News articles:  

Embedded flash memory hosts machine learning

IBM uses phase-change memory
