
Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator.

Kyungmi Noh, Hyunjeong Kwak, Jeonghoon Son, Seungkun Kim, Minseong Um, Minil Kang, Doyoon Kim, Wonjae Ji, Junyong Lee, HwiJeong Jo, Jiyong Woo, Hyung-Min Lee, Seyoung Kim
Published in: Science Advances (2024)
We present the fabrication of 4K-scale electrochemical random-access memory (ECRAM) cross-point arrays for an analog neural network training accelerator, together with the electrical characterization of an 8 × 8 ECRAM array with 100% yield, showing excellent switching characteristics and low cycle-to-cycle and device-to-device variations. Leveraging these arrays, we showcase their efficacy in neural network training using the Tiki-Taka version 2 (TTv2) algorithm tailored for non-ideal analog memory devices. Through an experimental study using ECRAM devices, we investigate the influence of retention characteristics on the training performance of TTv2, revealing that the relative location of the retention convergence point critically determines the available weight range and, consequently, the training accuracy. We propose a retention-aware zero-shifting technique designed to optimize neural network training performance, particularly in scenarios involving cross-point devices with limited retention times. This technique ensures robust and efficient analog neural network training despite the practical constraints posed by analog cross-point devices.
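To make the role of the retention convergence point concrete, the following Python sketch models one plausible reading of the idea. It is not the paper's implementation: the exponential-relaxation retention model, the convergence point G_c, the time constant tau, the retention interval, and all other parameter values are assumptions chosen for illustration. It compares reading logical weights against a midpoint reference with reading them against a reference shifted to the convergence point, and prints the resulting drift of the mean weight and the available weight range in each case.

```python
import numpy as np

# Conceptual sketch of retention-aware zero-shifting; an illustrative model,
# not the paper's implementation. Assumptions: each weight is stored as a
# conductance G in [G_min, G_max] that relaxes exponentially toward a
# device-specific convergence point G_c with time constant tau, and the
# logical weight is read differentially as w = G - G_ref. "Zero-shifting"
# here means placing the reference G_ref at the retention convergence point,
# so retention loss decays weights toward logical zero instead of pushing
# the whole weight distribution toward a systematic offset.

G_min, G_max = 0.0, 1.0
G_c = 0.3                 # assumed retention convergence point
tau = 200.0               # assumed retention time constant (arbitrary units)
t = 600.0                 # retention interval to simulate

def retained(G, t):
    """Conductance after time t, relaxing exponentially toward G_c."""
    return G_c + (G - G_c) * np.exp(-t / tau)

rng = np.random.default_rng(0)
w0 = rng.normal(0.0, 0.1, size=2000)          # target logical weights

for label, G_ref in [("midpoint zero", 0.5 * (G_min + G_max)),
                     ("shifted zero ", G_c)]:
    G = np.clip(G_ref + w0, G_min, G_max)     # program weights around G_ref
    wt = retained(G, t) - G_ref               # logical weights after retention
    print(f"{label}: mean weight {np.mean(G - G_ref):+.3f} -> {np.mean(wt):+.3f}, "
          f"available range [{G_min - G_ref:+.2f}, {G_max - G_ref:+.2f}]")
```

Under these assumptions, the midpoint reference lets retention drag every weight toward a nonzero offset, while the shifted reference turns the same drift into a benign decay toward zero at the cost of an asymmetric available weight range, which is the trade-off the abstract attributes to the location of the retention convergence point.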