Device-Algorithm Co-Optimization for an On-Chip Trainable Capacitor-Based Synaptic Device with IGZO TFT and Retention-Centric Tiki-Taka Algorithm.

Jongun Won, Jaehyeon Kang, Sangjun Hong, Narae Han, Minseung Kang, Yeaji Park, Youngchae Roh, Hyeong Jun Seo, Changhoon Joe, Ung Cho, Minil Kang, Minseong Um, Kwang-Hee Lee, Jee-Eun Yang, Moonil Jung, Hyung-Min Lee, Saeroonter Oh, Sangwook Kim, SangBum Kim
Published in: Advanced Science (Weinheim, Baden-Württemberg, Germany), 2023
Analog in-memory computing synaptic devices are widely studied for efficient implementation of deep learning. However, synaptic devices based on resistive memory have difficulty implementing on-chip training because there is no means to precisely control the amount of resistance change and device variations are large. To overcome these shortcomings, synapses based on silicon complementary metal-oxide-semiconductor (Si-CMOS) circuits and capacitor-based charge storage have been proposed, but Si-CMOS leakage currents make it difficult to obtain sufficient retention time, which degrades training accuracy. Here, a novel 6T1C synaptic device is proposed that uses only a capacitor and n-type indium gallium zinc oxide thin-film transistors (IGZO TFTs), which have low leakage current, enabling not only linear and symmetric weight updates but also sufficient retention time and parallel on-chip training operations. In addition, an efficient and realistic training algorithm is proposed to compensate for the remaining device non-idealities, such as drifting references and long-term retention loss, demonstrating the importance of device-algorithm co-optimization.
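
The abstract refers to the Tiki-Taka training scheme, which maintains two analog weight tiles (a fast gradient-accumulation tile A and a slowly updated tile C) and periodically transfers accumulated updates from A to C. Below is a minimal NumPy sketch of that two-tile loop for a single linear layer, with a simple exponential decay term standing in for capacitor retention loss. The layer sizes, learning rates, transfer period, and decay rate are illustrative assumptions rather than values from the paper, and the paper's retention-centric variant and 6T1C device model are not reproduced here.

    # Illustrative sketch only: a simplified Tiki-Taka-style two-tile training
    # loop with an exponential decay term mimicking capacitor retention loss.
    # All hyperparameters (gamma, lr_a, lr_transfer, transfer_every, decay_rate)
    # are hypothetical and not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression task: learn W_true from noisy single-sample updates.
    n_in, n_out, n_steps = 16, 4, 2000
    W_true = rng.standard_normal((n_out, n_in))

    # Two analog tiles: A accumulates gradients, C holds the slowly updated
    # weight. The effective weight is W = C + gamma * A.
    A = np.zeros((n_out, n_in))
    C = np.zeros((n_out, n_in))
    gamma = 0.1           # mixing coefficient (hypothetical)
    lr_a = 0.05           # learning rate for rank-one updates onto A
    lr_transfer = 0.1     # step size of the column-wise A -> C transfer
    transfer_every = 10   # transfer period in steps
    decay_rate = 1e-4     # per-step drift toward the reference level (zero)
    transfer_col = 0

    for step in range(n_steps):
        # Forward pass with the effective (composite) weight.
        x = rng.standard_normal(n_in)
        y = (C + gamma * A) @ x
        err = y - W_true @ x                  # error signal (delta)

        # Parallel outer-product update onto the fast tile A (stochastic-pulse
        # updates on real hardware; a plain rank-one update here).
        A -= lr_a * np.outer(err, x)

        # Periodic transfer: read one column of A, add it into C, and partially
        # reset that column of A.
        if step % transfer_every == 0:
            C[:, transfer_col] += lr_transfer * A[:, transfer_col]
            A[:, transfer_col] *= 0.5
            transfer_col = (transfer_col + 1) % n_in

        # Retention loss: stored charge leaks toward the reference level, so
        # both tiles drift toward zero between updates.
        A *= (1.0 - decay_rate)
        C *= (1.0 - decay_rate)

    print("final weight error:", np.linalg.norm(C + gamma * A - W_true))

In this sketch the rank-one update on A is equivalent to stochastic gradient descent on the effective weight, while the periodic column-wise transfer and the decay term illustrate, respectively, the A-to-C weight transfer and the retention loss that a retention-aware variant of the algorithm would need to tolerate.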
Keyphrases
  • deep learning
  • machine learning
  • neural network