Learnable Center-Based Quantization for Efficient Analog PIM with Reduced ADC Precision
Sang Heum Yeon, Jong Hwan Ko
Abstract

Processing-in-memory (PIM) architectures have shown significant potential for accelerating deep neural network (DNN) inference by performing matrix-vector multiplications directly within memory. However, achieving high precision typically requires high-resolution analog-to-digital converters (ADCs), which increase energy consumption and limit overall efficiency. To address this, we propose a learnable center-based quantization (LCQ) technique that minimizes the range of partial sums in PIM arrays. The reduced partial-sum range lowers the required ADC resolution, enabling accurate low-bit quantization while maintaining energy efficiency. Our framework models ADC precision constraints directly within the training process, without requiring extensive retraining. Experimental results on ResNet20 (CIFAR-10) and ResNet18 (ImageNet) demonstrate that LCQ substantially improves energy efficiency while maintaining accuracy competitive with prior techniques for efficient analog PIM.
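The abstract's core idea — concentrating partial sums in a narrow range so a lower-resolution ADC suffices — can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the 64-row column, 4-bit weights, the learned center, and the window width are all illustrative assumptions.

```python
import numpy as np

def adc_bits(value_range, lsb=1.0):
    """Bits needed to digitize `value_range` at quantization step `lsb`."""
    return int(np.ceil(np.log2(value_range / lsb + 1)))

# A 64-row PIM column with 4-bit weights (0..15) and binary inputs can
# produce analog partial sums anywhere in [0, 64 * 15] = [0, 960],
# so a full-range ADC must cover 961 levels.
full_bits = adc_bits(64 * 15)

# If training instead keeps partial sums inside a window of width 64
# around a learned center, the ADC only needs to cover that window.
center, window = 480.0, 64.0
windowed_bits = adc_bits(window)

def center_quantize(x, center, window, bits):
    """Clamp x to the learned window and digitize with 2**bits levels."""
    levels = 2 ** bits
    lo = center - window / 2
    step = window / (levels - 1)
    code = np.clip(np.round((x - lo) / step), 0, levels - 1)
    return lo + code * step
```

In this toy setting the full-range ADC needs 10 bits (ceil(log2(961))) while the windowed one needs 7 (ceil(log2(65))); out-of-window sums are clamped to the window edge, which is the accuracy cost that a learnable center would be trained to minimize.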

Keywords
Quantization (signal processing), Control theory (sociology), Signal processing, Noise (video), Analog-to-digital converter
Type
article
IF / Citations
- / 0
Publication year
2026