A Latency Processing Unit: A Latency-Optimized and Highly Scalable Processor for Large Language Model Inference
Seungjae Moon, Jung‐Hoon Kim, Junsoo Kim, Seongmin Hong, Junseo Cha, Minsu Kim, Sukbin Lim, Gyubin Choi, Dongjin Seo, Jong-Ho Kim, Hunjong Lee, Hyun Jun Park, Ryeowook Ko, Soongyu Choi, Jongse Park, Jinwon Lee, Joo-Young Kim
IEEE Micro (IF 2.9)
Abstract

The explosive arrival of OpenAI’s ChatGPT has fueled the globalization of large language models (LLMs), which consist of billions of pretrained parameters that embody the aspects of syntax and semantics. HyperAccel introduces a latency processing unit (LPU), a latency-optimized and highly scalable processor architecture for the acceleration of LLM inference. The LPU perfectly balances memory bandwidth and compute logic with streamlined dataflow to maximize performance and efficiency. The LPU is equipped with an expandable synchronization link that hides data synchronization latency among multiple LPUs. HyperDex complements the LPU as an intuitive software framework to run LLM applications. The LPU achieves 1.25 ms/token and 20.9 ms/token for the 1.3B and 66B models, respectively, which is 2.09× and 1.37× faster, respectively, than a GPU. The LPU, synthesized using Samsung’s 4-nm process, has a total area of 0.824 mm² and power consumption of 284.31 mW. LPU-based servers achieve 1.33× and 1.32× energy efficiency over Nvidia’s H100 and L4 servers, respectively.
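The per-token latencies and speedup factors quoted in the abstract can be turned into throughput figures with simple arithmetic. The sketch below converts ms/token to tokens/s and back-derives the implied GPU baseline latencies from the reported speedups; the derived GPU numbers are an illustration, not values stated in the paper.

```python
def tokens_per_second(ms_per_token: float) -> float:
    """Convert a per-token latency in milliseconds to throughput in tokens/s."""
    return 1000.0 / ms_per_token

# Reported LPU latencies from the abstract.
lpu_1p3b = 1.25   # ms/token, 1.3B model
lpu_66b = 20.9    # ms/token, 66B model

# Implied GPU baselines, derived from the reported speedups (assumption:
# speedup = GPU latency / LPU latency; these values are not in the abstract).
gpu_1p3b = lpu_1p3b * 2.09
gpu_66b = lpu_66b * 1.37

print(f"LPU 1.3B: {tokens_per_second(lpu_1p3b):.0f} tok/s, "
      f"GPU baseline ~ {gpu_1p3b:.2f} ms/token")
print(f"LPU 66B:  {tokens_per_second(lpu_66b):.1f} tok/s, "
      f"GPU baseline ~ {gpu_66b:.2f} ms/token")
```

At 1.25 ms/token the LPU sustains 800 tokens/s on the 1.3B model, while the 66B model runs at roughly 48 tokens/s.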

Keywords
Computer science, Inference, Latency (audio), Scalability, Parallel computing, Computer architecture, Artificial intelligence, Operating system
Type
article
IF / Citations
2.9 / 8
Publication year
2024