article · gold · Citations: 19 · 2023
Multi-Encoder Transformer for Korean Abstractive Text Summarization
Youhyun Shin
IF 3.6 · IEEE Access
Abstract

In this paper, we propose a Korean abstractive text summarization approach that uses a multi-encoder transformer. Recently, in many natural language processing (NLP) tasks, the use of pre-trained language models (PLMs) for transfer learning has achieved remarkable performance. In particular, transformer-based models such as Bidirectional Encoder Representations from Transformers (BERT) are pre-trained and then applied to downstream tasks, showing state-of-the-art performance on tasks including abstractive text summarization. However, existing text summarization models usually use one pre-trained model per model architecture, meaning that a single PLM must be chosen for each model. Among the PLMs applicable to Korean abstractive text summarization, several publicly available BERT-based pre-trained Korean models offer different advantages, such as Multilingual BERT, KoBERT, HanBERT, and KorBERT. We assume that if these PLMs could be leveraged simultaneously, better performance would be obtained. We propose a model with multiple encoders that can leverage multiple pre-trained models to create an abstractive summary. We evaluate our method on three benchmark Korean abstractive summarization datasets, namely the Law (AI-Hub), News (AI-Hub), and News (NIKL) datasets. Experimental results show that the proposed multi-encoder model variations outperform single-encoder models. We find the empirically best summarization model by determining the optimal input combination when leveraging multiple PLMs with the multi-encoder method.
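The multi-encoder idea described in the abstract can be illustrated with a minimal sketch (not the authors' released code): several encoders read the source document, their hidden states are projected into a shared dimension and concatenated, and a single decoder cross-attends over the combined memory to generate the summary. The plain nn.TransformerEncoder stacks, dimensions, vocabulary size, and class name below are illustrative placeholders; in the paper each encoder would be a pre-trained Korean BERT variant (e.g. KoBERT, HanBERT, KorBERT) with its own tokenizer and embeddings.

import torch
import torch.nn as nn

class MultiEncoderSummarizer(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_encoders=2,
                 nhead=8, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-ins for the pre-trained encoders; in the paper each would be
        # a different Korean PLM rather than a randomly initialized stack.
        self.encoders = nn.ModuleList([
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
                num_layers)
            for _ in range(n_encoders)])
        # Project every encoder's output into a shared decoder space.
        self.projections = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(n_encoders)])
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Each encoder reads the same tokenized source here; with real PLMs
        # each would receive its own tokenization of the input document.
        src = self.embed(src_ids)
        memories = [proj(enc(src))
                    for enc, proj in zip(self.encoders, self.projections)]
        # Concatenate the encoder memories along the sequence axis so the
        # decoder cross-attends to all of them at once.
        memory = torch.cat(memories, dim=1)
        tgt = self.embed(tgt_ids)
        tgt_len = tgt_ids.size(1)
        causal_mask = torch.triu(
            torch.full((tgt_len, tgt_len), float('-inf')), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.lm_head(out)

# Toy usage: batch of 2, source length 16, summary length 8.
model = MultiEncoderSummarizer()
logits = model(torch.randint(0, 32000, (2, 16)),
               torch.randint(0, 32000, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 32000])

Concatenating the encoder outputs along the sequence dimension is one simple way to let the decoder attend to all PLMs jointly; the paper's reported gains come from comparing such multi-encoder variations and input combinations against single-encoder baselines.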

Keywords
Automatic summarization, Computer science, Encoder, Transformer, Artificial intelligence, Natural language processing, Language model, Benchmark (surveying), Machine learning
Type
article
IF / Citations
3.6 / 19
Publication Year
2023