From News to Summaries: Building a Hungarian Corpus for Extractive and Abstractive Summarization
Abstract
Training summarization models requires substantial amounts of training data. However, for less-resourced languages like Hungarian, openly available models and datasets are notably scarce. To address this gap, our paper introduces HunSum-2, an open-source Hungarian corpus suitable for training abstractive and extractive summarization models. The dataset is assembled from segments of the Common Crawl corpus and undergoes thorough cleaning, preprocessing, and deduplication. In addition to abstractive summarization, we generate sentence-level labels for extractive summarization using sentence similarity. We train baseline models for both extractive and abstractive summarization on the collected dataset. To demonstrate the effectiveness of the trained models, we perform both quantitative and qualitative evaluation. Our dataset, models, and code are publicly available, encouraging replication, further research, and real-world applications across various domains.
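The abstract mentions deriving sentence-level extractive labels from the abstractive summaries via sentence similarity. A common way to do this (the paper's exact procedure is not specified here, so this is an illustrative sketch) is to greedily select the article sentences whose addition most increases the similarity between the selected set and the reference summary; the sketch below uses a simple bag-of-words cosine similarity as the stand-in similarity function:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def extractive_labels(article_sentences: list[str], summary: str, k: int = 3) -> list[int]:
    # Greedily assign label 1 to up to k article sentences, picking at each
    # step the sentence whose inclusion most improves similarity to the
    # reference summary. Stops early if no sentence improves the score.
    ref = Counter(summary.lower().split())
    selected = Counter()
    labels = [0] * len(article_sentences)
    for _ in range(min(k, len(article_sentences))):
        best_i, best_score = None, cosine(selected, ref)
        for i, sent in enumerate(article_sentences):
            if labels[i]:
                continue
            score = cosine(selected + Counter(sent.lower().split()), ref)
            if score > best_score:
                best_i, best_score = i, score
        if best_i is None:
            break
        labels[best_i] = 1
        selected += Counter(article_sentences[best_i].lower().split())
    return labels
```

In practice, bag-of-words cosine would typically be replaced by ROUGE overlap or embeddings from a sentence encoder, but the greedy labeling loop stays the same.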