MeetSum: Transforming Meeting Transcript Summarization using Transformers! (2108.06310v1)

Published 13 Aug 2021 in cs.CL and cs.LG

Abstract: Creating abstractive summaries from meeting transcripts has proven challenging due to the limited amount of labeled data available for training neural network models. Moreover, Transformer-based architectures have been shown to outperform state-of-the-art models in summarizing news data. In this paper, we utilize a Transformer-based Pointer Generator Network to generate abstractive summaries for meeting transcripts. This model uses two LSTMs as an encoder and a decoder, a Pointer network that copies words from the input text, and a Generator network that produces out-of-vocabulary words (hence making the summary abstractive). In addition, a coverage mechanism is used to avoid repetition of words in the generated summary. First, we show that training the model on a news summarization dataset and testing it on the meeting dataset with zero-shot learning produces better results than training it on the AMI meeting dataset alone. Second, we show that first training this model on out-of-domain data, such as the CNN-DailyMail dataset, followed by a fine-tuning stage on the AMI meeting dataset, significantly improves its performance. We test our model on a test set from the AMI dataset and report the ROUGE-2 score of the generated summaries for comparison with previous literature. We also report the Factual score of our summaries, a better benchmark for abstractive summaries, since ROUGE-2 is limited to measuring word overlap. We show that our improved model outperforms previous models by at least 5 ROUGE-2 points, which is a substantial improvement. A qualitative analysis of the summaries generated by our model shows that they are human-readable and indeed capture most of the important information from the transcripts.
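
The pointer-generator mixture and coverage penalty described in the abstract follow the standard formulation of See et al. (2017). Below is a minimal sketch, assuming PyTorch; the function and tensor names are illustrative, not the authors' implementation.

```python
import torch

def final_distribution(vocab_dist, attn, p_gen, src_ids):
    """Mix the Generator's vocabulary distribution with the Pointer's
    copy distribution over source tokens.

    vocab_dist: (batch, extended_vocab)  softmax over the (extended) vocabulary
    attn:       (batch, src_len)         attention weights over source tokens
    p_gen:      (batch, 1)               probability of generating vs. copying
    src_ids:    (batch, src_len)         source token ids in the extended vocab
    """
    gen = p_gen * vocab_dist
    copy = (1.0 - p_gen) * attn
    # Scatter-add the copy probabilities onto the vocabulary positions of the
    # source tokens, so a word can come either from generation or from copying.
    return gen.scatter_add(1, src_ids, copy)

def coverage_loss(attn, coverage):
    # Penalize re-attending to source positions whose accumulated attention
    # (the coverage vector) is already high; this is what discourages the
    # decoder from repeating words in the generated summary.
    return torch.sum(torch.minimum(attn, coverage), dim=1).mean()
```

At each decoding step, `coverage` is the running sum of the attention distributions from all previous steps, updated as `coverage = coverage + attn`.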
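Since the paper evaluates with ROUGE-2, here is a sketch of how such a score can be computed with the `rouge-score` package; the two strings below are placeholders, not examples from the AMI test set or the authors' evaluation harness.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
scores = scorer.score(
    "the project manager opened the meeting and reviewed the agenda",   # reference
    "the project manager opened the meeting and presented the agenda",  # generated
)
print(scores["rouge2"].fmeasure)  # bigram-overlap F1
```

Note that ROUGE-2 only measures bigram overlap with the reference summary, which is why the paper also reports a Factual score.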

Citations (1)
