
Transfer Learning from Pre-trained Language Models Improves End-to-End Speech Summarization (2306.04233v1)

Published 7 Jun 2023 in cs.CL, cs.SD, and eess.AS

Abstract: End-to-end speech summarization (E2E SSum) directly summarizes input speech into easy-to-read short sentences with a single model. This approach is promising because, in contrast to the conventional cascade approach, it can utilize the full acoustical information and mitigate the propagation of transcription errors. However, due to the high cost of collecting speech-summary pairs, an E2E SSum model tends to suffer from training-data scarcity and to output unnatural sentences. To overcome this drawback, we propose for the first time to integrate a pre-trained language model (LM), which is highly capable of generating natural sentences, into the E2E SSum decoder via transfer learning. In addition, to reduce the gap between the independently pre-trained encoder and decoder, we also propose to transfer the baseline E2E SSum encoder instead of the commonly used automatic speech recognition encoder. Experimental results show that the proposed model outperforms the baseline and data-augmented models.
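
The core idea described in the abstract can be sketched in code: initialize the summarization decoder from a text-only pre-trained LM and the encoder from a baseline E2E SSum model, then fine-tune the whole network on speech-summary pairs. The following minimal PyTorch sketch is illustrative only; the checkpoint paths, module names, and hyperparameters are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the transfer-learning initialization (not the authors' code).
# Assumptions: a Transformer-based E2E SSum model in PyTorch; checkpoint paths and
# module names ("speech_encoder", "text_decoder") are hypothetical placeholders.
import torch
import torch.nn as nn


class E2ESpeechSummarizer(nn.Module):
    """Attention-based encoder-decoder that maps speech features to summary tokens."""

    def __init__(self, feat_dim=80, d_model=256, vocab_size=8000, n_layers=6):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.speech_encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.embed = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.text_decoder = nn.TransformerDecoder(dec_layer, num_layers=n_layers)
        self.output_proj = nn.Linear(d_model, vocab_size)

    def forward(self, speech_feats, summary_tokens):
        memory = self.speech_encoder(self.input_proj(speech_feats))
        dec_out = self.text_decoder(self.embed(summary_tokens), memory)
        return self.output_proj(dec_out)


def load_submodule(model, checkpoint_path, prefix):
    """Copy parameters whose names start with `prefix` from a checkpoint into `model`.

    Assumes the checkpoint uses the same parameter names as this model.
    """
    state = torch.load(checkpoint_path, map_location="cpu")
    filtered = {k: v for k, v in state.items() if k.startswith(prefix)}
    model.load_state_dict(filtered, strict=False)
    print(f"{prefix}: loaded {len(filtered)} tensors from {checkpoint_path}")


model = E2ESpeechSummarizer()

# 1) Transfer the encoder from a baseline E2E SSum model (rather than an ASR encoder),
#    so the encoder is already adapted to the summarization task.
load_submodule(model, "baseline_e2e_ssum.pt", prefix="speech_encoder")

# 2) Transfer the decoder from a text-only pre-trained language model, which is
#    good at producing natural sentences despite scarce speech-summary pairs.
load_submodule(model, "pretrained_lm.pt", prefix="text_decoder")

# The combined model is then fine-tuned end-to-end on the speech-summary pairs.
```
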

Authors (7)
  1. Kohei Matsuura (26 papers)
  2. Takanori Ashihara (28 papers)
  3. Takafumi Moriya (30 papers)
  4. Tomohiro Tanaka (37 papers)
  5. Takatomo Kano (9 papers)
  6. Atsunori Ogawa (15 papers)
  7. Marc Delcroix (94 papers)
Citations (6)
