
Chinese Discourse Segmentation Using Bilingual Discourse Commonality (1809.01497v1)

Published 30 Aug 2018 in cs.CL and cs.AI

Abstract: Discourse segmentation aims to segment Elementary Discourse Units (EDUs) and is a fundamental task in discourse analysis. For Chinese, previous research has identified EDUs only by discriminating the functions of punctuation marks. In this paper, we argue that Chinese EDUs may not end at punctuation positions and should instead follow the definition of EDU in RST-DT. With this definition, we conduct Chinese discourse segmentation with the help of English labeled data. Using the discourse commonality between English and Chinese, we design an adversarial neural network framework that extracts common language-independent features and language-specific features useful for discourse segmentation when no, or only a small amount of, Chinese labeled data is available. Experiments on discourse segmentation demonstrate that our models can leverage common features from bilingual data and learn effective Chinese-specific features from a small amount of Chinese labeled data, outperforming the baseline models.
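
The abstract describes an adversarial framework with shared (language-independent) and language-specific feature extractors feeding an EDU boundary tagger. The paper does not spell out the architecture in the abstract, so the following is only a minimal PyTorch sketch of that general idea: a shared encoder trained adversarially against a language discriminator via gradient reversal, combined with per-language private encoders for per-token boundary prediction. All class and parameter names (AdversarialSegmenter, GradReverse, hidden sizes, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a shared/private adversarial segmenter.
# Not the paper's actual model; layer sizes and names are assumptions.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer: identity in the forward pass, negated
    (scaled) gradient in the backward pass, for adversarial training."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


class AdversarialSegmenter(nn.Module):
    """Shared + language-specific BiLSTM encoders; a language discriminator
    sees only gradient-reversed shared features, while a tagger predicts
    per-token EDU boundary labels from shared + private features."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_langs=2, n_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder: intended to capture language-independent features.
        self.shared = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Private encoders: one per language, capture language-specific features.
        self.private = nn.ModuleDict({
            "en": nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True),
            "zh": nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True),
        })
        # Language discriminator over the (gradient-reversed) shared features.
        self.discriminator = nn.Linear(2 * hidden, n_langs)
        # Boundary tagger over concatenated shared + private features.
        self.tagger = nn.Linear(4 * hidden, n_tags)

    def forward(self, tokens, lang, lambd=1.0):
        emb = self.embed(tokens)                    # (B, T, emb_dim)
        shared_out, _ = self.shared(emb)            # (B, T, 2*hidden)
        private_out, _ = self.private[lang](emb)    # (B, T, 2*hidden)
        # Adversarial branch: reversing gradients pushes the shared encoder
        # toward features the discriminator cannot use to identify the language.
        lang_logits = self.discriminator(GradReverse.apply(shared_out, lambd))
        tag_logits = self.tagger(torch.cat([shared_out, private_out], dim=-1))
        return tag_logits, lang_logits


# Example usage with random token ids (batch of 4 sequences, 30 tokens each).
model = AdversarialSegmenter(vocab_size=10000)
tokens = torch.randint(0, 10000, (4, 30))
tag_logits, lang_logits = model(tokens, lang="zh")
```

In this kind of setup, the tagging loss is computed on whichever labeled data is available (English plus any small amount of Chinese), while the discriminator loss is applied to both languages so the shared features transfer across them.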

Citations (4)
