Unsupervised and Few-shot Parsing from Pretrained Language Models (2206.04980v1)

Published 10 Jun 2022 in cs.CL

Abstract: Pretrained language models are generally acknowledged to be able to encode syntax [Tenney et al., 2019, Jawahar et al., 2019, Hewitt and Manning, 2019]. In this article, we propose UPOA, an Unsupervised constituent Parsing model that calculates an Outside Association score solely based on the self-attention weight matrix learned in a pretrained language model as the syntactic distance for span segmentation. We further propose an enhanced version, UPIO, which exploits both inside association and outside association scores for estimating the likelihood of a span. Experiments with UPOA and UPIO disclose that the linear projection matrices for the query and key in the self-attention mechanism play an important role in parsing. We therefore extend the unsupervised models to few-shot parsing models (FPOA, FPIO) that use a few annotated trees to learn better linear projection matrices for parsing. Experiments on the Penn Treebank demonstrate that our unsupervised parsing model UPIO achieves results comparable to the state of the art on short sentences (length <= 10). Our few-shot parsing model FPIO trained with only 20 annotated trees outperforms a previous few-shot parsing method trained with 50 annotated trees. Experiments on cross-lingual parsing show that both unsupervised and few-shot parsing methods are better than previous methods on most languages of SPMRL [Seddah et al., 2013].
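
The abstract's core mechanism, using an attention-derived outside-association score as a syntactic distance for top-down span splitting, can be illustrated with a short sketch. The code below is a minimal, hypothetical rendering of that idea, not the paper's method: the `outside_association` function is an illustrative proxy for the paper's score, the learned query/key projections are not modeled, and the attention matrix is random so the example runs standalone.

```python
# Illustrative sketch only: greedy top-down span segmentation driven by an
# attention-based association score, in the spirit of UPOA. The score below
# is a hypothetical proxy, not the paper's exact formula.
import numpy as np

def outside_association(attn, i, j):
    """Mean attention mass crossing the boundary of span [i, j).

    Assumed intuition for this sketch: a true constituent attends mostly
    within itself, so low outside association marks a good span.
    """
    n = attn.shape[0]
    inside = list(range(i, j))
    outside = [k for k in range(n) if k < i or k >= j]
    if not outside:
        return 0.0
    # Attention flowing out of and into the span, averaged.
    return (attn[np.ix_(inside, outside)].mean()
            + attn[np.ix_(outside, inside)].mean()) / 2.0

def parse(attn, i=0, j=None):
    """Recursively split [i, j) at the boundary that makes both halves
    look most constituent-like (lowest combined outside association)."""
    if j is None:
        j = attn.shape[0]
    if j - i <= 1:
        return i  # single-token leaf
    best_k = min(
        range(i + 1, j),
        key=lambda k: outside_association(attn, i, k)
                      + outside_association(attn, k, j),
    )
    return (parse(attn, i, best_k), parse(attn, best_k, j))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for one head's (seq_len x seq_len) self-attention matrix;
    # in practice this would come from a pretrained encoder's attention
    # outputs. Random here so the sketch runs on its own.
    attn = rng.random((6, 6))
    attn /= attn.sum(axis=1, keepdims=True)  # row-stochastic, like softmax
    print(parse(attn))
```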

Citations (3)


