How Can Self-Attention Networks Recognize Dyck-n Languages? (2010.04303v1)

Published 9 Oct 2020 in cs.CL, cs.FL, and cs.LG

Abstract: We focus on the recognition of Dyck-n ($\mathcal{D}_n$) languages with self-attention (SA) networks, which has been deemed a difficult task for these networks. We compare the performance of two variants of SA, one with a starting symbol (SA$+$) and one without (SA$-$). Our results show that SA$+$ is able to generalize to longer sequences and deeper dependencies. For $\mathcal{D}_2$, we find that SA$-$ completely breaks down on long sequences, whereas SA$+$ reaches an accuracy of 58.82$\%$. We find the attention maps learned by SA$+$ to be amenable to interpretation and compatible with a stack-based language recognizer. Surprisingly, the performance of SA networks is on par with that of LSTMs, which provides evidence of the ability of SA to learn hierarchies without recursion.
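For readers unfamiliar with the task, a Dyck-2 string is a well-nested sequence over two bracket types, and the stack-based recognizer mentioned in the abstract can be written in a few lines. The sketch below is a minimal illustration under assumed bracket symbols and naming; it is not the paper's model, only the classical procedure that the learned attention maps are reported to be compatible with.

```python
# Minimal sketch of a classical stack-based Dyck-2 recognizer, for intuition only.
# The bracket alphabet {'(', ')', '[', ']'} and the function name are illustrative
# assumptions; the paper's models are self-attention networks, not this algorithm.

PAIRS = {')': '(', ']': '['}          # each closer maps to its matching opener
OPENERS = set(PAIRS.values())

def is_dyck2(s: str) -> bool:
    """Return True iff s is a well-nested string over the two bracket types."""
    stack = []
    for ch in s:
        if ch in OPENERS:
            stack.append(ch)           # push opening brackets
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                return False           # closer with no matching opener
        else:
            return False               # symbol outside the Dyck-2 alphabet
    return not stack                   # every opener must have been closed

# Example: a valid and an invalid Dyck-2 string.
print(is_dyck2("([()[]])"))  # True
print(is_dyck2("([)]"))      # False
```

Extending the PAIRS map with further bracket types gives the analogous recognizer for $\mathcal{D}_n$ in general.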

Citations (45)
