
Mask Attention Networks: Rethinking and Strengthen Transformer (2103.13597v1)

Published 25 Mar 2021 in cs.CL

Abstract: Transformer is an attention-based neural network, which consists of two sublayers, namely, Self-Attention Network (SAN) and Feed-Forward Network (FFN). Existing research explores enhancing the two sublayers separately to improve the capability of Transformer for text representation. In this paper, we present a novel understanding of SAN and FFN as Mask Attention Networks (MANs) and show that they are two special cases of MANs with static mask matrices. However, their static mask matrices limit the capability for localness modeling in text representation learning. We therefore introduce a new layer named dynamic mask attention network (DMAN) with a learnable mask matrix, which is able to model localness adaptively. To incorporate the advantages of DMAN, SAN, and FFN, we propose a sequential layered structure to combine the three types of layers. Extensive experiments on various tasks, including neural machine translation and text summarization, demonstrate that our model outperforms the original Transformer.
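
To make the Mask Attention Network (MAN) view concrete, here is a minimal sketch (assuming PyTorch, a single head, and no learned projections) of attention gated by a mask matrix M: an all-ones M recovers vanilla self-attention (SAN), an identity M restricts each position to itself in the spirit of the position-wise FFN case, and an input-independent learnable M stands in for DMAN's dynamic mask. The exact gating and normalization in the paper may differ; all function and variable names below are illustrative.

```python
import torch

def mask_attention(q, k, v, mask):
    """Sketch of a mask attention layer: raw attention scores are gated by a
    mask matrix M before row-wise renormalization. mask of all ones recovers
    standard self-attention (SAN); an identity mask lets each token attend
    only to itself (the FFN-like special case in this simplified setting)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (T, T) scaled logits
    weights = mask * scores.exp()                          # gate scores with M
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize each row
    return weights @ v

T, d = 5, 8
x = torch.randn(T, d)

san_out = mask_attention(x, x, x, torch.ones(T, T))  # static all-ones mask (SAN)
ffn_out = mask_attention(x, x, x, torch.eye(T))      # static identity mask (FFN-like)

# Illustrative stand-in for DMAN: a learnable mask in (0, 1) that training
# could shape to emphasize local neighborhoods; the paper's dynamic mask is
# more elaborate (e.g., conditioned on the input), which is omitted here.
dyn_mask = torch.sigmoid(torch.randn(T, T, requires_grad=True))
dman_out = mask_attention(x, x, x, dyn_mask)
```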

Authors (9)
  1. Zhihao Fan (28 papers)
  2. Yeyun Gong (78 papers)
  3. Dayiheng Liu (75 papers)
  4. Zhongyu Wei (98 papers)
  5. Siyuan Wang (73 papers)
  6. Jian Jiao (44 papers)
  7. Nan Duan (172 papers)
  8. Ruofei Zhang (24 papers)
  9. Xuanjing Huang (287 papers)
Citations (66)
