BERT's output layer recognizes all hidden layers? Some Intriguing Phenomena and a simple way to boost BERT (2001.09309v2)

Published 25 Jan 2020 in cs.CL and cs.LG

Abstract: Although Bidirectional Encoder Representations from Transformers (BERT) has achieved tremendous success in many NLP tasks, it remains a black box. A variety of previous works have tried to lift the veil of BERT and understand each layer's functionality. In this paper, we find, surprisingly, that the output layer of BERT can reconstruct the input sentence when any hidden layer of BERT is fed to it directly, even though the output layer was only ever trained on the final hidden layer. This holds across a wide variety of BERT-based models, even when some layers are duplicated. Based on this observation, we propose a very simple method to boost the performance of BERT: by duplicating some layers of a BERT-based model to make it deeper (no extra training is required in this step), the model achieves better performance on downstream tasks after fine-tuning.
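
A minimal sketch of the layer-duplication idea described in the abstract, assuming the Hugging Face transformers library; the repetition pattern (repeating every encoder layer the same number of times) is an illustrative choice, not necessarily the paper's exact recipe.

```python
# Sketch: deepen a pre-trained BERT by duplicating encoder layers,
# then fine-tune the deepened model on a downstream task.
# Assumes the Hugging Face `transformers` library is installed.
import copy
import torch.nn as nn
from transformers import BertModel

def duplicate_layers(model: BertModel, repeats_per_layer: int = 2) -> BertModel:
    """Repeat each encoder layer `repeats_per_layer` times (no extra training here)."""
    new_layers = nn.ModuleList()
    for layer in model.encoder.layer:
        for _ in range(repeats_per_layer):
            # deepcopy so duplicated layers can diverge during fine-tuning
            new_layers.append(copy.deepcopy(layer))
    model.encoder.layer = new_layers
    model.config.num_hidden_layers = len(new_layers)
    return model

model = BertModel.from_pretrained("bert-base-uncased")
model = duplicate_layers(model, repeats_per_layer=2)  # 12 layers -> 24 layers
# The deepened model is then fine-tuned directly on the downstream task,
# as described in the abstract; no additional pre-training is performed.
```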

Citations (5)
