Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics (1905.12080v2)

Published 28 May 2019 in cs.LG, cs.AI, and stat.ML

Abstract: A recent strategy to circumvent the exploding and vanishing gradient problem in RNNs, and to allow the stable propagation of signals over long time scales, is to constrain recurrent connectivity matrices to be orthogonal or unitary. This ensures eigenvalues with unit norm, and thus stable dynamics and training. However, this comes at the cost of reduced expressivity due to the limited variety of orthogonal transformations. We propose a novel connectivity structure based on the Schur decomposition and a splitting of the Schur form into normal and non-normal parts. This allows us to parametrize matrices with unit-norm eigenspectra without orthogonality constraints on eigenbases. The resulting architecture ensures access to a larger space of spectrally constrained matrices, of which orthogonal matrices are a subset. This crucial difference retains the stability advantages and training speed of orthogonal RNNs while enhancing expressivity, especially on tasks that require computations over ongoing input sequences.
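
The following is a minimal NumPy sketch (not the authors' released code) of the connectivity structure the abstract describes: a recurrent matrix W = P (R + T) P^T built from an orthogonal basis P, a block-diagonal rotation part R whose 2x2 blocks contribute unit-modulus eigenvalue pairs e^{+/- i*theta}, and a strictly upper-triangular part T that carries the non-normal, transient dynamics. The function name make_nnrnn_matrix and the nonnormal_scale parameter are illustrative assumptions, not names from the paper.

import numpy as np

def make_nnrnn_matrix(n, rng, nonnormal_scale=0.1):
    """Sample a matrix with unit-norm eigenvalues but a non-orthogonal action."""
    assert n % 2 == 0, "even size so the diagonal splits into 2x2 rotation blocks"
    # Orthogonal change of basis P, drawn via QR of a random Gaussian matrix.
    P, _ = np.linalg.qr(rng.standard_normal((n, n)))
    # Normal part R: block-diagonal 2x2 rotations, eigenvalues exp(+/- i*theta).
    R = np.zeros((n, n))
    for k in range(0, n, 2):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        R[k:k + 2, k:k + 2] = [[c, -s], [s, c]]
    # Non-normal part T: zero on and below the 2x2 blocks (offset k=2), so
    # R + T stays block upper triangular and keeps R's unit-modulus spectrum.
    T = nonnormal_scale * np.triu(rng.standard_normal((n, n)), k=2)
    # A similarity transform preserves eigenvalues; W is generally non-normal.
    return P @ (R + T) @ P.T

rng = np.random.default_rng(0)
W = make_nnrnn_matrix(8, rng)
print(np.abs(np.linalg.eigvals(W)))  # all approximately 1.0

Because T is nonzero, W is not orthogonal: it can transiently amplify some inputs even though every eigenvalue lies on the unit circle, which is the added expressivity the abstract refers to. Orthogonal matrices are recovered as the special case T = 0, with R a pure rotation.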

Authors (7)
  1. Giancarlo Kerg (7 papers)
  2. Kyle Goyette (3 papers)
  3. Maximilian Puelma Touzel (10 papers)
  4. Gauthier Gidel (76 papers)
  5. Eugene Vorontsov (19 papers)
  6. Yoshua Bengio (601 papers)
  7. Guillaume Lajoie (58 papers)
Citations (55)
