
Measuring Sharpness in Grokking (2402.08946v1)

Published 14 Feb 2024 in cs.LG

Abstract: Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set. In this workshop paper, we introduce a robust technique for measuring grokking, based on fitting an appropriate functional form. We then use this to investigate the sharpness of transitions in training and validation accuracy under two settings. The first setting is the theoretical framework developed by Levi et al. (2023) where closed form expressions are readily accessible. The second setting is a two-layer MLP trained to predict the parity of bits, with grokking induced by the concealment strategy of Miller et al. (2023). We find that trends between relative grokking gap and grokking sharpness are similar in both settings when using absolute and relative measures of sharpness. Reflecting on this, we make progress toward explaining some trends and identify the need for further study to untangle the various mechanisms which influence the sharpness of grokking.
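The abstract describes measuring grokking by fitting "an appropriate functional form" to training and validation accuracy curves and then comparing the sharpness of the two transitions. As a minimal sketch of that idea (not the paper's exact procedure), the snippet below assumes a logistic transition in training steps, fits it to each accuracy curve, and reads off a transition midpoint and a slope-based sharpness; the grokking gap is then the delay between the validation and training midpoints. All parameter names and the synthetic curves are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, t0, k, lo, hi):
    """Assumed functional form: accuracy rises from `lo` to `hi`
    around step t0, with slope parameter k controlling sharpness."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (t - t0)))

def fit_transition(steps, accuracy):
    """Fit the logistic form to an accuracy curve and return
    (transition midpoint t0, sharpness k)."""
    p0 = [np.median(steps), 1e-3, accuracy.min(), accuracy.max()]
    (t0, k, _, _), _ = curve_fit(logistic, steps, accuracy, p0=p0, maxfev=10_000)
    return t0, k

# Synthetic example: training accuracy transitions early, validation much later.
steps = np.arange(0, 20_000, 50, dtype=float)
train_acc = logistic(steps, t0=1_000, k=5e-3, lo=0.5, hi=1.0)
val_acc = logistic(steps, t0=12_000, k=1e-3, lo=0.5, hi=1.0)

t_train, k_train = fit_transition(steps, train_acc)
t_val, k_val = fit_transition(steps, val_acc)

grokking_gap = t_val - t_train          # delay between train and val transitions
relative_gap = grokking_gap / t_train   # one possible way to normalise the gap
print(f"gap={grokking_gap:.0f} steps, relative gap={relative_gap:.2f}, "
      f"val sharpness k={k_val:.2e}")
```

With a fit like this, "absolute" sharpness could be taken from the fitted slope directly, and a "relative" measure could normalise it by the transition location or the gap; the paper's specific definitions may differ.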

References (11)
  1. Hidden progress in deep learning: SGD learns parities near the computational limit. Advances in Neural Information Processing Systems, 35:21750–21764, 2022.
  2. Unifying grokking and double descent, 2023.
  3. Grokking in linear estimators – a solvable model that groks without understanding, 2023.
  4. Omnigrok: Grokking beyond algorithmic data. International Conference on Learning Representations, 2023.
  5. Dichotomy of early and late phase implicit biases can provably induce grokking, 2023.
  6. A tale of two circuits: Grokking as competition of sparse and dense subnetworks, 2023.
  7. Grokking beyond neural networks: An empirical exploration with model complexity, 2023.
  8. Progress measures for grokking via mechanistic interpretability, 2023.
  9. Grokking: Generalization beyond overfitting on small algorithmic datasets, 2022.
  10. Explaining grokking through circuit efficiency, 2023.
  11. Grokking phase transitions in learning local rules with gradient descent, 2022.
