Measuring Sharpness in Grokking (arXiv:2402.08946v1)

Published 14 Feb 2024 in cs.LG

Abstract: Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set. In this workshop paper, we introduce a robust technique for measuring grokking, based on fitting an appropriate functional form. We then use this to investigate the sharpness of transitions in training and validation accuracy under two settings. The first setting is the theoretical framework developed by Levi et al. (2023) where closed form expressions are readily accessible. The second setting is a two-layer MLP trained to predict the parity of bits, with grokking induced by the concealment strategy of Miller et al. (2023). We find that trends between relative grokking gap and grokking sharpness are similar in both settings when using absolute and relative measures of sharpness. Reflecting on this, we make progress toward explaining some trends and identify the need for further study to untangle the various mechanisms which influence the sharpness of grokking.
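The abstract states that grokking is measured by fitting "an appropriate functional form" to the accuracy curves, without specifying the form in question. The sketch below is a minimal illustration of that general idea, not the paper's actual parameterisation: it assumes a logistic curve in log training steps, reads the fitted slope as a sharpness proxy, and takes the gap between the validation and training transition midpoints (and its normalised version) as the absolute and relative grokking gap. All of these choices are assumptions made for illustration only.

```python
# Illustrative sketch: fit a parametric curve to accuracy-vs-step data and read
# off the transition location and sharpness. The sigmoid form, the slope-as-
# sharpness reading, and the gap definitions are assumptions; the paper's exact
# functional form and normalisation may differ.
import numpy as np
from scipy.optimize import curve_fit


def sigmoid_accuracy(log_step, midpoint, slope, floor, ceil):
    """Accuracy modelled as a logistic function of log10(training step)."""
    return floor + (ceil - floor) / (1.0 + np.exp(-slope * (log_step - midpoint)))


def fit_transition(steps, accuracy):
    """Fit the sigmoid; return the transition midpoint (in log10 steps) and slope."""
    log_steps = np.log10(steps)
    p0 = [np.median(log_steps), 5.0, accuracy.min(), accuracy.max()]
    params, _ = curve_fit(sigmoid_accuracy, log_steps, accuracy, p0=p0, maxfev=10000)
    return params[0], params[1]  # midpoint, slope


# Synthetic example: training accuracy transitions early, validation accuracy
# transitions much later -- the grokking signature.
steps = np.logspace(0, 5, 200)
train_acc = sigmoid_accuracy(np.log10(steps), midpoint=1.5, slope=6.0, floor=0.5, ceil=1.0)
val_acc = sigmoid_accuracy(np.log10(steps), midpoint=3.5, slope=4.0, floor=0.5, ceil=1.0)

t_train, sharp_train = fit_transition(steps, train_acc)
t_val, sharp_val = fit_transition(steps, val_acc)

grokking_gap = t_val - t_train          # absolute gap in log10(steps)
relative_gap = grokking_gap / t_train   # one plausible relative normalisation
print(f"gap={grokking_gap:.2f}, relative gap={relative_gap:.2f}, "
      f"val sharpness={sharp_val:.2f}")
```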

References (11)
  1. Hidden progress in deep learning: SGD learns parities near the computational limit. Advances in Neural Information Processing Systems, 35:21750–21764, 2022.
  2. Unifying grokking and double descent, 2023.
  3. Grokking in linear estimators – a solvable model that groks without understanding, 2023.
  4. Omnigrok: Grokking beyond algorithmic data. International Conference on Learning Representations, 2023.
  5. Dichotomy of early and late phase implicit biases can provably induce grokking, 2023.
  6. A tale of two circuits: Grokking as competition of sparse and dense subnetworks, 2023.
  7. Grokking beyond neural networks: An empirical exploration with model complexity, 2023.
  8. Progress measures for grokking via mechanistic interpretability, 2023.
  9. Grokking: Generalization beyond overfitting on small algorithmic datasets, 2022.
  10. Explaining grokking through circuit efficiency, 2023.
  11. Grokking phase transitions in learning local rules with gradient descent, 2022.