Dynamic Activation Pitfalls in LLaMA Models: An Empirical Study (2405.09274v1)

Published 15 May 2024 in cs.LG

Abstract: In this work, we systematically investigate the efficacy of dynamic activation mechanisms within the LLaMA family of LLMs. Despite the potential of dynamic activation methods to reduce computation and increase speed in models using the ReLU activation function, our empirical findings uncover several inherent pitfalls in current dynamic activation schemes. Through extensive experiments across various dynamic activation strategies, we demonstrate that LLaMA models usually underperform their ReLU counterparts, particularly in scenarios demanding high sparsity ratios. We attribute these deficiencies to a combination of factors: 1) the inherent complexity of dynamically predicting activation heads and neurons; 2) the inadequate sparsity resulting from activation functions; 3) the insufficient preservation of information resulting from KV cache skipping. Our analysis not only sheds light on the limitations of dynamic activation in the context of large-scale LLaMA models but also proposes roadmaps for enhancing the design of future sparsity schemes.
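
To make the mechanism under study concrete, the sketch below shows threshold-based dynamic activation applied to a LLaMA-style gated MLP. This is a minimal illustration, not the paper's implementation: the class name DynamicActivationMLP, the layer sizes, and the 0.1 magnitude threshold are assumptions, and the mask is computed after the gate projection for clarity, whereas a real scheme must predict it ahead of time so the masked rows of the matrix multiplies can actually be skipped (which is exactly where pitfall 1 above arises).

```python
# Minimal sketch of threshold-based dynamic activation in a LLaMA-style gated MLP.
# Illustrative assumptions only; not the paper's method or code.
import torch
import torch.nn as nn


class DynamicActivationMLP(nn.Module):
    """Gated MLP with a simple magnitude-threshold activation mask."""

    def __init__(self, d_model: int = 512, d_hidden: int = 2048, threshold: float = 0.1):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_hidden, bias=False)
        self.up_proj = nn.Linear(d_model, d_hidden, bias=False)
        self.down_proj = nn.Linear(d_hidden, d_model, bias=False)
        self.threshold = threshold  # assumed cutoff below which a neuron is treated as inactive

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gate activations decide which hidden neurons count as "active".
        # ReLU is used here because it produces exact zeros, the regime where
        # dynamic activation is expected to pay off; LLaMA itself uses SiLU.
        gate = torch.relu(self.gate_proj(x))
        mask = (gate > self.threshold).to(gate.dtype)  # 1 = keep neuron, 0 = skip it
        hidden = gate * mask * self.up_proj(x)         # masked neurons contribute nothing
        return self.down_proj(hidden)


if __name__ == "__main__":
    mlp = DynamicActivationMLP()
    x = torch.randn(2, 16, 512)  # (batch, seq_len, d_model)
    out = mlp(x)
    gate = torch.relu(mlp.gate_proj(x))
    skipped = (gate <= mlp.threshold).float().mean().item()
    print(out.shape, f"fraction of skipped hidden neurons: {skipped:.2f}")
```

Pitfall 2 is also visible in this sketch: with SiLU in place of ReLU, few gate values fall below any useful threshold, so the achievable sparsity, and therefore the potential speedup, is limited.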
