Dynamical Complexity Of Short and Noisy Time Series (1609.01924v2)

Published 7 Sep 2016 in nlin.CD, cs.IT, and math.IT

Abstract: Shannon Entropy has been extensively used for characterizing complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures which are based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such Compression-Complexity Measures, namely Lempel-Ziv complexity ($LZ$) and Effort-To-Compress ($ETC$), on short time series from chaotic dynamical systems in the presence of noise. Both $LZ$ and $ETC$ outperform Shannon entropy ($H$) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), $ETC$ has a higher number of distinct complexity values than $LZ$ and $H$, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that $ETC$ converges to a steady state value faster than $LZ$. Compression-Complexity Measures are promising for applications which involve short and noisy time series.
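To illustrate the three measures the abstract compares, here is a minimal Python sketch of Shannon entropy, a dictionary-based Lempel-Ziv phrase count, and an ETC-style pair-substitution count for binary strings. This is not the paper's exact implementation: the LZ parsing variant, the tie-breaking rule for the most frequent pair, and the overlap handling in pair counting are all simplifying assumptions made here for illustration.

```python
import math
from collections import Counter


def shannon_entropy(bits):
    """First-order Shannon entropy H (bits/symbol) of a symbol sequence."""
    n = len(bits)
    return -sum((c / n) * math.log2(c / n) for c in Counter(bits).values())


def lz_complexity(bits):
    """Lempel-Ziv phrase count via a simple LZ78-style parse: each new
    phrase is the shortest prefix of the remaining input not seen before.
    (The paper may use a different parsing variant, e.g. LZ76.)"""
    phrases, phrase, count = set(), "", 0
    for sym in bits:
        phrase += sym
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # count any unfinished final phrase


def etc(bits):
    """Effort-To-Compress via pair substitution (an NSRPS-style sketch):
    repeatedly replace the most frequent adjacent pair with a fresh symbol;
    ETC is the number of iterations until the sequence is constant or has
    length one. Low-complexity sequences compress in few steps."""
    seq = [int(b) for b in bits]
    steps = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        pair_counts = Counter(zip(seq, seq[1:]))  # counts overlapping pairs
        target = max(pair_counts, key=pair_counts.get)
        new_sym = max(seq) + 1  # fresh symbol not yet in the alphabet
        out, i = [], 0
        while i < len(seq):  # replace non-overlapping occurrences, left to right
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                out.append(new_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, steps = out, steps + 1
    return steps


if __name__ == "__main__":
    # A periodic and an aperiodic-looking short binary sequence: both have
    # H = 1 bit/symbol, but LZ and ETC can still tell them apart.
    for s in ("0101010101", "0110100110"):
        print(s, shannon_entropy(s), lz_complexity(s), etc(s))
```

Note how the periodic string `"0101010101"` collapses after a single pair substitution (ETC = 1) even though its first-order Shannon entropy is maximal — this is the kind of distinction for short sequences that the abstract attributes to compression-complexity measures.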

Citations (29)