A Class of Nonconvex Penalties Preserving Overall Convexity in Optimization-Based Mean Filtering (1604.06589v1)

Published 22 Apr 2016 in cs.IT, math.IT, and math.OC

Abstract: $\ell_1$ mean filtering is a conventional, optimization-based method for estimating the positions of jumps in a piecewise constant signal perturbed by additive noise. In this method, the $\ell_1$ norm penalizes the sparsity of the first-order derivative of the signal. Theoretical results, however, show that in some situations, which occur frequently in practice, the conventional method identifies false change points even when the jump amplitudes tend to $\infty$. This issue is referred to as the stair-casing problem, and it limits the practical value of $\ell_1$ mean filtering. In this paper, sparsity is penalized more tightly than with the $\ell_1$ norm by exploiting a certain class of nonconvex functions, while the strict convexity of the resulting optimization problem is preserved. This yields higher performance in detecting change points. To theoretically justify the performance improvements over $\ell_1$ mean filtering, deterministic and stochastic sufficient conditions for exact change point recovery are derived. In particular, the theoretical results show that in the stair-casing problem, our approach may be able to exclude the false change points where $\ell_1$ mean filtering fails. Numerical simulations demonstrate the superiority of our method over $\ell_1$ mean filtering and another state-of-the-art algorithm that promotes sparsity more tightly than the $\ell_1$ norm. Specifically, it is shown that our approach can consistently detect change points when the jump amplitudes become sufficiently large, while the two competitors cannot.
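
For concreteness, the sketch below illustrates the conventional $\ell_1$ mean filtering baseline that the abstract builds on: $\min_x \tfrac{1}{2}\|y-x\|_2^2 + \lambda\|Dx\|_1$, where $D$ is the first-order difference operator. The solver choice (cvxpy), the test signal, the value of $\lambda$, and the jump-detection threshold are illustrative assumptions; this is the baseline method, not the paper's proposed nonconvex-penalty estimator.

```python
# Minimal sketch of conventional l1 mean filtering (the baseline, not the
# paper's nonconvex-penalty method):
#   minimize_x  0.5 * ||y - x||_2^2 + lam * ||Dx||_1,
# where D is the first-order difference operator. Signal, solver, lam, and
# the detection threshold are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
# Piecewise constant signal with two jumps, plus additive Gaussian noise.
truth = np.concatenate([np.zeros(40), 5.0 * np.ones(40), 2.0 * np.ones(40)])
y = truth + 0.5 * rng.standard_normal(truth.size)

x = cp.Variable(y.size)
lam = 2.0  # regularization weight (illustrative value)
objective = cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(cp.diff(x)))
cp.Problem(objective).solve()

# Estimated change points: indices where the first-order difference of the
# recovered signal is numerically nonzero.
jumps = np.flatnonzero(np.abs(np.diff(x.value)) > 1e-3)
print("estimated change points:", jumps)
```

The paper replaces the $\ell_1$ term above with a nonconvex penalty chosen so that the overall objective remains strictly convex, which is what allows it to avoid the spurious change points (stair-casing) that this baseline can produce.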

Citations (26)
