Laplacian regularized low rank subspace clustering (1610.07488v2)

Published 24 Oct 2016 in cs.CV

Abstract: This paper considers the problem of fitting a union of subspaces to a collection of data points drawn from multiple subspaces. In the traditional low rank representation model, the dictionary used to represent the data points is the data matrix itself, so the dictionary is corrupted with noise. The low rank subspace clustering model addresses this by decomposing the corrupted data matrix into the sum of a clean, self-expressive dictionary and a matrix of noise and gross errors. Separately, the clustering results of the low rank representation model can be improved by exploiting a graph of data similarity; adding the corresponding graph regularization term to the objective yields the Laplacian regularized low rank representation model. Motivated by these two ideas, this paper proposes a Laplacian regularized low rank subspace clustering model, which represents the data points with a clean dictionary and also incorporates a graph regularization term in the objective function. Experimental results show that, compared with the traditional low rank representation model, the low rank subspace clustering model, and several other state-of-the-art subspace clustering models, the proposed model achieves better subspace clustering results with lower clustering error.
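
As a rough illustration of the model described in the abstract, the combined objective can be sketched in the following form. This is an assumption based on how low rank subspace clustering and Laplacian regularized LRR are usually written (nuclear norm for the low rank term, an l_{2,1} norm for noise and gross errors, a trace-form graph regularizer with Laplacian L); the paper's exact weights, norms, and constraints may differ.

\min_{Z,\,A,\,E}\; \|Z\|_{*} + \lambda \,\|E\|_{2,1} + \beta \,\mathrm{tr}\!\left( Z L Z^{\top} \right)
\quad \text{s.t.} \quad X = A + E,\qquad A = A Z,

where X is the observed data matrix, A the clean self-expressive dictionary, E the matrix of noise and gross errors, Z the low rank representation, and L the graph Laplacian built from pairwise data similarities. As is standard for LRR-type methods, the final segmentation would then typically be obtained by spectral clustering on an affinity such as |Z| + |Z^{\top}|.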

Citations (2)

Authors (2)