
A Multi-view Dimensionality Reduction Algorithm Based on Smooth Representation Model (1910.04439v3)

Published 10 Oct 2019 in cs.LG and stat.ML

Abstract: Over the past few decades, a large family of algorithms has been designed to provide different solutions to the problem of dimensionality reduction (DR). DR is an essential tool for extracting important information from high-dimensional data by mapping the data to a low-dimensional subspace. Moreover, given the diversity of high-dimensional data, multi-view features can be exploited to improve learning performance. However, many DR methods fail to integrate multiple views. Although features from different views are extracted in different manners, they describe the same sample, which implies that they are highly related. Therefore, learning a subspace for high-dimensional features by exploiting the consistency and complementary properties of multi-view features is an important open problem. In this paper, we propose an effective multi-view dimensionality reduction algorithm named Multi-view Smooth Preserve Projection. First, we construct a single-view DR method named Smooth Preserve Projection based on the Smooth Representation model. The proposed method aims to find a subspace for the high-dimensional data in which the smooth reconstructive weights are preserved as much as possible. We then extend it to a multi-view version in which we exploit the Hilbert-Schmidt Independence Criterion (HSIC) to jointly learn one common subspace for all views. Extensive experiments on multi-view datasets show the excellent performance of the proposed method.
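The abstract mentions the Hilbert-Schmidt Independence Criterion as the tool for coupling the views during joint subspace learning. As a rough illustration of that ingredient (not the paper's full algorithm), the sketch below implements the standard empirical HSIC estimator, trace(KHLH)/(n-1)^2 with centering matrix H = I - (1/n)11^T, using linear kernels. The function names and the choice of linear kernels are assumptions for illustration; the paper may use a different kernel or normalization.

```python
import numpy as np

def linear_kernel(X):
    # Gram matrix of the linear kernel for samples stored in rows of X
    return X @ X.T

def hsic(X, Y):
    """Empirical HSIC between two views X (n x d1) and Y (n x d2).

    Returns trace(K H L H) / (n - 1)^2, where K and L are the views'
    kernel (Gram) matrices and H is the centering matrix. Larger values
    indicate stronger statistical dependence between the two views.
    Linear kernels are an illustrative choice, not necessarily the
    paper's setting.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix: H @ 1 = 0
    K = linear_kernel(X)
    L = linear_kernel(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In a multi-view objective of this kind, such a term is typically maximized across pairs of per-view projections so that the learned common subspace reflects the dependence (consistency) between views. Note that a constant view yields HSIC of zero, since centering annihilates its kernel matrix.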



Authors (2)