
A Lightweight Randomized Nonlinear Dictionary Learning Method using Random Vector Functional Link (2402.03833v2)

Published 6 Feb 2024 in cs.CV

Abstract: Kernel-based nonlinear dictionary learning methods operate in a feature space induced by an implicit feature map and rely on computationally expensive operations such as Singular Value Decomposition (SVD). This paper presents an SVD-free, lightweight approach to learning a nonlinear dictionary using a Random Vector Functional Link (RVFL) network. The proposed RVFL-based nonlinear Dictionary Learning method (RVFLDL) learns the dictionary as a sparse-to-dense map from nonlinear sparse coefficients to the dense input features. Sparse coefficients with respect to an initial random dictionary, derived under a Horseshoe prior, are used as inputs, keeping the network lightweight. Training the RVFL-based dictionary requires no SVD, since the RVFL computes the input-to-output-layer weights analytically. Higher-order dependencies between the input sparse coefficients and the dictionary atoms are incorporated into training by nonlinearly transforming the sparse coefficients and appending them as enhanced features. Thus, the method projects the sparse coefficients into a higher-dimensional space while inducing nonlinearity in the dictionary. For classification with the RVFL network, a classifier matrix is learned as a transform that maps the nonlinear sparse coefficients to the labels. Empirical results on image classification and reconstruction tasks show that RVFLDL is scalable and outperforms other nonlinear dictionary learning methods.
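To make the analytic, SVD-free solve described in the abstract concrete, here is a minimal sketch of an RVFL-style sparse-to-dense dictionary fit. It is an illustration under stated assumptions, not the authors' implementation: the tanh activation, enhancement width, ridge regularizer, and all names (rvfl_dictionary, n_enhanced, reg) are assumptions for the example, not details from the paper.

```python
import numpy as np

def rvfl_dictionary(Z, X, n_enhanced=64, reg=1e-3, seed=0):
    """Sketch of an RVFL-style sparse-to-dense dictionary solve.

    Z : (n_samples, k) sparse coefficients, e.g. obtained w.r.t. an
        initial random dictionary under a sparsity-inducing prior.
    X : (n_samples, d) dense input features to be reconstructed.

    Returns the output weights W mapping [Z, h(Z)] -> X, computed in
    closed form (ridge regression), with no iterative SVD-based
    dictionary update.
    """
    rng = np.random.default_rng(seed)
    # Random, fixed input-to-enhancement weights (the "functional link").
    W_enh = rng.standard_normal((Z.shape[1], n_enhanced))
    b_enh = rng.standard_normal(n_enhanced)
    H = np.tanh(Z @ W_enh + b_enh)   # nonlinear enhanced features of Z
    A = np.hstack([Z, H])            # direct + enhanced features
    # Analytic output weights via regularized normal equations.
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ X)
    return W, W_enh, b_enh
```

Under the same assumptions, a classifier matrix for the classification setting could be obtained by the same closed-form solve with the label matrix Y in place of X.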
