On the Approximation Properties of Random ReLU Features (1810.04374v3)

Published 10 Oct 2018 in stat.ML and cs.LG

Abstract: We study the approximation properties of random ReLU features through their reproducing kernel Hilbert space (RKHS). We first prove a universality theorem for the RKHS induced by random features whose feature maps are of the form of nodes in neural networks. The universality result implies that the random ReLU features method is a universally consistent learning algorithm. We prove that despite the universality of the RKHS induced by the random ReLU features, composition of functions in it generates substantially more complicated functions that are harder to approximate than those functions simply in the RKHS. We also prove that such composite functions can be efficiently approximated by multi-layer ReLU networks with bounded weights. This depth separation result shows that the random ReLU features model suffers from the same weakness as other shallow models. We show in experiments that the performance of random ReLU features is comparable to that of random Fourier features and, in general, has a lower computational cost. We also demonstrate that when the target function is the composite function described in the depth separation theorem, 3-layer neural networks indeed outperform both random ReLU features and 2-layer neural networks.
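The random ReLU features method described in the abstract draws random weights once, maps inputs through fixed ReLU units, and then fits only the linear output layer. The sketch below illustrates this with weights drawn uniformly on the unit sphere and a closed-form ridge regression fit; the sampling distribution, regularizer, and target function here are illustrative choices, not necessarily the paper's exact experimental setup.

```python
import numpy as np

def random_relu_features(X, n_features=200, seed=0):
    """Map inputs to random ReLU features phi_i(x) = max(<w_i, (x, 1)>, 0).

    Weights are drawn uniformly on the unit sphere (a common choice for
    this construction; the paper's exact sampling scheme may differ).
    """
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias coordinate
    W = rng.standard_normal((Xb.shape[1], n_features))
    W /= np.linalg.norm(W, axis=0, keepdims=True)  # normalize onto the sphere
    return np.maximum(Xb @ W, 0.0) / np.sqrt(n_features)

def fit_ridge(Phi, y, lam=1e-3):
    """Closed-form ridge regression on the fixed random features."""
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)

# Illustrative usage: fit f(x) = ||x|| on the square [-1, 1]^2.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.linalg.norm(X, axis=1)
Phi = random_relu_features(X)
theta = fit_ridge(Phi, y)
mse = float(np.mean((Phi @ theta - y) ** 2))
```

Because only the output weights are trained, the model stays within the shallow RKHS regime that the paper's depth separation result concerns: composite targets built from such functions are exactly the cases where this method is shown to fall behind deeper ReLU networks.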

Citations (3)
