
Limiting Network Size within Finite Bounds for Optimization (1903.02809v1)

Published 7 Mar 2019 in cs.LG, cs.NE, and math.OC

Abstract: The largest theoretical contribution to neural networks comes from the VC dimension, which characterizes the sample complexity of a classification model from a probabilistic viewpoint and is widely used to study generalization error. So far in the literature, the VC dimension has only been used to approximate generalization error bounds for different neural network architectures; it has not yet been used, implicitly or explicitly, to fix network size. This matters because a wrong configuration can lead to high computational effort in training and to overfitting, so the number of units should be bounded such that the task can be computed with only a sufficient number of parameters. For binary classification tasks, shallow networks are used because they have the universal approximation property, and it suffices to size the hidden layer width of such networks. This paper gives a theoretical justification for the required attribute size and the corresponding hidden layer dimension that, for a given sample set, yield optimal binary classification results with minimum training complexity in a single-hidden-layer feedforward network framework. The paper also proves the existence of bounds on the width of the hidden layer, and of its range, subject to certain conditions. The findings are experimentally analyzed on three different datasets using MATLAB 2018b.
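The abstract's sizing idea can be illustrated with a small sketch. This is not the paper's actual bound (the paper's theorem is not reproduced here); it uses the classical O(W log W) VC-dimension upper bound for threshold networks (Baum–Haussler style) as a stand-in, and picks the largest hidden width whose VC bound does not exceed the sample size. All constants and function names are illustrative assumptions.

```python
import math

def num_parameters(d, h):
    # Single hidden layer: d inputs -> h hidden units -> 1 output,
    # with a bias on each hidden unit and on the output unit.
    return h * (d + 1) + (h + 1)

def vc_upper_bound(d, h):
    # Classical O(W log W) upper bound on the VC dimension of a
    # threshold network with W weights; constants are illustrative,
    # not taken from the paper.
    W = num_parameters(d, h)
    return W * math.log2(W)

def max_width(d, m):
    # Largest hidden width h whose VC upper bound stays below the
    # sample size m -- a heuristic sizing rule, not the paper's theorem.
    h = 1
    while vc_upper_bound(d, h + 1) <= m:
        h += 1
    return h
```

For example, with 4 input attributes and 1000 training samples, `max_width(4, 1000)` returns a width in the low twenties; with more samples the permitted width grows, matching the abstract's point that the sample set determines how large the hidden layer may be without inviting overfitting.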

Citations (2)


Authors (2)
