An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM (2302.09074v1)

Published 18 Feb 2023 in q-bio.NC and cs.NE

Abstract: We present a model of the primary visual cortex V1, guided by anatomical experiments. Unlike most machine learning systems, our goal is not to maximize accuracy but to realize a system more closely aligned with biological systems. Our model consists of the V1 layers 4, 2/3, and 5, with inter-layer connections between them in accordance with the anatomy. We further include the orientation selectivity of the V1 neurons and lateral influences in each layer. Our V1 model, when applied to the BSDS500 ground-truth images (indicating LGN contour detection before V1), can extract low-level features from the images and perform a significant amount of distortion reduction. As a follow-up to our V1 model, we propose a V1-inspired self-organizing map algorithm (V1-SOM), in which the weight update of each neuron is influenced by its neighbors. V1-SOM tolerates noisy inputs, as well as noise in the weight updates, better than the standard SOM, and shows a similar level of performance when trained with high-dimensional data such as the MNIST dataset. Finally, when we applied V1 processing to the MNIST dataset to extract low-level features and trained V1-SOM with the modified MNIST dataset, the quantization error was significantly reduced. Our results support the hypothesis that the ventral stream performs a gradual untangling of input spaces.
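The abstract states only that in V1-SOM each neuron's weight update is influenced by its neighbors; the exact rule is not given here. As a rough illustration of that idea, the sketch below performs a standard SOM step in NumPy and then blends each neuron's update with the mean update of its immediate grid neighbors. The function name v1_som_step, the lateral blending factor, and the 4-connected neighborhood are illustrative assumptions, not the paper's specification.

import numpy as np

def gaussian_neighborhood(grid, bmu_idx, sigma):
    # Gaussian falloff of influence on the 2-D grid, centered on the BMU.
    d2 = np.sum((grid - grid[bmu_idx]) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def v1_som_step(weights, grid, x, lr=0.1, sigma=1.5, lateral=0.5):
    # One SOM training step in which each neuron's delta is additionally
    # smoothed by the deltas of its grid neighbors. This lateral term is a
    # guess at the "neighbor-influenced update" described in the abstract;
    # the paper's exact rule may differ.
    #   weights: (n_neurons, n_features) codebook
    #   grid:    (n_neurons, 2) integer grid coordinates of the neurons
    #   x:       (n_features,) input sample
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))

    # Standard SOM deltas, scaled by the BMU-centered neighborhood.
    h = gaussian_neighborhood(grid, bmu, sigma)          # (n_neurons,)
    delta = lr * h[:, None] * (x - weights)              # (n_neurons, n_features)

    # Lateral influence: blend each delta with the mean delta of its
    # 4-connected grid neighbors (hypothetical modelling choice).
    neighbor_mean = np.zeros_like(delta)
    for i, p in enumerate(grid):
        mask = (np.abs(grid - p).sum(axis=1) == 1)
        if mask.any():
            neighbor_mean[i] = delta[mask].mean(axis=0)
    delta = (1.0 - lateral) * delta + lateral * neighbor_mean

    return weights + delta

# Tiny usage example on random data (a stand-in for V1-processed MNIST features).
rng = np.random.default_rng(0)
side, n_features = 5, 16
grid = np.array([(i, j) for i in range(side) for j in range(side)])
weights = rng.normal(size=(side * side, n_features))
for x in rng.normal(size=(200, n_features)):
    weights = v1_som_step(weights, grid, x)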
