Evaluating CNNs on the Gestalt Principle of Closure

(1904.00285)
Published Mar 30, 2019 in cs.CV

Abstract

Deep convolutional neural networks (CNNs) are widely known for their outstanding performance in classification and regression tasks over high-dimensional data. This has made them a popular and powerful tool for a wide variety of applications in industry and academia. Recent publications show that classification tasks which seem easy for humans can be very challenging for state-of-the-art CNNs. The Gestalt principles are one attempt to describe how humans perceive visual elements. In this paper we evaluate AlexNet and GoogLeNet on their ability to classify the correctness of the well-known Kanizsa triangles, which rely heavily on the Gestalt principle of closure. To this end, we created several datasets containing valid as well as invalid variants of the Kanizsa triangle. Our findings suggest that perceiving objects via the principle of closure is very challenging for the evaluated network architectures, although they do appear to adapt to the effect of closure.
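The paper does not include its dataset-generation code, but the stimuli it describes (valid Kanizsa triangles versus misaligned variants) can be sketched programmatically. Below is a minimal, hypothetical Python/Pillow example of how such images might be rendered: three black "pac-man" inducers at the corners of an invisible equilateral triangle, with mouths either aligned toward the centre (valid, closure cue present) or rotated away (invalid). All parameter values (image size, disc radius, rotation offset) are illustrative assumptions, not the authors' settings.

```python
import math
from PIL import Image, ImageDraw

def kanizsa_figure(size=224, disc_radius=30, rotate_inducers=False):
    """Render a Kanizsa-style stimulus as a greyscale image.

    Three black 'pac-man' inducers sit at the corners of an invisible
    equilateral triangle. When their mouths point at the centre, the
    closure cue produces the illusory white triangle (a 'valid' sample);
    with rotate_inducers=True the mouths are misaligned and the
    illusory contour disappears (an 'invalid' sample).
    """
    img = Image.new("L", (size, size), 255)        # white background
    draw = ImageDraw.Draw(img)
    cx, cy, r_tri = size / 2, size / 2, size * 0.3
    for angle in (270, 30, 150):                   # PIL angles: clockwise, 0 = 3 o'clock
        vx = cx + r_tri * math.cos(math.radians(angle))
        vy = cy + r_tri * math.sin(math.radians(angle))
        mouth = (angle + 180) % 360                # mouth faces the triangle centre
        if rotate_inducers:
            mouth = (mouth + 90) % 360             # misalign the mouth (illustrative offset)
        bbox = [vx - disc_radius, vy - disc_radius,
                vx + disc_radius, vy + disc_radius]
        draw.ellipse(bbox, fill=0)                             # full black disc
        draw.pieslice(bbox, mouth - 30, mouth + 30, fill=255)  # cut a 60-degree wedge
    return img

# Example usage: one valid and one invalid stimulus.
kanizsa_figure().save("kanizsa_valid.png")
kanizsa_figure(rotate_inducers=True).save("kanizsa_invalid.png")
```

Such images could then be labelled valid/invalid and fed to the networks as a binary classification task, which is the experimental setup the abstract describes.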
