Analysis of Structured Deep Kernel Networks (2105.07228v2)
Abstract: In this paper, we leverage a recent deep kernel representer theorem to connect kernel-based learning and (deep) neural networks and to understand their interplay. In particular, we show that the use of special types of kernels yields models reminiscent of neural networks that are grounded in the same theoretical framework as classical kernel methods, while benefiting from the computational advantages of deep neural networks. The introduced Structured Deep Kernel Networks (SDKNs) can be viewed as neural networks (NNs) with optimizable activation functions obeying a representer theorem. This link also allows us to analyze NNs within the framework of kernel networks. We prove analytic properties of the SDKNs that establish their universal approximation capabilities in three asymptotic regimes: an unbounded number of centers, unbounded width, and unbounded depth. Especially in the unbounded-depth regime, accurate constructions can be achieved with fewer layers than in corresponding constructions for ReLU neural networks, which is made possible by leveraging properties of kernel approximation.
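To make the architectural idea concrete, below is a minimal PyTorch sketch of an SDKN-style network, assuming a one-dimensional Gaussian kernel, fixed grid centers, and the hypothetical class names KernelActivation and SDKNSketch; the paper's actual construction (kernel choice, center placement, parametrization) may differ. The key point it illustrates is that each "activation function" is itself a kernel expansion with trainable coefficients, as suggested by the representer theorem, rather than a fixed nonlinearity such as ReLU.

```python
import torch
import torch.nn as nn

class KernelActivation(nn.Module):
    """Learnable scalar activation applied elementwise:
    f(t) = sum_j alpha_j * k(t, c_j), with a 1-D Gaussian kernel
    k(t, c) = exp(-gamma * (t - c)^2). The centers c_j are fixed on a
    grid (an assumption of this sketch); the coefficients alpha_j are
    trainable, in the spirit of a kernel representer expansion."""

    def __init__(self, num_centers: int = 10, gamma: float = 1.0):
        super().__init__()
        # Fixed, non-trainable grid of centers (sketch assumption).
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, num_centers))
        # Trainable expansion coefficients alpha_j.
        self.alpha = nn.Parameter(torch.randn(num_centers) / num_centers)
        self.gamma = gamma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d). Evaluate the kernel of every entry against every
        # center, then combine the kernel values linearly.
        diff = x.unsqueeze(-1) - self.centers      # shape (..., d, M)
        k = torch.exp(-self.gamma * diff.pow(2))   # Gaussian kernel values
        return k @ self.alpha                      # back to shape (..., d)

class SDKNSketch(nn.Module):
    """Alternates linear maps with kernel-based activations, ending in a
    linear layer, mimicking the structure of an SDKN."""

    def __init__(self, dims=(2, 16, 16, 1), num_centers: int = 10):
        super().__init__()
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers.append(nn.Linear(d_in, d_out))
            layers.append(KernelActivation(num_centers))
        # Drop the final activation so the output map is linear.
        self.net = nn.Sequential(*layers[:-1])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Usage: a 2-D to scalar model; kernel coefficients and linear weights
# are optimized jointly by any standard optimizer.
model = SDKNSketch(dims=(2, 16, 16, 1))
y = model(torch.randn(5, 2))  # shape (5, 1)
```

In this reading, widening the grid of centers, the linear layers, or the depth corresponds to the three asymptotic regimes in which the abstract claims universal approximation.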