Multi-Task Learning Using Neighborhood Kernels (1707.03426v1)

Published 11 Jul 2017 in cs.LG and stat.ML

Abstract: This paper introduces a new and effective algorithm for learning kernels in a Multi-Task Learning (MTL) setting. Although we consider an MTL scenario here, our approach can easily be applied to standard single-task learning as well. As our empirical results show, our algorithm consistently outperforms traditional kernel learning algorithms, such as the uniform combination solution, convex combinations of base kernels, and several kernel alignment-based models, which have been shown to give promising results in the past. We present a Rademacher complexity bound from which a new Multi-Task Multiple Kernel Learning (MT-MKL) model is derived. In particular, we propose a Support Vector Machine-regularized model in which, for each task, an optimal kernel is learned based on a neighborhood-defining kernel that is not restricted to be positive semi-definite. Comparative experimental results underline the merits of our neighborhood-defining framework in both classification and regression problems.
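For context on the baselines the abstract mentions, the sketch below illustrates the simplest of them, the "uniform combination" of base kernels, applied independently to each task with a precomputed-kernel SVM. This is not the authors' MT-MKL algorithm (which learns a neighborhood-defining kernel per task and need not be positive semi-definite); it is only a minimal, assumption-laden illustration, and all function and parameter names here are hypothetical.

```python
# Minimal sketch of the uniform-combination MKL baseline (not the paper's method).
# Each task gets an SVM trained on the average of several RBF base kernels.
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(X, Z, gamma):
    """Gaussian (RBF) base kernel between rows of X and rows of Z."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def uniform_combination_kernel(X, Z, gammas):
    """Average of the base kernels: the uniform-combination baseline."""
    return np.mean([rbf_kernel(X, Z, g) for g in gammas], axis=0)

def train_per_task(tasks, gammas=(0.1, 1.0, 10.0), C=1.0):
    """Fit one precomputed-kernel SVM per task; tasks is a list of (X, y) pairs."""
    models = []
    for X, y in tasks:
        K = uniform_combination_kernel(X, X, gammas)
        clf = SVC(C=C, kernel="precomputed").fit(K, y)
        models.append((clf, X))
    return models

def predict(models, gammas, task_idx, X_new):
    """Predict labels for new points of a given task using its stored training set."""
    clf, X_train = models[task_idx]
    K_new = uniform_combination_kernel(X_new, X_train, gammas)
    return clf.predict(K_new)
```

A convex-combination baseline would replace the uniform average with learned nonnegative weights summing to one; the paper's model instead learns a task-specific kernel via a neighborhood-defining kernel, as described in the abstract.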

Authors (5)
  1. Niloofar Yousefi (20 papers)
  2. Cong Li (142 papers)
  3. Mansooreh Mollaghasemi (2 papers)
  4. Georgios Anagnostopoulos (2 papers)
  5. Michael Georgiopoulos (11 papers)
