The Influence of Dimensions on the Complexity of Computing Decision Trees (2205.07756v2)

Published 16 May 2022 in cs.CC

Abstract: A decision tree recursively splits a feature space $\mathbb{R}^{d}$ and then assigns class labels based on the resulting partition. Decision trees have been part of the basic machine-learning toolkit for decades. A large body of work treats heuristic algorithms that compute a decision tree from training data, usually aiming in particular to minimize the size of the resulting tree. In contrast, little is known about the complexity of the underlying computational problem of computing a minimum-size tree for the given training data. We study this problem with respect to the number $d$ of dimensions of the feature space. We show that it can be solved in $O(n^{2d+1}d)$ time, but under reasonable complexity-theoretic assumptions it is not possible to achieve $f(d) \cdot n^{o(d / \log d)}$ running time, where $n$ is the number of training examples. The problem is solvable in $(dR)^{O(dR)} \cdot n^{1+o(1)}$ time if there are exactly two classes and $R$ is an upper bound on the number of tree leaves labeled with the first class.
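
To make the object of study concrete, here is a minimal sketch (not taken from the paper; all identifiers are hypothetical) of a decision tree over $\mathbb{R}^{d}$ with axis-aligned threshold splits, together with the leaf count that the size-minimization problem measures.

```python
from dataclasses import dataclass
from typing import Sequence, Union

@dataclass
class Leaf:
    label: int  # class label assigned to this cell of the partition

@dataclass
class Split:
    dim: int          # which of the d coordinates to test
    threshold: float  # axis-aligned cut: go left if x[dim] <= threshold
    left: "Node"
    right: "Node"

Node = Union[Leaf, Split]

def classify(tree: Node, x: Sequence[float]) -> int:
    """Follow the splits from the root until a leaf labels the point."""
    while isinstance(tree, Split):
        tree = tree.left if x[tree.dim] <= tree.threshold else tree.right
    return tree.label

def size(tree: Node) -> int:
    """Number of leaves -- the size measure the paper's problem minimizes."""
    if isinstance(tree, Leaf):
        return 1
    return size(tree.left) + size(tree.right)

# Example: a tree over R^2 with three leaves (size 3).
tree = Split(dim=0, threshold=0.5,
             left=Leaf(label=0),
             right=Split(dim=1, threshold=1.0,
                         left=Leaf(label=1), right=Leaf(label=0)))
assert classify(tree, (0.2, 3.0)) == 0
assert classify(tree, (0.9, 0.3)) == 1
assert size(tree) == 3
```

In the two-class setting of the paper's last result, the parameter $R$ bounds the number of leaves carrying one fixed label, i.e. the count that `size` would return if restricted to leaves of that class.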

Citations (6)
