
Deep differentiable forest with sparse attention for the tabular data (2003.00223v1)

Published 29 Feb 2020 in cs.LG and stat.ML

Abstract: We present a general architecture for a deep differentiable forest and its sparse attention mechanism. The differentiable forest has the advantages of both trees and neural networks. Its structure is a simple binary tree, easy to use and understand. It is fully differentiable and all variables are learnable parameters. We train it with gradient-based optimization methods, which have shown great power in training deep CNNs. We find and analyze the attention mechanism in the differentiable forest: each decision depends on only a few important features, and the others are irrelevant, so the attention is always sparse. Based on this observation, we improve its sparsity by data-aware initialization, using attribute importance to initialize the attention weights. The learned weights are then much sparser than those obtained from random initialization. Our experiments on several large tabular datasets show that the differentiable forest achieves higher accuracy than GBDT, the state-of-the-art algorithm for tabular data. The source code is available at https://github.com/closest-git/QuantumForest
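
The sketch below is not the authors' QuantumForest implementation; it is a minimal illustration of the two ideas the abstract describes: a differentiable decision node that attends softly over input features, and data-aware initialization of the attention weights from attribute importance. The class name `SoftDecisionNode` and the `feature_importance` argument are hypothetical and introduced here only for illustration.

```python
import torch
import torch.nn as nn

class SoftDecisionNode(nn.Module):
    """One node of a differentiable binary tree with soft attention over features."""

    def __init__(self, n_features, feature_importance=None, temperature=1.0):
        super().__init__()
        if feature_importance is not None:
            # Data-aware initialization (assumed form): seed the attention logits
            # from attribute importance, e.g. split gains from a pre-trained GBDT.
            # The paper reports this yields much sparser learned attention than
            # random initialization.
            imp = torch.as_tensor(feature_importance, dtype=torch.float)
            init = torch.log(imp + 1e-8)
        else:
            init = torch.randn(n_features)
        self.attn_logits = nn.Parameter(init)      # learnable feature attention
        self.threshold = nn.Parameter(torch.zeros(1))
        self.temperature = temperature

    def forward(self, x):
        # x: (batch, n_features). Softmax concentrates mass on a few important
        # features when the logits are well separated, giving sparse attention.
        attn = torch.softmax(self.attn_logits, dim=0)   # (n_features,)
        score = x @ attn - self.threshold               # (batch,)
        # Probability of routing each sample to the right child.
        return torch.sigmoid(score / self.temperature)
```

In a full differentiable forest built this way, each leaf would hold a learnable response, a sample's prediction would be the sum of leaf responses weighted by the product of routing probabilities along each root-to-leaf path, and all parameters (attention logits, thresholds, leaf values) would be trained end to end with ordinary gradient descent, just as for a neural network.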

Citations (5)

Authors (1)