
Verifiable Boosted Tree Ensembles (2402.14988v1)

Published 22 Feb 2024 in cs.LG, cs.CR, cs.LO, and stat.ML

Abstract: Verifiable learning advocates for training machine learning models amenable to efficient security verification. Prior research demonstrated that specific classes of decision tree ensembles -- called large-spread ensembles -- allow for robustness verification in polynomial time against any norm-based attacker. This study expands prior work on verifiable learning from basic ensemble methods (i.e., hard majority voting) to advanced boosted tree ensembles, such as those trained using XGBoost or LightGBM. Our formal results indicate that robustness verification is achievable in polynomial time when considering attackers based on the $L_\infty$-norm, but remains NP-hard for other norm-based attackers. Nevertheless, we present a pseudo-polynomial time algorithm to verify robustness against attackers based on the $L_p$-norm for any $p \in \mathbb{N} \cup \{0\}$, which performs very well in practice. Our experimental evaluation shows that large-spread boosted ensembles are accurate enough for practical adoption, while being amenable to efficient security verification.
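
To make the $L_\infty$ setting concrete, the sketch below illustrates the standard per-tree decomposition behind this kind of verification. It is a minimal illustration under assumptions, not the paper's algorithm: the tree representation and all names (Leaf, Node, worst_case_leaf, linf_robust) are hypothetical and introduced here. For each tree it computes the extreme leaf score reachable within the $L_\infty$ ball of radius eps and sums these per-tree worst cases, which yields a sound but generally conservative robustness check for a binary boosted ensemble.

```python
# Hypothetical sketch (not the paper's algorithm): a per-tree lower bound on a
# boosted ensemble's margin under an L-infinity attacker with budget eps.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:
    value: float  # raw additive score contributed by this leaf

@dataclass
class Node:
    feature: int      # index of the feature tested at this node
    threshold: float  # go left if x[feature] <= threshold
    left: "Union[Node, Leaf]"
    right: "Union[Node, Leaf]"

Tree = Union[Node, Leaf]

def worst_case_leaf(tree: Tree, x, eps: float, minimize: bool) -> float:
    """Extreme leaf value reachable from the L-inf ball of radius eps around x.

    Under L-infinity every feature may move by up to eps, so a branch is
    forced only when the whole interval [x[f]-eps, x[f]+eps] lies on one side
    of the threshold; otherwise both subtrees are reachable and we take the
    adversarially extreme one. Each node is visited once, so this runs in
    time linear in the tree size.
    """
    if isinstance(tree, Leaf):
        return tree.value
    lo, hi = x[tree.feature] - eps, x[tree.feature] + eps
    if hi <= tree.threshold:   # interval entirely left of the threshold
        return worst_case_leaf(tree.left, x, eps, minimize)
    if lo > tree.threshold:    # interval entirely right of the threshold
        return worst_case_leaf(tree.right, x, eps, minimize)
    l = worst_case_leaf(tree.left, x, eps, minimize)
    r = worst_case_leaf(tree.right, x, eps, minimize)
    return min(l, r) if minimize else max(l, r)

def linf_robust(trees: List[Tree], x, eps: float) -> bool:
    """Sound robustness check for a binary boosted ensemble (sign of the sum).

    Summing each tree's worst case lets the attacker optimize every tree
    independently, which over-approximates the threat in general and is
    therefore conservative.
    """
    # With eps = 0 the branches are forced, so this is the clean prediction.
    clean = sum(worst_case_leaf(t, x, 0.0, minimize=False) for t in trees)
    positive = clean > 0
    adv = sum(worst_case_leaf(t, x, eps, minimize=positive) for t in trees)
    return (adv > 0) == positive

if __name__ == "__main__":
    # Example: a two-stump ensemble over a single feature.
    t1 = Node(0, 0.5, Leaf(-1.0), Leaf(+1.0))
    t2 = Node(0, 2.0, Leaf(-0.5), Leaf(+0.5))
    print(linf_robust([t1, t2], [1.0], eps=0.3))  # True: both branches stay forced
    print(linf_robust([t1, t2], [1.0], eps=0.6))  # False: t1 straddles 0.5
```

Each tree is traversed once, so the whole check is linear in the ensemble size. The decomposition is conservative for arbitrary ensembles because different trees may demand conflicting values of the same feature; intuitively, the large-spread restriction studied in the paper is what removes this interaction between trees, keeping $L_\infty$ verification both sound and polynomial.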

