More Interpretable Decision Trees

  • Authors
  • Gilmore E, Estivill-Castro V, Hexel R
  • UPF authors
  • Type
  • Research articles
  • Journal title
  • Lecture Notes in Computer Science / Artificial Intelligence
  • Publication year
  • 2021
  • Volume
  • 12886
  • Pages
  • 280-292
  • ISSN
  • 0302-9743
  • Publication State
  • Published
  • Abstract
  • We present a new Decision Tree Classifier (DTC) induction algorithm that produces vastly more interpretable trees in many situations. These understandable trees are highly relevant for explainable artificial intelligence, fair automatic classification, and human-in-the-loop learning systems. Our method improves on the Nested Cavities (NC) algorithm: like NC, it exploits the parallel-coordinates visualisation of high-dimensional datasets, but we hybridise it with other decision-tree heuristics to generate node-expanding splits. The rules in the DTCs learnt by our algorithm have a straightforward representation and are therefore readily understood by a human user, even though the nodes of these rules can involve multiple attributes. We compare our algorithm to the well-known decision tree induction algorithm C4.5 and find that our methods achieve similar accuracy with significantly smaller trees. When coupled with a human-in-the-loop learning (HILL) system, our approach can be highly effective for inferring understandable patterns in datasets.
  • Complete citation
  • Gilmore E, Estivill-Castro V, Hexel R. More Interpretable Decision Trees. Lecture Notes in Computer Science / Artificial Intelligence 2021; 12886: 280-292.
Bibliometric indicators
  • Cited 0 times in Scopus
  • SCImago index of 0.249 (2020)