Abstract
In this paper, we consider decision trees that use two types of queries: queries based on one attribute each and queries based on hypotheses about the values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present dynamic programming algorithms for minimizing the depth and the number of nodes of such decision trees and discuss the results of computer experiments on various data sets and on randomly generated Boolean functions. Decision trees with hypotheses generally have lower complexity, i.e., they are more understandable and more suitable as a means of knowledge representation.
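To make the setting concrete, the following is a minimal Python sketch (not code from the paper) of the kind of dynamic programming recursion described above: it computes the minimum worst-case depth of a decision tree over a toy decision table, allowing both attribute queries and hypothesis queries. The toy table (parity of three Boolean attributes), all names, and the restriction of hypotheses to rows of the current subtable are illustrative assumptions; the paper's algorithms work over subtables of a decision table in general and also minimize the number of nodes.

```python
from functools import lru_cache
from itertools import product

# Toy decision table (an assumption for illustration, not data from the paper):
# rows are all 3-bit inputs, the decision is their parity.
ROWS = list(product((0, 1), repeat=3))
DECISIONS = [a ^ b ^ c for a, b, c in ROWS]
N_ATTRS = len(ROWS[0])

@lru_cache(maxsize=None)
def min_depth(subtable):
    """Minimum worst-case number of queries needed to determine the decision,
    given `subtable`, a frozenset of row indices still consistent with the
    answers received so far."""
    if len({DECISIONS[i] for i in subtable}) <= 1:
        return 0  # the decision is already determined

    best = float("inf")

    # Query type 1: ask for the value of a single attribute a.
    for a in range(N_ATTRS):
        values = {ROWS[i][a] for i in subtable}
        if len(values) < 2:
            continue  # the answer is already known, no information gained
        worst = max(
            min_depth(frozenset(i for i in subtable if ROWS[i][a] == v))
            for v in values
        )
        best = min(best, 1 + worst)

    # Query type 2: propose a hypothesis h about the values of all attributes
    # (here restricted to rows of the current subtable).  The answer is either
    # "yes" or a counterexample "attribute a has value v != h[a]".
    for j in subtable:
        h = ROWS[j]
        outcomes = [frozenset(i for i in subtable if ROWS[i] == h)]  # "yes"
        for a in range(N_ATTRS):
            for v in {ROWS[i][a] for i in subtable} - {h[a]}:
                outcomes.append(frozenset(i for i in subtable if ROWS[i][a] == v))
        worst = max(min_depth(o) for o in outcomes)
        best = min(best, 1 + worst)

    return best

print(min_depth(frozenset(range(len(ROWS)))))
```

Minimizing the number of nodes can be organized along the same recursion, replacing the worst case over answers by a sum over answers plus one for the query node itself.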
Original language | English (US) |
---|---|
Pages (from-to) | 1580 |
Journal | Electronics |
Volume | 10 |
Issue number | 13 |
DOIs | |
State | Published - Jun 30 2021 |
Bibliographical note
Acknowledgements: Research reported in this publication was supported by King Abdullah University of Science and Technology (KAUST), including the provision of computing resources. The authors are greatly indebted to the anonymous reviewers for their useful comments and suggestions.