A novel ensemble learning method using majority based voting of multiple selective decision trees

Mohammad Azad*, Tasnemul Hasan Nehal, Mikhail Moshkov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Traditional decision tree algorithms are susceptible to bias when certain classes dominate the dataset and are prone to overfitting, particularly if they are not pruned. Previous studies have shown that combining several models can mitigate these issues by improving predictive accuracy and robustness. In this study, we propose a novel approach to address these challenges by constructing multiple selective decision trees from the entire input dataset and employing a majority voting scheme for output forecasting. Our method outperforms competing algorithms, including KNN, Decision Trees, Random Forest, Bagging, XGB, Gradient Boost, and ExtraTrees, achieving the highest accuracy on five out of ten datasets. This practical exploration highlights the effectiveness of our approach in enhancing decision tree performance across diverse datasets.
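The abstract outlines the core idea: train several decision trees on the entire training set and combine their predictions by majority vote. The sketch below, in Python with scikit-learn, illustrates only that voting scheme; the abstract does not specify how the paper's "selective" trees are constructed, so the diversification used here (random feature subsetting via max_features plus different seeds) is an illustrative assumption, not the authors' construction.

```python
# Minimal sketch: majority voting over several decision trees, each trained
# on the full training set. The way the trees are diversified below is an
# assumption for illustration; the paper's "selective" tree construction is
# not described in the abstract.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train several trees on the *entire* training set (no bootstrapping).
# Diversity comes from random feature subsetting and different seeds
# (illustrative choice only).
trees = [
    DecisionTreeClassifier(max_features="sqrt", random_state=seed).fit(X_train, y_train)
    for seed in range(7)
]

# Collect per-tree predictions: shape (n_trees, n_samples).
all_preds = np.stack([tree.predict(X_test) for tree in trees])

# Majority vote per test sample (class labels are non-negative integers).
majority_pred = np.apply_along_axis(
    lambda col: np.bincount(col).argmax(), axis=0, arr=all_preds
)

print(f"Majority-vote accuracy: {np.mean(majority_pred == y_test):.3f}")
```

Unlike bagging, every tree in this sketch sees the whole training set, matching the abstract's description; only the feature-subsetting assumption above introduces diversity among the trees.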

Original language: English (US)
Article number: 42
Journal: Computing
Volume: 107
Issue number: 1
DOIs
State: Published - Jan 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

Keywords

  • Decision tree
  • Ensembles
  • Machine learning
  • Majority vote
  • Multiple tree

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Numerical Analysis
  • Computer Science Applications
  • Computational Theory and Mathematics
  • Computational Mathematics

