Vision-Based Human Action Classification Using Adaptive Boosting Algorithm

Nabil Zerrouki, Fouzi Harrou*, Ying Sun, Amrane Houacine

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

73 Scopus citations

Abstract

Precise recognition of human actions is a key enabler for many applications, including autonomous robots for medical diagnosis and the surveillance of elderly people in home environments. This paper addresses human action recognition based on variation in body shape. Specifically, we divide the human body into five partitions that correspond to five partial occupancy areas. For each frame, area ratios are calculated and used as input data for the recognition stage. Six classes of activities are considered, namely: walking, standing, bending, lying, squatting, and sitting. We propose an efficient human action recognition scheme that takes advantage of the superior discrimination capacity of the adaptive boosting algorithm. We validated the effectiveness of this approach using experimental data from two publicly available fall detection data sets, from the University of Rzeszow and the Universidad de Málaga. Comparisons with state-of-the-art classifiers based on the neural network, K-nearest neighbor, support vector machine, and naïve Bayes show that the proposed approach achieves better results in discriminating human gestures.
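The feature-extraction idea described in the abstract can be sketched in a few lines: split a binary silhouette mask into five horizontal bands, compute each band's share of the total silhouette area, and feed those ratios to a boosted classifier. The code below is a minimal illustration of that pipeline, not the authors' implementation; the partition scheme (equal horizontal bands), the toy silhouettes, and the use of scikit-learn's `AdaBoostClassifier` are all assumptions for the sake of the sketch.

```python
# Hypothetical sketch of the partial-occupancy-ratio + AdaBoost pipeline.
# Assumes a binary silhouette mask and five equal horizontal partitions;
# the real paper's partitioning and training setup may differ.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def area_ratios(mask, n_parts=5):
    """Split a binary silhouette mask into n_parts horizontal bands and
    return each band's foreground area divided by the total area."""
    bands = np.array_split(mask, n_parts, axis=0)
    areas = np.array([band.sum() for band in bands], dtype=float)
    total = areas.sum()
    return areas / total if total > 0 else areas

# Toy silhouettes: a tall "standing" blob vs. a flat "lying" blob.
standing = np.zeros((100, 60), dtype=np.uint8)
standing[5:95, 25:35] = 1          # thin vertical region
lying = np.zeros((100, 60), dtype=np.uint8)
lying[80:95, 5:55] = 1             # wide region near the floor

X = np.array([area_ratios(standing), area_ratios(lying)])
y = np.array([0, 1])               # 0 = standing, 1 = lying

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.predict(X))
```

On real data, `X` would hold one five-dimensional ratio vector per video frame, with one label per frame drawn from the six activity classes.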

Original language: English (US)
Pages (from-to): 5115-5121
Number of pages: 7
Journal: IEEE Sensors Journal
Volume: 18
Issue number: 12
DOIs
State: Published - Jun 15 2018

Bibliographical note

Publisher Copyright:
© 2001-2012 IEEE.

Keywords

  • Fall detection
  • cascade classifier
  • gesture recognition
  • vision computing

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering
