Published July 30, 2025 | License: CC-BY-NC-ND 4.0
Journal article | Open Access

Advancements in Yoga Pose Recognition and Correction: A Comprehensive Literature Review

  • 1. Department of Computer Science and Engineering, PES University, Bengaluru (Karnataka), India.

Contributors

Contact person:

Researcher:

  • 1. Department of Computer Science and Engineering, PES University, Bengaluru (Karnataka), India.
  • 2. Assistant Professor, Department of Computer Science and Engineering, PES University, Bengaluru (Karnataka), India.

Description

Abstract: The growing popularity of yoga, especially in post-pandemic wellness trends, has led to an increasing demand for automated systems capable of real-time pose detection, classification, and correction. This literature review surveys and compares recent advancements in yoga pose recognition and correction technologies, with an emphasis on the integration of deep learning, computer vision, and optimisation techniques. The reviewed works employ a variety of methods, ranging from CNNs, GRUs, and Vision Transformers to heuristic-based models and hybrid architectures, for both static and dynamic pose estimation. Systems leveraging lightweight models, such as MoveNet, as well as multimodal approaches combining AR, personalised recommendations, and real-time corrective feedback, demonstrate significant potential for mobile, web-based, and wearable deployments. This review synthesises insights on model performance, technological innovations, and future opportunities, providing a foundation for researchers and developers aiming to build intelligent, user-centric yoga tutor applications.
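
To make the surveyed pipeline concrete, the following minimal sketch (in Python, assuming TensorFlow and TensorFlow Hub are installed) shows how a lightweight estimator such as MoveNet SinglePose Lightning can be queried for 17 body keypoints from a single frame. The model URL, the 192x192 input size, and the output signature follow the public TensorFlow Hub listing; they are illustrative assumptions on the editor's part, not details drawn from the reviewed papers.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load MoveNet SinglePose Lightning from TensorFlow Hub (public listing URL).
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def detect_keypoints(frame_rgb: np.ndarray) -> np.ndarray:
    # MoveNet Lightning expects a batched 192x192 int32 RGB image.
    inp = tf.expand_dims(tf.convert_to_tensor(frame_rgb), axis=0)
    inp = tf.cast(tf.image.resize_with_pad(inp, 192, 192), dtype=tf.int32)
    outputs = movenet(inp)
    # Output shape is [1, 1, 17, 3]: 17 keypoints as (y, x, confidence) in [0, 1].
    return outputs["output_0"].numpy()[0, 0]

# Example: a dummy frame standing in for a webcam capture or an uploaded pose image.
keypoints = detect_keypoints(np.zeros((480, 640, 3), dtype=np.uint8))
print(keypoints.shape)  # (17, 3)

In a yoga tutor of the kind surveyed, these keypoints would typically feed a pose classifier (for example, a CNN or a GRU over a sequence of frames) and a correction module that compares the user's joint angles against a reference pose to generate feedback.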

Files

C367915030725.pdf (696.9 kB)
md5:95f16ea243ef7575f316081e61d7574f

Additional details

Dates

Accepted: 2025-07-15
Manuscript Received on 23 May 2025 | First Revised Manuscript Received on 27 June 2025 | Second Revised Manuscript Received on 09 July 2025 | Manuscript Accepted on 15 July 2025 | Manuscript published on 30 July 2025.
