Neuromorphic Computing for Occlusion-Aware Object Detection in AR/VR: A Review of SNN-Based Real-Time Techniques
Description
The rapid integration of Augmented Reality (AR) and Virtual Reality (VR) into consumer and industrial applications has created an urgent need for efficient object detection systems that can process real-time data under challenging conditions, including occlusion. Conventional object detection frameworks, particularly those built on deep learning architectures such as R-CNN and YOLO, often struggle in occluded environments because they rely on complete scene information and impose high computational demands. These limitations are exacerbated on resource-constrained platforms such as mobile AR/VR devices and head-mounted displays. Neuromorphic computing, inspired by the biological brain and implemented using Spiking Neural Networks (SNNs), offers a promising alternative through event-driven processing and low power consumption. This paper reviews and evaluates state-of-the-art approaches that integrate neuromorphic methods for occlusion-aware object detection in AR/VR environments. Two prominent strategies are examined: the first converts ANN-based YOLO frameworks into SNN-compatible models using channel-wise normalization, and the second applies Mask R-CNN image segmentation before SNN-based detection. Experimental results on benchmark datasets demonstrate that SNN-based models not only improve detection accuracy under occlusion (up to 98.60% with YOLO-V3-Tiny-based models) but also reduce computational overhead, making them suitable for real-time deployment. This review highlights the emerging role of neuromorphic computing in enhancing perception for immersive systems and lays the foundation for future developments in AR/VR vision technologies.
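To make the first strategy concrete, the sketch below illustrates channel-wise normalization as it is typically used in ANN-to-SNN conversion: each output channel's weights are rescaled by the ratio of per-channel maximum activations gathered during a calibration pass over training data, so that post-conversion firing rates stay within the range the spiking neurons can represent. This is a minimal NumPy sketch under those assumptions; the function name, tensor shapes, and calibration statistics are illustrative and not taken from the reviewed papers.

```python
import numpy as np

def channel_wise_normalize(weights, biases, prev_max_act, cur_max_act, eps=1e-8):
    """Rescale a conv layer channel-by-channel for ANN-to-SNN conversion.

    weights      : (out_ch, in_ch, kH, kW) kernel from the trained ANN
    biases       : (out_ch,) bias vector
    prev_max_act : (in_ch,)  per-channel max activations of the previous layer
    cur_max_act  : (out_ch,) per-channel max activations of this layer
    """
    w = weights.astype(np.float64).copy()
    # Undo the normalization the previous layer applied to each input channel.
    w *= prev_max_act[None, :, None, None]
    # Normalize each output channel by its own maximum activation so the
    # converted firing rates stay within the SNN's representable range.
    w /= cur_max_act[:, None, None, None] + eps
    b = biases.astype(np.float64) / (cur_max_act + eps)
    return w, b

# Toy usage with random kernels and hypothetical calibration statistics.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8, 3, 3))
b = rng.normal(size=16)
lam_prev = rng.uniform(0.5, 2.0, size=8)   # per-channel maxima, previous layer
lam_cur = rng.uniform(0.5, 2.0, size=16)   # per-channel maxima, current layer
w_hat, b_hat = channel_wise_normalize(w, b, lam_prev, lam_cur)
print(w_hat.shape, b_hat.shape)
```

Normalizing per channel rather than per layer avoids the underestimated firing rates that a single layer-wide maximum produces in channels with small activations, which is the motivation the abstract alludes to for using channel-wise normalization in SNN-compatible YOLO models.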
Files
Review paper publish.pdf (1.6 MB)
md5:a1a42d7ce9d8c2666bd6ba0ffca3099a