A Visual SLAM-Based Framework for Indoor Hovering Robots in 3D Rescue Missions with Dynamic Obstacle Avoidance

Authors

  • Dania Madeleina Otoka Niabanga
  • Amir Ali Mokhtarzadeh
  • Ervan Novt Tanggono
  • Gloire Orianne Kifouria
  • Josepha Fansi Nguietchuan
  • Israel Ntumba Mbala

DOI:

https://doi.org/10.54097/xtcjg268

Keywords:

UAV, Indoor rescue, Visual-inertial SLAM, Dynamic obstacle avoidance, Adaptive path planning, Deep learning, LiDAR, GPS-denied environments

Abstract

This paper presents an intelligent and energy-efficient UAV framework for indoor rescue missions in GPS-denied environments. The proposed system integrates visual-inertial SLAM (Simultaneous Localization and Mapping), dynamic obstacle avoidance, and adaptive path planning using lightweight embedded sensors. At its core, the design fuses monocular camera data with inertial measurements through ORB-SLAM3, enhanced by 3D depth sensing from the STMicroelectronics VL53L9 dToF LiDAR. A deep learning module based on YOLOv5 and semantic mapping enables the UAV to detect and navigate around dynamic obstacles, while a novel risk-based path planner evaluates cost maps in real time to optimize safety and energy consumption. The architecture is evaluated in simulated disaster environments, demonstrating improved localization accuracy, reduced power usage, and superior obstacle avoidance over conventional SLAM-based and planning baselines. The results support the framework’s viability for deployment in real-world rescue missions requiring intelligent aerial autonomy.
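The risk-based path planner is described only at a high level in the abstract (cost maps evaluated in real time to balance safety and energy). As an illustrative sketch only, not the paper's implementation, a planner of this kind can be expressed as a graph search over a 2D cost map in which each move pays a fixed energy cost plus a penalty proportional to the destination cell's risk; the function name, grid representation, and weights below are assumptions:

```python
import heapq

def plan_risk_aware_path(risk, start, goal, risk_weight=5.0, step_cost=1.0):
    """Dijkstra search over a 2D risk map (illustrative sketch).

    Each 4-connected move costs `step_cost` (a stand-in for energy)
    plus `risk_weight` times the destination cell's risk value, so the
    planner trades path length against proximity to hazards.
    Returns (total_cost, path) or (inf, []) if the goal is unreachable.
    """
    rows, cols = len(risk), len(risk[0])
    frontier = [(0.0, start, [start])]   # min-heap ordered by accumulated cost
    best = {start: 0.0}                  # cheapest known cost per cell
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cost > best.get(cell, float("inf")):
            continue                     # stale heap entry; skip
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + step_cost + risk_weight * risk[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier,
                                   (new_cost, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []
```

With a risk map containing a high-risk column, the search detours around it whenever the accumulated risk penalty exceeds the extra step cost, which is the qualitative behavior the abstract attributes to the planner.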

Downloads

Download data is not yet available.

References

[1] Shahmoradi, A., et al. (2023). UAV-assisted search and rescue in disaster management: A review. Drones, 7(2), 64. https://doi.org/10.3390/drones7020064

[2] Gupta, A., & Fernando, X. (2022). Simultaneous localization and mapping (SLAM) and data fusion in unmanned aerial vehicles: Recent advances and challenges. Drones, 6(4), 85. https://doi.org/10.3390/drones6040085

[3] Fu, Y., Zhou, X., & Xu, D. (2021). Visual SLAM with dynamic obstacle avoidance in indoor environments. Sensors, 21(6), 2034. https://doi.org/10.3390/s21062034

[4] Yang, L., Ye, J., Zhang, Y., Wang, L., & Qiu, C. (2024). A semantic SLAM-based method for navigation and landing of UAVs in indoor environments. Knowledge-Based Systems, 293, 111693. https://doi.org/10.1016/j.knosys.2024.111693

[5] Mokhtarzadeh, A. A., & Yangqing, Z. J. (2018). Human-robot interaction and self-driving-car safety: Integrating dispositif networks. Proc. IEEE Int. Conf. Intelligence and Safety for Robotics (ISR), Shenyang, 494–499. https://doi.org/10.1109/IISR.2018.8535696

[6] Birk, A., et al. (2009). Rescue robotics—a crucial milestone on the road to intelligent systems. Advanced Robotics, 23(9), 1051–1068. https://doi.org/10.1163/156855309X452449

[7] Tzoumanikas, D., et al. (2019). A survey of SLAM techniques for UAVs. Drones, 3(4), 75. https://doi.org/10.3390/drones3040075

[8] Tardioli, D., et al. (2019). Autonomous navigation of aerial robots in confined spaces. Sensors, 19(5), 1097. https://doi.org/10.3390/s19051097

[9] Zhou, L., Wang, Z., & Wang, H. (2020). Visual SLAM for indoor environments: A comprehensive survey. Robotics and Autonomous Systems, 125, 103425. https://doi.org/10.1016/j.robot.2019.103425

[10] Cadena, C., et al. (2016). Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics, 32(6), 1309–1332. https://doi.org/10.1109/TRO.2016.2624754

[11] Fuentes-Pacheco, J., Ruiz-Ascencio, J., & Rendón-Mancha, J. M. (2015). Visual simultaneous localization and mapping: A survey. Artificial Intelligence Review, 43, 55–81. https://doi.org/10.1007/s10462-013-9405-y

[12] Xiong, Y., Zhou, Y., She, J., & Yu, A. (2025). Collaborative coverage path planning for UAV swarm for multi-region post-disaster assessment. Vehicular Communications, 100915. https://doi.org/10.1016/j.vehcom.2025.100915

[13] Qin, T., Li, P., & Shen, S. (2018). VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, 34(4), 1004–1020. https://doi.org/10.1109/TRO.2018.2853729

[14] Mur-Artal, R., & Tardós, J. D. (2017). ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics, 33(5), 1255–1262. https://doi.org/10.1109/TRO.2017.2705103

[15] Engel, J., Schöps, T., & Cremers, D. (2014). LSD-SLAM: Large-scale direct monocular SLAM. ECCV, 834–849. https://doi.org/10.1007/978-3-319-10605-2_54

[16] Campos, C., et al. (2021). ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multi-map SLAM. IEEE Transactions on Robotics, 37(6), 1874–1890. https://doi.org/10.1109/TRO.2021.3075644

[17] Fu, Y., Zhou, X., & Xu, D. (2021). Visual SLAM with dynamic obstacle avoidance in indoor environments. Sensors, 21(6), 2034. https://doi.org/10.3390/s21062034

[18] Chen, X., et al. (2021). Real-time dynamic obstacle detection and tracking for autonomous robots using deep learning. IEEE Access, 9, 71349–71359. https://doi.org/10.1109/ACCESS.2021.3077794

[19] Dharmadhikari, T., Sahu, S. K., & Gandhi, N. (2020). Deep learning-based dynamic obstacle avoidance for UAVs. IEEE Sensors Journal, 20(24), 15281–15289. https://doi.org/10.1109/JSEN.2020.3029122

[20] Tai, L., Paolo, G., & Liu, M. (2017). Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation. IROS, 31–36. https://doi.org/10.1109/IROS.2017.8202134

[21] Zhou, Y., Zhou, B., & Wang, J. (2021). LiDAR-based SLAM and motion planning for UAVs in cluttered environments. Sensors, 21(18), 6058. https://doi.org/10.3390/s21186058

[22] Fei, H., Mokhtarzadeh, A. A., Zhou, J., & Geng, J. (2023). Feature-fusion panoramic segmentation via deep neural networks. Journal of Physics: Conference Series, 2467, 012006. https://doi.org/10.1088/1742-6596/2467/1/012006

[23] Lin, J., Gu, Z., Mokhtarzadeh, A. A., Chen, X., Ashim, K., & Shi, K. (2021). A fast humanoid-robot arm for boxing based on servo motors. Proc. Int. Conf. High-Performance Big Data and Intelligent Systems (HPBD&IS), Macau, 252–255. https://doi.org/10.1109/HPBDIS53214.2021.9658471

[24] Song, L., & Mokhtarzadeh, A. A. (2023). Automatic-charging method for quadruped robots. Journal of Physics: Conference Series, 2467, 012028. https://doi.org/10.1088/1742-6596/2467/1/012028

[25] Song, L., & Mokhtarzadeh, A. A. (2023). Automatic-charging method for quadruped robots. Journal of Physics: Conference Series, 2467, 012028. https://doi.org/10.1088/1742-6596/2467/1/012028

[26] Li, S., Mokhtarzadeh, A. A., Gao, H., & Zhang, Y. (2023). Pose-aware multi-position feature network for driver-distraction recognition. Journal of Physics: Conference Series, 2467, 012013. https://doi.org/10.1088/1742-6596/2467/1/012013

[27] Mokhtarzadeh, A. A. (2023). Robotics and AI. Nanjing: Nanjing University Press.

Published

29-04-2026

Issue

Section

Articles

How to Cite

Niabanga, D. M. O., Mokhtarzadeh, A. A., Tanggono, E. N., Kifouria, G. O., Nguietchuan, J. F., & Mbala, I. N. (2026). A Visual SLAM-Based Framework for Indoor Hovering Robots in 3D Rescue Missions with Dynamic Obstacle Avoidance. Journal of Computing and Electronic Information Management, 21(1), 101-110. https://doi.org/10.54097/xtcjg268