Off-Road Terrain Identification and Analysis


Nagula Sai Sathvik, Pudi Manikanta, Kotha Sai Varshith, Pendli Bestha Sai Praneeth, Mohit Lalit, Mohit Angurala

Abstract

Background: Terrain plays a paramount role in enabling an autonomous vehicle to drive safely on any type of surface. Autonomous vehicles should be able to identify the terrain they are on and adapt to the environment. With the evolution of robotics and artificial intelligence, and a growing understanding of diverse terrains, terrain-identification techniques are also advancing, with a major focus on safety.


Methodology: To make terrain detection and identification more reliable, we used instance segmentation, a more sophisticated form of segmentation that goes a step beyond semantic segmentation by performing object detection and segmentation at the same time. For instance segmentation we used the YOLOv8 architecture, a state-of-the-art convolutional neural network (CNN) architecture. The YOLOv8 model was trained on an off-road terrain dataset.
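
To make the training setup concrete, the following is a minimal sketch of fine-tuning a pretrained YOLOv8 segmentation model with the Ultralytics Python API; the dataset configuration file name and the hyperparameters shown are illustrative assumptions, not the exact settings used in this work.

# Minimal YOLOv8 instance-segmentation training sketch (Ultralytics API).
# "offroad_terrain.yaml" is a hypothetical dataset config listing images and masks.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")       # start from pretrained segmentation weights
model.train(
    data="offroad_terrain.yaml",     # assumed dataset definition for the off-road terrain data
    epochs=100,                      # illustrative training length
    imgsz=640,                       # illustrative input resolution
)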


Results: Our findings indicate that the state-of-the-art YOLOv8 instance segmentation model provided the best results for terrain detection and segmentation at a confidence threshold of 0.60, reaching a maximum confidence of 0.92, which indicates an accurate segmentation model for the given terrain-detection problem.
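
As an illustration of how the 0.60 threshold is applied at inference time, the hedged sketch below filters predicted instances by confidence with the Ultralytics API; the weights path and image name are placeholder assumptions.

# Run the trained segmentation model and keep predictions with confidence >= 0.60.
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")            # assumed location of trained weights
results = model.predict(source="trail_scene.jpg", conf=0.60)  # placeholder test image
for r in results:
    if r.masks is not None:                                   # segmentation masks for detected terrain instances
        print(r.boxes.conf.tolist())                          # per-instance confidence scores (e.g., up to ~0.92)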


Conclusion: The present work motivates a more viable hardware model that makes use of trained computer vision models and cutting-edge sensors and can be tested on different soils and terrains. The results obtained can be used to study different terrains and select the most suitable model, which in turn drives further research on terrain identification and detection.

Article Details

Section: Articles
Author Biography


1 Nagula Sai Sathvik

2 Pudi Manikanta

3 Kotha Sai Varshith

4 Pendli Bestha Sai Praneeth

5 Mohit Lalit

6 Mohit Angurala

1* Corresponding author: Nagula Sai Sathvik, Chandigarh University, Gharuan, India - 140314.

E-mail: sathvik2210@gmail.com

2, 3, 4, 5, 6 Chandigarh University, Gharuan, India - 140314

Copyright © JES 2023 on-line : journal.esrgroups.org
