AIRCloud: A Segmented Dataset of 3D Point Clouds in Brazil
Abstract
Semantic segmentation of 3D point clouds is pivotal to advancing autonomous vehicles because it provides detailed information about the surrounding environment. Nevertheless, major challenges stem from the overall scarcity of annotated datasets for low-resolution LiDAR sensors and the complete absence of collections acquired specifically within Brazil. This work introduces AIRCloud, a segmented 3D point-cloud dataset captured in Brazil with a 16-beam LiDAR sensor. For validation, we employ the Range-Image U-Net (RIU-Net) architecture previously trained on SemanticKITTI. Multiple pre- and post-processing techniques were assessed to ease the limitations imposed by the sensor's low resolution. Results, expressed as mean Intersection over Union (mIoU), show that targeted strategies can boost RIU-Net performance, for example raising mIoU from 40.9% (baseline) to 44.6% with nearest-neighbor interpolation. These findings underscore the potential of lower-cost sensors in Brazilian contexts, broadening the outlook for autonomous-driving research in Brazil.
References
Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J. (2019). SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. In Proc. of the IEEE/CVF International Conf. on Computer Vision (ICCV).
Behley, J. and Stachniss, C. (2018). Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments. In Proc. of Robotics: Science and Systems (RSS).
Biasutti, P., Bugeau, A., Aujol, J.-F., and Brédif, M. (2019). RIU-Net: Embarrassingly simple semantic segmentation of 3D LiDAR point cloud. arXiv preprint arXiv:1905.08748.
Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. (2020). nuScenes: A multimodal dataset for autonomous driving. In CVPR.
Chen, K., Oldja, R., Smolyanskiy, N., Birchfield, S., Popov, A., Wehr, D., Eden, I., and Pehserl, J. (2020). MVLidarNet: Real-time multi-class scene understanding for autonomous driving using multiple views. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2288–2294.
Chen, S., Liu, B., Feng, C., Vallespi-Gonzalez, C., and Wellington, C. (2021). 3d point cloud processing and learning for autonomous driving: Impacting map creation, localization, and perception. IEEE Signal Processing Magazine, 38(1):68–86.
Elhousni, M. and Huang, X. (2020). A survey on 3d lidar localization for autonomous vehicles. In 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1879–1884.
Geiger, A., Lenz, P., and Urtasun, R. (2012). Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 3354–3361.
Pan, Y., Gao, B., Mei, J., Geng, S., Li, C., and Zhao, H. (2020). SemanticPOSS: A point cloud dataset with large quantity of dynamic instances. In 2020 IEEE Intelligent Vehicles Symposium (IV).
Pendleton, S. D., Andersen, H., Du, X., Shen, X., Meghjani, M., Eng, Y. H., Rus, D., and Ang, M. H. (2017). Perception, planning, control, and coordination for autonomous vehicles. Machines, 5(1).
Rateke, T., Justen, K., and von Wangenheim, A. (2019). Road surface classification with images captured from low-cost camera - Road Traversing Knowledge (RTK) dataset. Revista de Informática Teórica e Aplicada, 26.
Shinzato, P. Y., dos Santos, T. C., Rosero, L. A., Ridel, D. A., Massera, C. M., Alencar, F., Batista, M. P., Hata, A. Y., Osório, F. S., and Wolf, D. F. (2016). CaRINA dataset: An emerging-country urban scenario benchmark for road detection systems. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pages 41–46.
Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., Caine, B., Vasudevan, V., Han, W., Ngiam, J., Zhao, H., Timofeev, A., Ettinger, S., Krivokon, M., Gao, A., Joshi, A., Zhang, Y., Shlens, J., Chen, Z., and Anguelov, D. (2020). Scalability in perception for autonomous driving: Waymo open dataset. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2443–2451.
Wu, B., Wan, A., Yue, X., and Keutzer, K. (2018). SqueezeSeg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1887–1893.
Published
2025-07-20
How to Cite
SANTOS, Lucas B.; PINHEIRO, Beatriz; MARTINS, Pedro; MATTEUS, Victor; LEONEL, Matheus; SENE, Iwens G.; ARAÚJO, Lucas. AIRCloud: A Segmented Dataset of 3D Point Clouds in Brazil. In: INTEGRATED SOFTWARE AND HARDWARE SEMINAR (SEMISH), 52., 2025, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 203-214. ISSN 2595-6205. DOI: https://doi.org/10.5753/semish.2025.8245.
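The abstract refers to a range-image pipeline: the point cloud is spherically projected onto a 2D range image (the input representation used by RIU-Net) and empty pixels, which are frequent with a 16-beam sensor, can be filled by nearest-neighbor interpolation. The following is a minimal illustrative sketch of that idea, not the paper's implementation; the function names, the 16×1024 resolution, and the ±15° vertical field of view (typical of a 16-beam sensor) are assumptions.

```python
import numpy as np

def spherical_projection(points, h=16, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image.

    h matches the number of laser beams; fov_up/fov_down are the
    vertical field-of-view bounds in degrees (illustrative values).
    """
    fov_r = np.radians(fov_up - fov_down)

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)  # elevation

    # Normalize angles to pixel coordinates (column u, row v).
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - np.radians(fov_down)) / fov_r) * h

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    img = np.zeros((h, w), dtype=np.float32)  # 0 marks "no return"
    img[v, u] = depth
    return img

def fill_nearest(img):
    """Fill empty pixels with the nearest valid range value along each row."""
    out = img.copy()
    for r in range(out.shape[0]):
        row = out[r]
        valid = np.flatnonzero(row > 0)
        if valid.size == 0:
            continue  # fully empty row: nothing to interpolate from
        empty = np.flatnonzero(row == 0)
        # For each empty pixel, index of the closest valid pixel in the row.
        nearest = valid[np.argmin(np.abs(empty[:, None] - valid[None, :]), axis=1)]
        row[empty] = row[nearest]
    return out
```

A point straight ahead at (1, 0, 0) lands in the middle column of the middle row of the image, and `fill_nearest` propagates its range value across the otherwise empty row while leaving rows with no returns untouched.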
