Low-Light Robust Detection System for Vulnerable Road Users
Abstract
This work proposes a computer-vision-based Advanced Driver-Assistance System (ADAS) designed to enhance the safety of Vulnerable Road Users (VRUs) in blind spot regions using only rear-view camera input. The method combines object detection and multi-object tracking to generate proximity alerts in real time. The system also integrates a fallback mechanism for low-light conditions, based on headlight detection. To support research in this domain, we introduce FilterLane-VRU, a new dataset composed of real-world rear-view urban traffic scenarios annotated with alert labels. The proposed pipeline offers a cost-effective and reliable solution for VRU protection in urban contexts.
Keywords:
ADAS system, Vulnerable road users (VRUs), Computer vision, Object detection and tracking
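The alert pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the thresholds, the box-area proxy for proximity, and the mean-intensity low-light test are all assumptions chosen for illustration, standing in for the paper's detector/tracker outputs and headlight-based fallback.

```python
import numpy as np

# Hypothetical thresholds -- the published values are not reproduced here.
LOW_LIGHT_MEAN = 60.0    # mean grayscale intensity below this => low-light mode
ALERT_AREA_RATIO = 0.05  # box covering >5% of the frame => "close" VRU

def is_low_light(frame: np.ndarray) -> bool:
    """Crude low-light test: average intensity of a grayscale frame.
    In the paper this decision gates a headlight-detection fallback."""
    return float(frame.mean()) < LOW_LIGHT_MEAN

def proximity_alerts(tracks, frame_shape):
    """Flag tracked VRUs whose bounding box occupies a large share of the
    frame -- a simple monocular proxy for proximity.

    tracks: list of (track_id, x1, y1, x2, y2) boxes in pixels, as a
    detector+tracker stage (e.g. YOLO + ByteTrack) might produce.
    """
    h, w = frame_shape[:2]
    frame_area = h * w
    alerts = []
    for tid, x1, y1, x2, y2 in tracks:
        area = max(0, x2 - x1) * max(0, y2 - y1)
        if area / frame_area > ALERT_AREA_RATIO:
            alerts.append(tid)
    return alerts
```

For example, a 200x200 box in a 640x480 frame covers about 13% of the image and would raise an alert, while a 10x10 box would not; a frame with mean intensity 20 would switch the system into its low-light mode.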
Published
2025-05-19
How to Cite
AVENA, Vinicius; COUTO, Rodrigo S.; CAMPISTA, Miguel Elias M.; COSTA, Luís Henrique M. K. Low-Light Robust Detection System for Vulnerable Road Users. In: URBAN COMPUTING WORKSHOP (COURB), 9., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 155-168. ISSN 2595-2706. DOI: https://doi.org/10.5753/courb.2025.9064.
