RD: A NOVEL DATASET FOR OBJECT DETECTION IN RAINY WEATHER CONDITIONS

  • Quoc-Viet Hoang National Taipei University of Technology (NTUT), Taiwan
  • Trung-Hieu Le National Taipei University of Technology (NTUT), Taiwan
  • Minh-Quy Nguyen Hung Yen University of Technology and Education
Keywords: Machine learning, deep learning, object detection, dataset

Abstract

In recent years, object detection models based on convolutional neural networks have received considerable attention and achieved appealing results in autonomous driving systems. In inclement weather conditions, however, the performance of these models degrades severely, largely because relevant training data is scarce. In this work, we address the problem of object detection under rain interference by introducing a novel rain driving dataset, named RD. The dataset comprises 1,100 real-world rainy images collected from diverse sources and depicting a variety of driving scenes, each annotated with ground-truth bounding boxes for five common traffic object categories. We use RD to train three state-of-the-art object detection models: SSD512, RetinaNet, and YOLO-V3. Experimental results show that training on RD improves the performance of SSD512, RetinaNet, and YOLO-V3 by up to 5.64%, 8.97%, and 5.70%, respectively.
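The abstract does not spell out the annotation format, so the following is a minimal sketch of how the ground-truth bounding boxes could be read for training, assuming the annotations follow the Pascal VOC XML format produced by labelImg (the annotation tool listed in the references); the file name used here is hypothetical.

```python
# Minimal sketch: reading one RD annotation file, assuming Pascal VOC XML
# as produced by labelImg. The file name below is hypothetical.
import xml.etree.ElementTree as ET


def parse_voc_annotation(xml_path):
    """Return a list of (class_name, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        bndbox = obj.find("bndbox")
        xmin = int(float(bndbox.find("xmin").text))
        ymin = int(float(bndbox.find("ymin").text))
        xmax = int(float(bndbox.find("xmax").text))
        ymax = int(float(bndbox.find("ymax").text))
        boxes.append((name, xmin, ymin, xmax, ymax))
    return boxes


if __name__ == "__main__":
    # Hypothetical annotation file; the actual RD directory layout is not
    # described in the abstract.
    for box in parse_voc_annotation("rd_000001.xml"):
        print(box)
```

Lists of such boxes can then be converted into whatever target encoding the chosen detector (SSD512, RetinaNet, or YOLO-V3) expects.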

References

R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.

R. Girshick, “Fast R-CNN,” in The IEEE International Conference on Computer Vision (ICCV), December 2015.

S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.

W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in European conference on computer vision. Springer, 2016, pp. 21–37.

T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980–2988.

J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.

J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.

J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263–7271.

T. Le, D. Jaw, I. Lin, H. Liu and S. Huang, “An efficient hand detection method based on convolutional neural network,” 2018 7th International Symposium on Next Generation Electronics (ISNE), Taipei, 2018, pp. 1-2; doi: 10.1109/ISNE.2018.8394651.

T. Le, S. Huang and D. Jaw, “Cross-Resolution Feature Fusion for Fast Hand Detection in Intelligent Homecare Systems,” in IEEE Sensors Journal, vol. 19, no. 12, pp. 4696-4704, 15 June 2019; doi: 10.1109/JSEN.2019.2901259.

Q. -V. Hoang, T. -H. Le and S. -C. Huang, “An Improvement of RetinaNet for Hand Detection in Intelligent Homecare Systems,” 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 2020, pp. 1-2; doi: 10.1109/ICCE-Taiwan49838.2020.9258335.

Q. -V. Hoang, T. -H. Le and S. -C. Huang, “Data Augmentation for Improving SSD Performance in Rainy Weather Conditions,” 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 2020, pp. 1-2; doi: 10.1109/ICCE-Taiwan49838.2020.9258127.

S. -C. Huang, T. -H. Le and D. -W. Jaw, “DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions,” in IEEE Transactions on Pattern Analysis and Machine Intelligence; doi: 10.1109/TPAMI.2020.2977911.

T. -H. Le, P. -H. Lin and S. -C. Huang, “LD-Net: An Efficient Lightweight Denoising Model Based on Convolutional Neural Network,” in IEEE Open Journal of the Computer Society, 2020, vol. 1, pp. 173-181; doi: 10.1109/OJCS.2020.3012757.

S. Li, I. B. Araujo, W. Ren, Z. Wang, E. K. Tokuda, R. H. Junior, R. Cesar-Junior, J. Zhang, X. Guo, and X. Cao, “Single image deraining: A comprehensive benchmark analysis,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3838–3847.

A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The KITTI dataset,” The International Journal of Robotics Research, 2013.

F. Yu et al., “BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling,” 2018, pp. 1–16.

B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, “Benchmarking single-image dehazing and beyond,” IEEE Transactions on Image Processing, 2018, vol. 28, no. 1, pp. 492–505.

labelImg. [Online]. Available: https://github.com/tzutalin/labelImg.

Published
2020-10-12
How to Cite
Quoc-Viet Hoang, Trung-Hieu Le, & Minh-Quy Nguyen. (2020). RD: A NOVEL DATASET FOR OBJECT DETECTION IN RAINY WEATHER CONDITIONS. UTEHY Journal of Science and Technology, 27, 86-90. Retrieved from http://tapchi.utehy.edu.vn/index.php/jst/article/view/394