Computational Visual Media  2020, Vol. 6 Issue (4): 401-415    doi: 10.1007/s41095-020-0190-8
Research Article     
Fluid-inspired field representation for risk assessment in road scenes
Xuanpeng Li1, Lifeng Zhu1,(✉), Qifan Xue1, Dong Wang1, Yongjie Jessica Zhang2
1 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
2 Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA

Abstract  

Predicting the likely evolution of traffic scenes is a challenging task because of high uncertainties from sensing technology and the dynamic environment; these uncertainties can cause motion planning to fail for intelligent agents such as autonomous vehicles. In this paper, we propose a fluid-inspired model to estimate collision risk in road scenes. Multi-object states are detected and tracked, and a stable fluid model is then adopted to construct the risk field. Objects' state spaces serve as the boundary conditions in the simulation of the advection and diffusion processes. We have evaluated our approach on the public KITTI dataset; our model can provide predictions even in cases of misdetection and tracking error caused by occlusion. This makes it a promising approach for collision risk assessment in road scenes.
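The core numerical machinery described in the abstract, advection and diffusion of a risk field with object states imposed as boundary conditions, follows the stable-fluids scheme of Stam [40]. The sketch below is not the paper's actual implementation; the grid setup, coefficients, and function names are illustrative assumptions showing one simulation step on a 2D grid, with a semi-Lagrangian advection pass and an explicit diffusion pass.

```python
import numpy as np

def advect(field, vx, vy, dt):
    """Semi-Lagrangian advection (Stam-style): trace each cell backwards
    along the velocity field and bilinearly sample the value found there."""
    n, m = field.shape
    ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    # back-traced sample positions, clamped to the grid
    xb = np.clip(xs - dt * vx, 0, m - 1)
    yb = np.clip(ys - dt * vy, 0, n - 1)
    x0, y0 = np.floor(xb).astype(int), np.floor(yb).astype(int)
    x1, y1 = np.minimum(x0 + 1, m - 1), np.minimum(y0 + 1, n - 1)
    fx, fy = xb - x0, yb - y0
    # bilinear interpolation of the four surrounding cells
    top = field[y0, x0] * (1 - fx) + field[y0, x1] * fx
    bot = field[y1, x0] * (1 - fx) + field[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def diffuse(field, nu, dt):
    """Explicit isotropic diffusion via the 5-point Laplacian."""
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
           + np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    return field + nu * dt * lap

def step(field, vx, vy, source_mask, nu=0.1, dt=1.0):
    """One risk-field update: a tracked object's footprint acts as the
    boundary condition (a risk source), then risk is carried downstream
    by the object's velocity and smeared out by diffusion."""
    field = np.where(source_mask, 1.0, field)  # enforce boundary condition
    field = advect(field, vx, vy, dt)
    return diffuse(field, nu, dt)
```

With a single object moving in the +x direction, one `step` call carries risk ahead of the object's cell while diffusion spreads a small amount to its neighbours, which is the qualitative behaviour the paper exploits to keep a risk estimate alive when tracking briefly fails.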



Key words: fluid-inspired risk field; multi-object tracking; road scenes
Received: 01 June 2020      Published: 30 November 2020
Fund: National Natural Science Foundation of China (Grant No. 61906038); Fundamental Research Funds for the Central Universities (Grant No. 2242019K40039)
Corresponding Authors: Lifeng Zhu     E-mail: li_xuanpeng@seu.edu.cn; lfzhulf@gmail.com; xue_qifan@seu.edu.cn; kingeast16@seu.edu.cn; jessicaz@andrew.cmu.edu
About the authors:

Xuanpeng Li received his B.S. and M.S. degrees in instrument science and technology from Southeast University, China, in 2007 and 2010, and his Ph.D. degree in information technology from the Université de Technologie de Compiègne, France, in 2014. From 2014 to 2015, he was a postdoctoral researcher at LIVIC, IFSTTAR, and VEDECOM in France. Since 2015, he has been an assistant professor with the School of Instrument Science and Engineering. His research interests include causal perception, scene understanding, driving behavior analysis, and risk estimation for intelligent transportation systems.

Lifeng Zhu received his doctoral degree in computer science from Peking University in 2012. From 2012 to 2015, he was a postdoctoral researcher at the Universities of Tokyo and Pennsylvania. Since 2018, he has been an associate professor in the Department of Instrument Science and Technology at Southeast University. His research topics are visual computing and human-computer interaction, in particular shape modeling, simulation, and visualization methods for medical science and intelligent transportation systems.

Qifan Xue is currently a Ph.D. student at Southeast University, China. He received his B.S. degree from Southeast University in 2017. His research interests include causal perception and scene understanding in traffic scenes.

Dong Wang received his B.S. degree from the School of Instrument Science and Engineering, Southeast University, in 2011, and his Ph.D. degree from Southeast University in 2016. He is currently an assistant professor with the School of Instrument Science and Engineering, Southeast University. His research interests include intelligent transportation, sensor technology, and signal processing.

Yongjie Jessica Zhang is the George Tallman Ladd and Florence Barrett Ladd Professor of Mechanical Engineering at Carnegie Mellon University, with a courtesy appointment in the Department of Biomedical Engineering. She received her B.Eng. degree in automotive engineering and her M.Eng. degree in engineering mechanics from Tsinghua University, China, her M.Eng. degree in aerospace engineering and engineering mechanics, and her Ph.D. degree in computational engineering and sciences from the Institute for Computational Engineering and Sciences, The University of Texas at Austin. Her research interests include computational geometry, mesh generation, computer graphics, visualization, finite element methods, isogeometric analysis, and their application to biomedicine and engineering.
Cite this article:

Xuanpeng Li, Lifeng Zhu, Qifan Xue, Dong Wang, Yongjie Jessica Zhang. Fluid-inspired field representation for risk assessment in road scenes. Computational Visual Media, 2020, 6(4): 401-415.

URL:

http://cvm.tsinghuajournals.com/10.1007/s41095-020-0190-8     OR     http://cvm.tsinghuajournals.com/Y2020/V6/I4/401

Fig. 1 Our system has two main parts: 3D object detection and tracking, and fluid-inspired risk assessment.
Fig. 2 Advection and diffusion: (a) only advection; (b) only diffusion; (c) advection and diffusion.
Fig. 3 Case study in which the 11th vehicle disappears (b) and reappears (c). Above: distance field representation. Below: fluid field representation.
Fig. 4 Diffusion: (a) isotropic, (b) anisotropic.
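The contrast in Fig. 4 between isotropic and anisotropic diffusion can be sketched as a direction-dependent diffusion coefficient: a larger value along the object's travel direction than across it, so risk spreads further ahead of and behind a vehicle than to its sides. The snippet below is a hedged illustration, not the paper's formulation; the function name, coefficients, and the assumption that the grid axes align with the object heading are all illustrative.

```python
import numpy as np

def diffuse_anisotropic(field, nu_long, nu_lat, dt):
    """Anisotropic diffusion on a 2D grid: coefficient nu_long acts along
    the x axis (assumed travel direction), nu_lat across it, so risk
    elongates along the heading when nu_long > nu_lat."""
    # 1D second differences along each axis (periodic via np.roll)
    lap_x = np.roll(field, 1, 1) + np.roll(field, -1, 1) - 2 * field
    lap_y = np.roll(field, 1, 0) + np.roll(field, -1, 0) - 2 * field
    return field + dt * (nu_long * lap_x + nu_lat * lap_y)
```

Setting `nu_long == nu_lat` recovers the isotropic case of Fig. 4(a); unequal coefficients produce the elongated footprint of Fig. 4(b).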
Fig. 5 Risk representation. Left: raw LiDAR point cloud corresponding to the field of view of the frontal camera, annotated with 3D object bounding boxes. Right: risk map; the ego-vehicle (magenta) is located at the coordinate origin, with two observed vehicles (black). Red regions have high collision risk; blue have no collision risk.
Fig. 6 Risk assessment on the expressway from KITTI tracking test set 0006: (a) LiDAR point cloud annotated with 3D bounding boxes; (b) 3D object tracking results projected onto the synchronized image sequence; (c) risk representation based on the fluid field.
Fig. 7 Risk assessment when the ego-vehicle is stopped at an intersection, from KITTI tracking test set 0010: (a) LiDAR point cloud annotated with 3D bounding boxes; (b) 3D object tracking results projected onto the synchronized image sequence; (c) risk representation based on the fluid field.
Fig. 8 Risk assessment when the ego-vehicle turns at an intersection, from KITTI tracking test set 0014: (a) LiDAR point cloud annotated with 3D bounding boxes; (b) 3D object tracking results projected onto the synchronized image sequence; (c) risk representation based on the fluid field.
Fig. 9 Risk assessment at a roundabout from KITTI tracking test set 0008: (a) LiDAR point cloud annotated with 3D bounding boxes; (b) 3D object tracking results projected onto the synchronized image sequence; (c) risk representation based on the fluid field.
Fig. 10 Risk assessment when the ego-vehicle approaches other vehicles, from KITTI tracking test set 0013: (a) LiDAR point cloud annotated with 3D bounding boxes; (b) 3D object tracking results projected onto the synchronized image sequence; (c) risk representation based on the fluid field.
Fig. 11 The fluid model provides a field-based representation of an object for which tracking fails. Red circles indicate risk map changes when the vehicle becomes lost due to occlusion.
Fig. 12 Comparison of the fluid-inspired and POM risk maps.
Fig. 13 The predictive occupancy map represents the risk of moving vehicles around the ego-vehicle by calculating the ATTO.
[1]   Laugier, C.; Paromtchik, I. E.; Perrollaz, M.; Yong, M.; Yoder, J.; Tay, C.; Mekhnacha, K.; Nègre, A. Probabilistic analysis of dynamic scenes and collision risks assessment to improve driving safety. IEEE Intelligent Transportation Systems Magazine Vol. 3, No. 4, 4-19, 2011.
[2]   Lefèvre, S.; Vasquez, D.; Laugier, C. A survey on motion prediction and risk assessment for intelligent vehicles. ROBOMECH Journal Vol. 1, No. 1, 1-14, 2014.
[3]   Lee, M.; Sunwoo, M.; Jo, K. Collision risk assessment of occluded vehicle based on the motion predictions using the precise road map. Robotics and Autonomous Systems Vol. 106, 179-191, 2018.
[4]   Coué, C.; Pradalier, C.; Laugier, C.; Fraichard, T.; Bessière, P. Bayesian occupancy filtering for multitarget tracking: An automotive application. The International Journal of Robotics Research Vol. 25, No. 1, 19-30, 2006.
[5]   Nguyen, T. N.; Michaelis, B.; Al-Hamadi, A.; Tornow, M.; Meinecke, M. M. Stereo-camera-based urban environment perception using occupancy grid and object tracking. IEEE Transactions on Intelligent Transportation Systems Vol. 13, No. 1, 154-165, 2012.
[6]   Lee, K.; Kum, D. Collision avoidance/mitigation system: Motion planning of autonomous vehicle via predictive occupancy map. IEEE Access Vol. 7, 52846-52857, 2019.
[7]   Hamrick, J.; Battaglia, P.; Tenenbaum, J. B. Internal physics models guide probabilistic judgments about object dynamics. In: Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Vol. 2, 2011.
[8]   Yang, Z. S.; Yu, Y.; Yu, D. X.; Zhou, H. X.; Mo, X. L. APF-based car following behavior considering lateral distance. Advances in Mechanical Engineering Vol. 5, 207104, 2013.
[9]   Wang, J. Q.; Wu, J.; Li, Y. The driving safety field based on driver-vehicle-road interactions. IEEE Transactions on Intelligent Transportation Systems Vol. 16, No. 4, 2203-2214, 2015.
[10]   Villegas, R.; Yang, J.; Zou, Y.; Sohn, S.; Lin, X.; Lee, H. Learning to generate long-term future via hierarchical prediction. In: Proceedings of the International Conference on Machine Learning, 3560-3569, 2017.
[11]   Li, J.; Ma, H.; Zhan, W.; Tomizuka, M. Generic probabilistic interactive situation recognition and prediction: From virtual to real. In: Proceedings of the IEEE International Conference on Intelligent Transportation Systems, 3218-3224, 2018.
[12]   Chorin, A. J.; Marsden, J. E. A Mathematical Introduction to Fluid Mechanics. New York: Springer, 1990.
[13]   Simon, M.; Milz, S.; Amende, K.; Gross, H. M. Complex-YOLO: An Euler-region-proposal for real-time 3D object detection on point clouds. In: Computer Vision - ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11129. Leal-Taixé, L.; Roth, S. Eds. Springer Cham, 197-209, 2019.
[14]   Beltran, J.; Guindel, C.; Moreno, F. M.; Cruzado, D.; Garcia, F.; de La Escalera, A. BirdNet: A 3D object detection framework from LiDAR information. In: Proceedings of the 21st International Conference on Intelligent Transportation Systems, 3517-3523, 2018.
[15]   Li, B. 3D fully convolutional network for vehicle detection in point cloud. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 1513-1518, 2017.
[16]   Engelcke, M.; Rao, D.; Wang, D. Z.; Tong, C. H.; Posner, I. Vote3Deep: Fast object detection in 3D point clouds using efficient convolutional neural networks. In: Proceedings of the IEEE International Conference on Robotics and Automation, 1355-1361, 2017.
[17]   Zhou, Y.; Tuzel, O. VoxelNet: End-to-end learning for point cloud based 3D object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4490-4499, 2018.
[18]   Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3D object detection network for autonomous driving. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6526-6534, 2017.
[19]   Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S. L. Joint 3D proposal generation and object detection from view aggregation. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 1-8, 2018.
[20]   Butt, A. A.; Collins, R. T. Multi-target tracking by Lagrangian relaxation to min-cost network flow. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1846-1853, 2013.
[21]   Kuo, C.; Huang, C.; Nevatia, R. Multi-target tracking by on-line learned discriminative appearance models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 685-692, 2010.
[22]   Bae, S.-H.; Yoon, K.-J. Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1218-1225, 2014.
[23]   Yoon, J. H.; Lee, C.; Yang, M.; Yoon, K. Online multi-object tracking via structural constraint event aggregation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1392-1400, 2016.
[24]   Leal-Taixé, L.; Canton-Ferrer, C.; Schindler, K. Learning by tracking: Siamese CNN for robust target association. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 33-40, 2016.
[25]   Tang, S. Y.; Andres, B.; Andriluka, M.; Schiele, B. Multi-person tracking by multicut and deep matching. In: Computer Vision - ECCV 2016 Workshops. Lecture Notes in Computer Science, Vol. 9914. Hua, G.; Jégou, H. Eds. Springer Cham, 100-111, 2016.
[26]   Park, S.; Lee, K.; Yoon, K. Robust online multiple object tracking based on the confidence-based relative motion network and correlation filter. In: Proceedings of the IEEE International Conference on Image Processing, 3484-3488, 2016.
[27]   Xiang, Y.; Alahi, A.; Savarese, S. Learning to track: Online multi-object tracking by decision making. In: Proceedings of the IEEE International Conference on Computer Vision, 4705-4713, 2015.
[28]   Dueholm, J. V.; Kristoffersen, M. S.; Satzoda, R. K.; Moeslund, T. B.; Trivedi, M. M. Trajectories and maneuvers of surrounding vehicles with panoramic camera arrays. IEEE Transactions on Intelligent Vehicles Vol. 1, No. 2, 203-214, 2016.
[29]   Xie, G. T.; Gao, H. B.; Qian, L. J.; Huang, B.; Li, K. Q.; Wang, J. Q. Vehicle trajectory prediction by integrating physics- and maneuver-based approaches using interactive multiple models. IEEE Transactions on Industrial Electronics Vol. 65, No. 7, 5999-6008, 2018.
[30]   Deo, N.; Trivedi, M. M. Multi-modal trajectory prediction of surrounding vehicles with maneuver based LSTMs. In: Proceedings of the IEEE Intelligent Vehicles Symposium, 1179-1184, 2018.
[31]   Schulz, J.; Hubmann, C.; Löchner, J.; Burschka, D. Interaction-aware probabilistic behavior prediction in urban environments. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 3999-4006, 2018.
[32]   Li, J.; Ma, H.; Tomizuka, M. Interaction-aware multi-agent tracking and probabilistic behavior prediction via adversarial learning. In: Proceedings of the IEEE International Conference on Robotics and Automation, 6658-6664, 2019.
[33]   Reichardt, D.; Shick, J. Collision avoidance in dynamic environments applied to autonomous vehicle guidance on the motorway. In: Proceedings of the IEEE Intelligent Vehicles Symposium, 74-78, 1994.
[34]   Wolf, M. T.; Burdick, J. W. Artificial potential functions for highway driving with collision avoidance. In: Proceedings of the IEEE International Conference on Robotics and Automation, 3731-3736, 2008.
[35]   Kim, K.; Kim, B.; Lee, K.; Ko, B.; Yi, K. Design of integrated risk management-based dynamic driving control of automated vehicles. IEEE Intelligent Transportation Systems Magazine Vol. 9, No. 1, 57-73, 2017.
[36]   Wang, J. Q.; Wu, J.; Zheng, X. J.; Ni, D. H.; Li, K. Q. Driving safety field theory modeling and its application in pre-collision warning system. Transportation Research Part C: Emerging Technologies Vol. 72, 306-324, 2016.
[37]   Zhu, L.; Li, X.; Lu, W.; Zhang, Y. J. A field-based representation of surrounding vehicle motion from a monocular camera. In: Proceedings of the IEEE Intelligent Vehicles Symposium, 1761-1766, 2018.
[38]   Shi, S.; Wang, X.; Li, H. PointRCNN: 3D object proposal generation and detection from point cloud. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-779, 2019.
[39]   Kuhn, H. W. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly Vol. 2, Nos. 1-2, 83-97, 1955.
[40]   Stam, J. Stable fluids. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 121-128, 1999.