Lane Detection Method Based on Semantic Segmentation and Road Structure
-
Abstract: The accurate detection of lane markings plays a crucial role in the performance of intelligent assisted driving and lane departure warning systems. Current traditional methods generally lack adaptability to complex road environments, and their detection accuracy needs improvement. To address lane marking detection in complex traffic environments, a detection method based on semantic segmentation and road structure is proposed, taking full account of the semantic information of complex road structures. The algorithm adopts an Encoder-Decoder network architecture, improved to perform semantic segmentation: it uses the indices recorded by the max-pooling layers to upsample by unpooling, and connects multiple convolutional layers after each upsampling step. The segmentation network is trained with the standard cross-entropy loss function, and the deep learning model yields road segmentation images free of external environmental interference. A perspective transformation is then applied to the segmented road images, and a Hough transform with parameter-space voting of edge points is used to quickly extract and correct the left and right edge points of the lane markings. The extracted edge points are fitted with Bezier curves to render the lane markings smoothly. The proposed algorithm was trained and tested on relevant lane marking datasets. Compared with the parameter-space voting method, accuracy improves by 5.1% while average processing time increases by 8 ms; compared with the convolutional neural network (CNN) method, accuracy decreases by 1.75% while average processing time decreases by 6.2 ms. The test results show that the proposed semantic segmentation encoder-decoder network helps optimize the model structure and reduces the demand on computing hardware resources while meeting real-time detection requirements.
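The Hough-transform voting and Bezier-smoothing steps summarized above can be sketched in a few lines of numpy. This is a minimal illustration of the two generic techniques, not the paper's implementation; the function names and the toy set of collinear edge points are assumptions for demonstration only.

```python
import numpy as np

def hough_votes(edge_points, thetas, rho_max):
    """Parameter-space voting: each edge point (x, y) casts one vote for
    every line rho = x*cos(theta) + y*sin(theta) that passes through it.
    The peak of the accumulator marks the dominant lane edge line."""
    acc = np.zeros((2 * rho_max + 1, len(thetas)), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + rho_max  # shift to non-negative index
        acc[rhos, np.arange(len(thetas))] += 1
    return acc

def bezier_quadratic(p0, p1, p2, n=50):
    """Sample a quadratic Bezier curve: p0 and p2 are the first and last
    extracted edge points, p1 a middle control point for smoothing."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# Toy example: 50 collinear edge points along the diagonal y = x.
pts = [(i, i) for i in range(50)]
acc = hough_votes(pts, np.deg2rad(np.arange(180)), rho_max=80)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
# theta_idx == 135 and rho_idx - 80 == 0: normal at 135 deg, rho = 0, i.e. y = x.
```

In the method described above, this voting runs on the perspective-transformed segmentation output rather than on a raw image, which is what keeps the vote space small and fast to search.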
-
Key words:
- intelligent traffic
- lane detection
- semantic segmentation
- road structure
- parameter space voting
-
Table 1. Specific parameter settings of the semantic segmentation network structure

| Layer | Kernel | Stride | Output |
| --- | --- | --- | --- |
| conv1_1 | 3×3 | 1 | 512×256×64 |
| conv1_2 | 3×3 | 1 | 512×256×64 |
| Pool1, max, indices | 2×2 | 2 | 256×128×64 |
| conv2_1 | 3×3 | 1 | 256×128×128 |
| conv2_2 | 3×3 | 1 | 256×128×128 |
| Pool2, max, indices | 2×2 | 2 | 128×64×128 |
| conv3_1 | 3×3 | 1 | 128×64×256 |
| conv3_2 | 3×3 | 1 | 128×64×256 |
| conv3_3 | 3×3 | 1 | 128×64×256 |
| Pool3, max, indices | 2×2 | 2 | 64×32×256 |
| conv4_1 | 3×3 | 1 | 64×32×512 |
| conv4_2 | 3×3 | 1 | 64×32×512 |
| conv4_3 | 3×3 | 1 | 64×32×512 |
| Pool4, max, indices | 2×2 | 2 | 32×16×512 |
| Dilated conv5_1 | 3×3 | 1 | 32×16×512 |
| Dilated conv5_2 | 3×3 | 1 | 32×16×512 |
| Dilated conv5_3 | 3×3 | 1 | 32×16×512 |
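The "Pool, max, indices" rows in Table 1 record the argmax position of each pooling window so that the decoder can later upsample by unpooling. A minimal single-channel numpy sketch of that mechanism follows; it illustrates the idea only, not the actual network, which operates on the multi-channel feature tensors listed above.

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """2x2 max pooling that also records the flat index of each maximum,
    mirroring the 'Pool, max, indices' layers of Table 1."""
    h, w = x.shape
    out = np.zeros((h // size, w // size), dtype=x.dtype)
    idx = np.zeros((h // size, w // size), dtype=np.int64)  # flat index into x
    for i in range(h // size):
        for j in range(w // size):
            win = x[i * size:(i + 1) * size, j * size:(j + 1) * size]
            k = np.argmax(win)                 # position of max within the window
            out[i, j] = win.flat[k]
            di, dj = divmod(k, size)
            idx[i, j] = (i * size + di) * w + (j * size + dj)
    return out, idx

def max_unpool(pooled, idx, shape):
    """Decoder-side unpooling: scatter each pooled value back to its
    recorded position; all other positions remain zero."""
    out = np.zeros(shape, dtype=pooled.dtype)
    out.flat[idx.ravel()] = pooled.ravel()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
pooled, idx = max_pool_with_indices(x)       # pooled == [[5, 7], [13, 15]]
restored = max_unpool(pooled, idx, x.shape)  # maxima return to their original spots
```

Reusing the stored indices instead of learned deconvolution filters is what lets the decoder stay light: only the convolutions after each unpooling step carry trainable weights.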
Table 2. Lane recognition statistics
| Video sequence | Total frames | Recognition rate/% | Average time/ms |
| --- | --- | --- | --- |
| Daytime 1 | 1 653 | 93.8 | 30.5 |
| Daytime 2 | 1 784 | 94.1 | 31.2 |
| Daytime 3 | 1 452 | 93.6 | 31.8 |
| Daytime 4 | 1 645 | 94.3 | 30.9 |
| Daytime 5 | 1 542 | 93.2 | 30.4 |
| Daytime 6 | 1 265 | 92.9 | 31.3 |
| Rainy | 793 | 87.2 | 31.6 |
| Night | 1 203 | 85.9 | 31.9 |
Table 3. Comparison results of algorithms