Lane Line Detection: Current Research Status Abroad

For lane detection, most existing algorithms are based on hand-crafted, low-level features [1][2][3] (Aly 2008; Son et al. 2015; Jung, Youn, and Sull 2016), which limits their ability to cope with harsh conditions. Only Huval et al. [4] (2015) made an early attempt to apply deep learning to lane detection, but without a large, general dataset. For semantic segmentation, by contrast, CNN-based methods have become mainstream and achieved great success [5][6] (Long, Shelhamer, and Darrell 2015; Chen et al. 2017). There have also been attempts to exploit spatial information inside neural networks. Visin et al. [7] (2015) and Bell et al. [8] (2016) use recurrent neural networks to pass information along each row or column, so within a single RNN layer each pixel position can only receive information from its own row or column. Liang et al. (2016a; 2016b) [9][10] propose LSTM variants to exploit contextual information in semantic object parsing, but such models are computationally expensive. Researchers have also tried to combine CNNs with graphical models such as MRFs or CRFs, in which message passing is realized by convolution with large kernels (Liu et al. 2015; Tompson et al. 2014; Chu et al. 2016) [11][12][13]. In SCNN, (1) the sequential message-passing scheme is computationally far more efficient than a traditional dense MRF/CRF, (2) messages are propagated as residuals, which makes SCNN easy to train, and (3) SCNN is flexible and can be applied to any layer of a deep neural network.

Lane detection remains fertile ground for machine-vision research, and many methods have been proposed for the task; variants of the Hough transform are still among the most commonly used. In these methods, the input image is first preprocessed with a Canny edge detector [14] or steerable filters [15] to find edges, followed by thresholding. The classical Hough transform is then used to find straight lines in the binary image, which usually correspond to lane boundaries. The randomized Hough transform [16], a faster and more memory-efficient counterpart of the classical version, has also been applied to lane detection [17], [18]. The classical Hough transform for line finding works well when the road is mostly straight; for curved roads, however, splines [19] and hyperbola fitting [20] are often used instead. Piecewise linear fitting, in which the Hough transform is applied to sections of the image to approximate a curve, shows some improvement and handles many of the problems caused by shadows and road patterns [21], [22]. In addition, edge orientation has been incorporated to suppress some spurious responses [23], [24]. Unfortunately, with the Hough transform it is usually difficult to determine whether a detected line corresponds to an artifact or to a lane boundary. In color-segmentation approaches, the RGB image is often converted to YCbCr, HSI, or a custom color space. In these alternative color spaces the luminance and chrominance components of a pixel are modeled separately, so the influence of shadows and dynamic illumination on the color components is greatly reduced; the conversion therefore usually enhances the detection of colored objects such as yellow lane markings [32], [25]. However, because these methods operate at the pixel level, they are typically sensitive to changes in the color of ambient light from street lamps or similar illumination sources. The use of histograms to segment lane markings has been demonstrated in [26]; however, the histogram computation requires that part of a lane marking be present in the horizontal band being analyzed. Stereo vision and 3D information have also been used for lane detection. In [27] and [28], a stereo camera provides two vantage points of the road surface, with the aim of improving on single-camera approaches: lane markings are detected in each view and the results are then merged using epipolar geometry and camera-calibration information. In [27] and [28], the vehicle is assumed to always be centered in the lane, and a fixed search region is used to locate the lane markings. Learning-based methods such as artificial neural networks [33] and support vector machines [29] have also been used for lane detection, but they may not perform well on road conditions that were not encountered during training [30]. Finally, the comprehensive literature review in [31] summarizes most of the prominent lane-detection techniques of its time.
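To make the classical pipeline above concrete, the following is a minimal sketch in Python with OpenCV: Canny edge detection, a region-of-interest mask, and a probabilistic Hough transform to recover candidate lane-boundary segments. It illustrates the generic recipe rather than any specific cited system; the input file name, the trapezoidal mask, and all thresholds are illustrative assumptions.

```python
# Minimal sketch of the classical pipeline: edge detection, thresholding,
# and a Hough transform to recover candidate lane boundaries.
# Parameter values and the file name are illustrative assumptions only.
import cv2
import numpy as np

def detect_lane_lines(bgr_image):
    """Return candidate lane-line segments as (x1, y1, x2, y2) tuples."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Canny edge detection [14]; thresholds chosen by hand for illustration.
    edges = cv2.Canny(blurred, 50, 150)

    # Restrict the search to a trapezoidal region in front of the vehicle,
    # a common heuristic (not part of the Hough transform itself).
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                     (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: straight-line segments in the binary
    # edge map often correspond to lane boundaries on mostly straight roads.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    frame = cv2.imread("road_frame.jpg")  # hypothetical input image
    for (x1, y1, x2, y2) in detect_lane_lines(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imwrite("lanes.jpg", frame)
```

As the surveyed works note, such a detector behaves reasonably on mostly straight roads but offers no principled way to tell a true lane boundary from an artifact, which is what motivates the spline, hyperbola, and learning-based extensions cited above.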
Although lane and road-marking detection may appear to be a simple problem, an algorithm must be accurate across a wide range of environments while remaining computationally fast. Lane-detection methods based on hand-crafted features [34-40] detect markings of generic shapes and try to fit lines or splines to localize the lanes. This family of algorithms performs well in certain scenarios but degrades under unfamiliar conditions. For road-marking detection, most works are likewise based on hand-crafted features. Wu and Ranganathan [41] extract multiple regions of interest as maximally stable extremal regions (MSER) [42] and rely on FAST [43] and HOG [44] features to build a template for each road marking. Similarly, Greenhalgh and Mirmehdi [45] use HOG features and train a support vector machine to generate class labels. As in the lane-detection case, however, these methods show degraded performance under unfamiliar conditions.

Recently, deep-learning methods have achieved great success in computer vision, including lane detection. CNN-based lane-detection algorithms are proposed in [47], [46]. Li et al. [48] use both a CNN and a recurrent neural network (RNN) to detect lane boundaries: the CNN provides geometric information about the lane structure, and the RNN uses this information to detect the lanes. He et al. [49] propose a dual-view convolutional neural network (DVCNN) framework for lane detection, in which front-view and top-view images are fed to the network as inputs. As in the lane-detection literature, some works study neural networks as feature extractors and classifiers to improve road-marking detection and recognition. Bailo et al. [51] propose a method that extracts multiple regions of interest as MSERs [50], merges regions that are likely to belong to the same class, and finally classifies the region proposals using PCANet [52] and a neural network.
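The hand-crafted road-marking pipelines discussed above share a common skeleton: extract region proposals with MSER, describe them with features such as HOG, and classify the proposals. The sketch below (Python, OpenCV) illustrates that skeleton only; it is not a reimplementation of [41], [45], or [51], PCANet [52] is replaced here by a plain linear SVM for brevity, and the window size and label set are assumptions.

```python
# Sketch of an MSER + HOG + SVM road-marking pipeline, in the spirit of
# the hand-crafted methods cited above. Sizes and labels are assumptions.
import cv2
import numpy as np

# HOG descriptor over a fixed 32x32 window (winSize, blockSize, blockStride,
# cellSize, nbins) -- an illustrative configuration, not taken from [44].
HOG = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

def extract_proposals(gray):
    """Extract candidate road-marking regions as bounding boxes via MSER."""
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)
    return bboxes  # each row is (x, y, w, h)

def describe(gray, bbox):
    """Resize a proposal to the HOG window size and compute its descriptor."""
    x, y, w, h = bbox
    patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
    return HOG.compute(patch).flatten()

def train_classifier(descriptors, labels):
    """Train a linear SVM on labelled proposals (hypothetical class ids)."""
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(np.float32(descriptors), cv2.ml.ROW_SAMPLE,
              np.int32(labels).reshape(-1, 1))
    return svm

def classify(svm, gray):
    """Label every MSER proposal in a grayscale image with the trained SVM."""
    feats = [describe(gray, b) for b in extract_proposals(gray)]
    if not feats:
        return np.empty((0, 1), np.float32)
    _, predictions = svm.predict(np.float32(feats))
    return predictions
```

A deep-learning variant keeps the same proposal stage but replaces the HOG + SVM classifier with a learned one, which is broadly the direction taken by Bailo et al. [51] with PCANet and a neural network.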
References
[1] Aly, M. 2008. Real time detection of lane markers in urban streets. In Intelligent Vehicles Symposium, 2008 IEEE, 7–12. IEEE.
[2] Son, J.; Yoo, H.; Kim, S.; and Sohn, K. 2015. Real-time illumination invariant lane detection for lane departure warning system. Expert Systems with Applications 42(4):1816–1824.
[3] Jung, S.; Youn, J.; and Sull, S. 2016. Efficient lane detection based on spatiotemporal images. IEEE Transactions on Intelligent Transportation Systems 17(1):289–295.
[4] Huval, B.; Wang, T.; Tandon, S.; Kiske, J.; Song, W.; Pazhayampallil, J.; Andriluka, M.; Rajpurkar, P.; Migimatsu, T.; Cheng-Yue, R.; et al. 2015. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716.
[5] Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In CVPR.
[6] Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2017. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. TPAMI.
[7] Visin, F.; Kastner, K.; Cho, K.; Matteucci, M.; Courville, A.; and Bengio, Y. 2015. ReNet: A recurrent neural network based alternative to convolutional networks. arXiv preprint arXiv:1505.00393.
[8] Bell, S.; Lawrence Zitnick, C.; Bala, K.; and Girshick, R. 2016. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In CVPR.
[9] Liang, X.; Shen, X.; Feng, J.; Lin, L.; and Yan, S. 2016a. Semantic object parsing with graph LSTM. In ECCV.
[10] Liang, X.; Shen, X.; Xiang, D.; Feng, J.; Lin, L.; and Yan, S. 2016b. Semantic object parsing with local-global long short-term memory. In CVPR.
[11] Liu, Z.; Li, X.; Luo, P.; Loy, C.-C.; and Tang, X. 2015. Semantic image segmentation via deep parsing network. In ICCV.
[12] Tompson, J. J.; Jain, A.; LeCun, Y.; and Bregler, C. 2014. Joint training of a convolutional network and a graphical model for human pose estimation. In NIPS.
[13] Chu, X.; Ouyang, W.; Wang, X.; et al. 2016. CRF-CNN: Modeling structured information in human pose estimation. In NIPS.
[14] J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986.
[15] J. McCall, D. Wipf, M. Trivedi, and B. Rao, “Lane change intent analysis using robust operators and sparse Bayesian learning,” IEEE Trans. Intell. Transp. Syst., vol. 8, no. 3, pp. 431–440, Sep. 2007.
[16] L. Xu, E. Oja, and P. Kultanen, “A new curve detection method: Randomized Hough transform (RHT),” Pattern Recognit. Lett., vol. 11, no. 5, pp. 331–338, 1990.
[17] A. T. Saudi, J. Hijazi, and J. Sulaiman, “Fast lane detection with randomized Hough transform,” in Proc. Symp. Inf. Technol., 2008, vol. 4, pp. 1–5.
[18] J. Wang, Y. Wu, Z. Liang, and Y. Xi, “Lane detection based on random Hough transform on region of interesting,” in Proc. IEEE Conf. Inform. Autom., 2010, pp. 1735–1740.
[19] M. Aly, “Real time detection of lane markers in urban streets,” in Proc. IEEE Intell. Vehicles Symp., 2008, pp. 7–12.
[20] O. Khalifa, A. Assidiq, and A. Hashim, “Vision-based lane detection for autonomous artificial intelligent vehicles,” in Proc. IEEE Int. Conf. Semantic Comput., 2009, pp. 636–641.
[21] T. Taoka, M. Manabe, and M. Fukui, “An efficient curvature lane recognition algorithm by piecewise linear approach,” in Proc. IEEE Veh. Technol. Conf., 2007, pp. 2530–2534.
[22] Q. Truong, B. Lee, N. Heo, Y. Yum, and J. Kim, “Lane boundaries detection algorithm using vector lane concept,” in Proc. Conf. Control, Autom., Robot. Vis., 2008, pp. 2319–2325.
[23] J. W. Lee and J. Cho, “Effective lane detection and tracking method using statistical modeling of color and lane edge-orientation,” in Proc. Conf. Comput. Sci. Convergence Inf. Technol., Nov. 2009, pp. 1586–1591.
[24] J. Gong, A. Wang, Y. Zhai, G. Xiong, P. Zhou, and H. Chen, “High speed lane recognition under complex road conditions,” in Proc. IEEE Int. Vehicles Symp., 2008, pp. 566–570.
[25] H. Cheng, B. Jeng, P. Tseng, and K. Fan, “Lane detection with moving vehicles in the traffic scenes,” IEEE Trans. Intell. Transp. Syst., vol. 7, no. 4, pp. 571–582, Dec. 2006.
[26] J. P. Gonzalez and U. Ozguner, “Lane detection using histogram-based segmentation and decision trees,” in Proc. IEEE Conf. Intell. Transp. Syst., 2000, pp. 346–351.
[27] N. Benmansour, R. Labayrade, D. Aubert, and S. Glaser, “Stereovision-based 3D lane detection system: A model driven approach,” in Proc. IEEE Conf. Intell. Transp. Syst., 2008, pp. 182–188.
[28] S. Nedevschi, R. Schmidt, T. Graf, R. Danescu, D. Frentiu, T. Marita, F. Oniga, and C. Pocol, “3D lane detection system based on stereovision,” in Proc. IEEE Conf. Intell. Transp. Syst., 2004, pp. 161–166.
[29] Z. Kim, “Robust lane detection and tracking in challenging scenarios,” IEEE Trans. Intell. Transp. Syst., vol. 9, no. 1, pp. 16–26, Mar. 2008.
[30] J. C. McCall and M. M. Trivedi, “An integrated, robust approach to lane marking detection and lane tracking,” in Proc. IEEE Int. Vehicles Symp., 2004, pp. 533–537.
[31] J. McCall and M. Trivedi, “Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation,” IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20–37, Mar. 2006.
[32] T. Y. Sun, S. J. Tsai, and V. Chan, “HSI color model based lane-marking detection,” in Proc. IEEE Conf. Intell. Transp. Syst., 2006, pp. 1168–1172.
[33] D. Pomerleau, “ALVINN: An autonomous land vehicle in a neural network,” in Advances in Neural Information Processing Systems. San Mateo, CA: Morgan Kaufmann, 1989.
[34] H. Deusch, J. Wiest, S. Reuter, M. Szczot, M. Konrad, and K. Dietmayer. A random finite set approach to multiple lane detection. In ITSC, 2012.
[35] H. Jung, J. Min, and J. Kim. An efficient lane detection algorithm for lane departure detection. In IV, 2013.
[36] J. Hur, S.-N. Kang, and S.-W. Seo. Multi-lane detection in urban driving environments using conditional random fields. In IV, 2013.
[37] A. Borkar, M. Hayes, and M. T. Smith. A novel lane detection system with efficient ground truth generation. IEEE Transactions on Intelligent Transportation Systems (TITS), 13(1):365–374, 2012.
[38] R. Satzoda and M. Trivedi. Vision-based lane analysis: Exploration of issues and approaches for embedded realization. In CVPR Workshops, 2013.
[39] H. Tan, Y. Zhou, Y. Zhu, D. Yao, and K. Li. A novel curve lane detection based on improved river flow and RANSAC. In ITSC, 2014.
[40] P.-C. Wu, C.-Y. Chang, and C. H. Lin. Lane-mark extraction for automobiles under complex conditions. Pattern Recognition, 47(8):2756–2767, 2014.
[41] T. Wu and A. Ranganathan. A practical system for road marking detection and recognition. In IV, 2012.
[42] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10):761–767, 2004.
[43] D. G. Viswanathan. Features from accelerated segment test (FAST), 2009.
[44] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[45] J. Greenhalgh and M. Mirmehdi. Automatic detection and recognition of symbols and text on the road surface. In ICPRAM, 2015.
[46] J. Kim and M. Lee. Robust lane detection based on convolutional neural network and random sample consensus. In ICONIP, 2014.
[47] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue, et al. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716, 2015.
[48] J. Li, X. Mei, and D. Prokhorov. Deep neural network for structural prediction and lane detection in traffic scene. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), PP(99):1–14, 2016.
[49] B. He, R. Ai, Y. Yan, and X. Lang. Accurate and robust lane detection based on dual-view convolutional neutral network. In IV, 2016.
[50] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10):761–767, 2004.
[51] O. Bailo, S. Lee, F. Rameau, J. S. Yoon, and I. S. Kweon. Robust road marking detection and recognition using density-based grouping and machine learning techniques. In WACV, 2017.
[52] T.-H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma. PCANet: A simple deep learning baseline for image classification? IEEE Transactions on Image Processing (TIP), 24(12):5017–5032, 2015.