In the past, lane line detection performance was assessed by manual visual verification, a method that cannot objectively quantify the performance of a lane detection system. At the same time, because lane detection systems are complex, differences in hardware and algorithms, data collection methods, and acquisition environments (weather, road conditions) all affect test results, and there is consequently no unified method for evaluating lane line detection. This paper introduces a lane line detection system design based on ACP parallel vision theory, which effectively addresses the problem of lane line performance evaluation and testing and achieves accurate and stable lane line detection.
1 Introduction
Studies have found that traffic accidents are mostly caused by human factors, such as driver inattention and errors in judgment and execution [1]. Lane line detection is a vital function in advanced driver assistance systems (ADAS) and underpins features such as lane departure warning and lane keeping [2]-[3]. Detection accuracy and stability are the two key performance indicators of lane line detection technology. A lane detection system should be able to evaluate its own detection results and identify unreasonable detections [4]-[5]. For a conventional car, when an unreasonable lane line detection result is found, the driver should be prompted to pay attention to the current road conditions. For cars with ADAS or autopilot functions, the vehicle itself is responsible for evaluating the detection results and must ensure that a safe driving strategy is executed without driver intervention.
How to improve the reliability of lane detection to cope with complex and changing driving environments is an important current challenge. Data fusion and function fusion can be used to construct an accurate and stable lane detection system. Function fusion refers to the combination of multiple detection functions, such as integrating road detection with vehicle detection. Data fusion uses lidar, GPS, and other devices to compensate for the limitations of cameras, thereby improving the accuracy and stability of lane detection [4]. This article also reviews the evaluation methods of traditional lane line detection systems. Addressing the shortcomings of existing methods in performance evaluation, a parallel lane detection method based on ACP parallel vision theory is proposed. The parallel lane line detection method uses artificial parallel systems to provide massive amounts of data, which effectively compensates for the inadequate training and evaluation caused by insufficient data in traditional lane line detection algorithms.
2 Vision-based lane line algorithm overview
2.1 The basic process of lane detection
Vision-based lane line detection mainly comprises three processes: image preprocessing, lane line detection, and tracking, as shown in Fig. 1. The most common image preprocessing techniques include region-of-interest extraction, vanishing point detection, image graying, noise processing, inverse perspective transformation, image segmentation, and edge detection. Lane line features mainly include color and edge information. After the lane line is identified and modeled, the lane line model parameters can be filtered with a tracking algorithm to improve real-time detection accuracy and stability.
Figure 1. Basic lane detection process
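The three stages above can be sketched in code. The following is a minimal illustration only: the function names, the brightness-based detector, and the moving-average tracker are assumptions chosen for simplicity, not a reference implementation of any cited system.

```python
# Minimal sketch of the three-stage pipeline: preprocessing,
# detection, and tracking. All names and the averaging tracker
# are illustrative assumptions, not a reference implementation.

def preprocess(frame):
    """Grayscale conversion plus a lower region of interest (road area)."""
    gray = [[sum(px) / 3.0 for px in row] for row in frame]
    roi_start = len(gray) // 2          # keep the lower half only
    return gray[roi_start:]

def detect_lane(gray_roi, threshold=200):
    """Feature-based detection: treat bright pixels as lane marks and
    take their mean column per row as a crude lane position."""
    positions = []
    for row in gray_roi:
        cols = [c for c, v in enumerate(row) if v >= threshold]
        if cols:
            positions.append(sum(cols) / len(cols))
    return positions

def track(history, positions, window=3):
    """Smooth the per-frame estimate with a moving average to
    stabilize detection between frames."""
    history.append(sum(positions) / len(positions))
    recent = history[-window:]
    return sum(recent) / len(recent)

# Synthetic 4x6 RGB frame with a bright vertical mark in column 4.
frame = [[(255, 255, 255) if c == 4 else (30, 30, 30)
          for c in range(6)] for _ in range(4)]
roi = preprocess(frame)
lane = detect_lane(roi)
print(track([], lane))  # lane centered at column 4
```

In a real system the tracker would typically be a Kalman or particle filter over the lane model parameters rather than a moving average.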
2.2 Traditional lane detection algorithm
Vision-based lane detection technology can be divided into two categories: feature-based methods [9]-[19] and model-based methods [20]-[29]. Feature-based lane line detection algorithms use features such as the color, texture, and edges of the lane line. In [10], lane line pixel intensity and edge features are used to detect lane lines with an adaptive threshold method. Literature [11] uses the spatial features of lane lines and the Hough transform to detect lane lines. Literature [12] uses particle filters to identify lane lines; it points out that strict lane line models are difficult to apply stably in practice, whereas a particle-filter-based algorithm does not need to model the lane line accurately but only to follow lane line features to obtain good results. Literature [13] transforms RGB color images into YUV format, using lane line edges and widths for detection. Literature [14] enhances the contrast of the lane line color by converting the color image to HSV format, and then completes detection according to pixel intensity. Literature [16] gives a lane detection method based on frequency-domain features. In summary, feature-based lane detection algorithms are direct and simple, and are suitable for clear lane lines. However, they struggle in scenarios where the lane lines are complex or visibility is poor.
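To make the adaptive-threshold idea concrete, here is a small sketch of selecting lane-like pixels with a per-row threshold. The statistic used (row mean plus a multiple of the standard deviation) is an assumption for illustration, not the exact method of the cited work:

```python
# Sketch of feature-based lane-pixel extraction with a per-row
# adaptive intensity threshold. The mean + k*std statistic is an
# illustrative assumption, not the method of any specific paper.
import math

def adaptive_lane_pixels(gray, k=1.0):
    """Return (row, col) pairs whose intensity exceeds the row's
    mean + k * std: lane paint is brighter than the asphalt."""
    hits = []
    for r, row in enumerate(gray):
        mean = sum(row) / len(row)
        var = sum((v - mean) ** 2 for v in row) / len(row)
        thresh = mean + k * math.sqrt(var)
        hits.extend((r, c) for c, v in enumerate(row) if v > thresh)
    return hits

# A dark road with one bright lane column at index 2.
gray = [[40, 45, 250, 42, 44] for _ in range(3)]
print(adaptive_lane_pixels(gray))  # only column 2 survives
```

Because the threshold adapts to each row, the same rule tolerates gradual lighting changes down the image, which a single global threshold would not.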
Model-based lane line detection algorithms typically model the lane line as a straight line, a parabola, or a higher-order curve. In addition, assumptions about the road are usually needed; for example, the road should be flat and continuous. Literature [21] proposed a B-Snake model that can fit lane lines of arbitrary shape. Literature [22] further improved this model into a parallel-snake model. In [23], the lane line model is divided into two parts: the near field is a linear model, and the far-field curve is fitted by a B-spline. [25] proposed an integrated lane detection method based on the Hough transform, RANSAC, and B-splines: lanes are first roughly detected using the Hough transform and then further fitted using RANSAC and B-spline models. Literature [33] gives a method for automatically switching among multiple lane line models based on RANSAC model fitting. Model-based lane line detection is generally more stable and accurate than feature-based methods, and it is also easier to estimate the model parameters with filtering algorithms. However, model-based methods often require more computation to fit the model parameters.
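The RANSAC step used by several of these methods can be sketched for the simplest lane model, a straight line y = ax + b. The sample size, inlier tolerance, and iteration count below are illustrative assumptions:

```python
# Minimal RANSAC sketch for fitting a straight-line lane model
# y = a*x + b to candidate lane pixels containing outliers.
# Thresholds and iteration count are illustrative assumptions.
import random

def ransac_line(points, n_iter=200, tol=1.0, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # skip degenerate samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the consensus set with ordinary least squares.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Points on y = 2x + 1, plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -20)]
a, b = ransac_line(pts)
print(a, b)  # close to 2 and 1 despite the outliers
```

The same consensus-then-refit structure extends to parabolic or spline lane models by swapping the two-point line hypothesis for a higher-order fit on a larger minimal sample.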
2.3 Lane detection algorithms based on machine learning
In recent years, deep learning has been widely used in image recognition, segmentation, and object detection. Literature [36] pointed out that deep convolutional networks can raise the detection accuracy of lane lines to more than 90%. Literature [37] proposed a lane detection method based on a deep convolutional network and a recurrent neural network: the convolutional network determines whether each image contains a lane line and outputs the lane line's position and direction, while the recurrent network identifies the structure of lane lines across video frames and can effectively recognize lane lines occluded by surrounding vehicles or objects. Literature [38] uses convolutional neural networks to process images from two side-view cameras, training an end-to-end lane detection algorithm on real-world and synthetic images. Literature [41] combines front-view and top-view images, processing the two image types with separate convolutional networks; a global strategy then makes predictions based on the physical characteristics of lane lines. In addition, researchers have developed lane line search methods based on evolutionary and heuristic algorithms. Literature [42] proposes a multi-agent detection model with a belief network inspired by driver behavior. Literature [43] proposed a lane line search method optimized with the ant colony algorithm. Literature [44] presented a multi-lane detection method based on a random walk model, in which a directed random walk algorithm based on a Markov probability matrix connects the lane line features. To sum up, lane detection algorithms based on machine learning and intelligent algorithms have shown more powerful performance than traditional methods.
Although learning-based methods place higher demands on the computational performance of the on-board controller, with the continuous upgrading of hardware systems, detection algorithms based on machine learning are likely to become the mainstream lane detection method.
3 Lane line detection integration methods
3.1 Overview of Integration Methods
The stability and adaptability of the lane detection system are the core issues that restrict the practical application of lane line systems. For automotive companies, a single external environment sensor is not enough to provide safe and effective environment perception. Tesla, Mobileye, and Delphi all use sensor fusion (cameras, lidar, millimeter-wave radar, etc.) to improve a vehicle's ability to sense its surroundings. This article reviews traditional lane line integration methods and divides them into algorithm-level integration, system-level integration, and sensor-level integration, as shown in Figure 2.
3.2 Algorithm Integration
Traditional algorithm-level integration mainly has two structures: serial and parallel. Serial integration is more common [20][21][25]; for example, the Hough transform, the RANSAC algorithm, and model fitting are applied in sequence to gradually improve detection accuracy. Many works also place a tracking algorithm in series after the lane detection module to improve accuracy [21][22][45]-[47]. Literature [50][51] gives methods for parallel lane detection. Literature [50] proposes running two relatively independent lane line detection algorithms with different methods and comparing the two results to determine a reasonable lane line position: if the two algorithms give similar results, the current detection is considered reasonable. Although the parallel integration of redundant algorithms improves detection accuracy, it increases the system's computational load and reduces its real-time performance.
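The parallel redundancy idea can be sketched in a few lines. The two stand-in detectors and the agreement threshold below are illustrative assumptions; in practice each detector would be a full independent pipeline:

```python
# Sketch of parallel (redundant) algorithm integration: run two
# independent detectors and accept the result only when they agree.
# Both detectors and the threshold are illustrative stand-ins.

def detector_a(frame):
    return frame["lane_center"] + 0.2    # e.g. an edge-based estimate

def detector_b(frame):
    return frame["lane_center"] - 0.1    # e.g. a color-based estimate

def parallel_detect(frame, max_disagreement=1.0):
    a, b = detector_a(frame), detector_b(frame)
    if abs(a - b) <= max_disagreement:
        return (a + b) / 2.0             # consistent: fuse and accept
    return None                          # inconsistent: reject frame

frame = {"lane_center": 3.0}
print(parallel_detect(frame))  # the two estimates agree, so a fused value is returned
```

Returning `None` on disagreement is what gives the scheme its value as a confidence signal: a rejected frame can fall back to the tracker's prediction or trigger a driver warning, at the cost of running both pipelines.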
3.3 System Integration
Obstacles on real-world roads are likely to affect lane line detection accuracy. For example, guardrails can easily lead to false detections because of their strong lane-like features [54]-[56]. Therefore, organically combining the lane detection system with other obstacle detection functions helps improve the vehicle's overall environment perception. Nearby vehicles can also cause lane misdetection through similar colors, occlusion, or shadows. Literature [30][57]-[60] pointed out that detecting the vehicle ahead helps distinguish the lane line from vehicle shadows and reduces the influence of vehicle occlusion, which improves the accuracy of lane detection. Detecting road markings and drivable road areas can also improve lane detection accuracy [4][7][66]-[68]. Tesla and Mobileye have likewise proposed that road identification can enhance the stability of lane detection [69][70]. Road detection usually precedes lane line detection: accurate road detection can optimize the selection of the region of interest and improve lane detection efficiency. In addition, since the road boundary and the lane line share the same orientation, road detection can assist a lane line confidence evaluation system in verifying the detected lane line.
3.4 Sensor Integration
Sensor fusion can maximize the accuracy and stability of the lane detection system. Literature [76] used radar to detect surrounding vehicles and accurately segment vehicle edge pixels, obtaining a road image containing only lane lines. Literature [77][78] combines GPS and road images, using GPS to obtain road shape, edges, and trend to optimize the lane detection algorithm. Lidar has high precision and a wide range of environment perception, so combining lidar with a camera can compensate for the camera's shortcomings. Lidar can locate lane lines through the different reflectivities of the road surface and lane markings [88]. Literature [89] uses lidar to detect obstacles ahead and obtain an accurate drivable area as the basis for lane detection. In [56], a method based on fusing multiple cameras with lidar is proposed to detect urban road lane lines. Although sensor fusion is more accurate than the two preceding approaches, it requires a complicated calibration process between sensors, and the additional hardware also increases system cost.
Figure 2. Integrated approach to lane detection
4 Lane line detection evaluation methods
Lane line detection performance has traditionally been assessed by manual visual verification, which cannot objectively quantify the performance of a lane detection system. At the same time, because of the complexity of lane detection systems, differences in hardware and algorithms, data collection methods, and acquisition environments (weather, road conditions) all affect test results, so there is no unified method for evaluating lane lines. This section discusses the factors that affect the accuracy of lane line systems and summarizes an assessment framework for lane line detection systems, as shown in Figure 3. Lane line evaluation algorithms are divided into two types: online evaluation and offline evaluation.
Figure 3. Lane line system evaluation system
4.1 Factors affecting the accuracy of the lane line system
The accuracy of lane detection systems is often limited by a variety of factors. A lane line system that is precise on highways cannot be guaranteed to work in urban environments, because urban traffic conditions and lane markings are more complex. Therefore, the performance of a lane line system needs to be evaluated comprehensively rather than by a single index. As shown in Table 1, a lane line evaluation system needs to consider as many influencing factors as possible. The ideal approach would use a unified detection platform and quantitative indicators, but this is difficult to achieve in practice.
4.2 Offline Evaluation Methods
Offline evaluation based on image and video data is a commonly used approach. Well-known open data sets include KITTI and Caltech Roads [7][25]. Image data sets are easier to publish, but the position of the lane line must be marked manually in each image. Manual annotation requires a great deal of time and is not suitable for large-scale data sets; moreover, still images cannot fully reflect the driving environment or comprehensively measure a lane line algorithm. Evaluation based on video data better reflects the real driving environment and algorithm performance, but it significantly increases the difficulty of annotation. In this regard, literature [95] gives a semi-automatic video annotation method: a fixed number of rows is sampled from each frame, the sampled rows are concatenated in chronological and row order into time-series images, lane line pixels are marked on these images, and the lane line positions in the video are then restored. Scholars have also proposed lane line assessment methods based on artificially generated scenes [28][56], which use simulation software to automatically generate annotated images similar to real road conditions.
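Once annotations exist, a common offline metric is a pixel-level comparison of the detected lane mask against the ground-truth mask. The following sketch computes precision, recall, and F1 on binary masks; treating evaluation this way is a standard choice, though the specific metric varies between benchmarks:

```python
# Sketch of offline, pixel-level evaluation against manual annotation:
# precision, recall, and F1 between a detected lane mask and the
# ground-truth mask (both as nested lists of 0/1 values).

def evaluate_masks(detected, truth):
    tp = fp = fn = 0
    for det_row, gt_row in zip(detected, truth):
        for d, g in zip(det_row, gt_row):
            tp += 1 if (d and g) else 0
            fp += 1 if (d and not g) else 0
            fn += 1 if (g and not d) else 0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth    = [[0, 1, 1, 0], [0, 1, 1, 0]]
detected = [[0, 1, 0, 0], [0, 1, 1, 1]]
print(evaluate_masks(detected, truth))  # (0.75, 0.75, 0.75)
```

In practice the comparison is often relaxed to a pixel-distance tolerance around the annotated lane, since a detection one pixel off the hand-drawn line is not a real miss.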
4.3 Online Evaluation Methods
Online evaluation methods generally integrate other detection systems or sensors to comprehensively evaluate the confidence of lane line detection results. Road geometry obtained through road detection helps verify the plausibility of lane line detections in real time. Literature [96] proposed a real-time lane line evaluation algorithm that uses three indicators, lane line slope, road width, and vanishing point, to calculate the credibility of detection results. Reference [5] installs a camera on the side of the vehicle to provide a ground-truth reference position for the lane line detection results. In addition, [56] uses a camera and a lidar to establish a lane line detection confidence probability network. Literature [56][77] proposed using GPS, lidar, and high-precision maps, taking the obtained road width and direction as indicators for verifying detected lane lines. Literature [97] proposed using vanishing points, road rotation information, and an inter-frame similarity model to check the continuity of lane lines.
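A confidence score of this kind can be sketched as a weighted combination of plausibility cues. The expected values, tolerances, and equal weighting below are illustrative assumptions, not the parameters of any cited system:

```python
# Sketch of an online confidence check in the spirit of combining
# lane slope, road width, and vanishing-point cues. All expected
# values, tolerances, and the equal weighting are assumptions.

def cue_score(value, expected, tolerance):
    """1.0 at the expected value, falling linearly to 0 at tolerance."""
    return max(0.0, 1.0 - abs(value - expected) / tolerance)

def lane_confidence(slope, width, vp_x,
                    expect=(0.0, 3.5, 320.0),
                    tol=(0.5, 1.0, 80.0)):
    scores = [cue_score(v, e, t)
              for v, e, t in zip((slope, width, vp_x), expect, tol)]
    return sum(scores) / len(scores)   # equal-weight fusion

# A plausible detection: near-vertical lanes, ~3.5 m lane width,
# vanishing point near the image center (column 320).
good = lane_confidence(slope=0.1, width=3.4, vp_x=330.0)
bad = lane_confidence(slope=0.6, width=6.0, vp_x=600.0)
print(good > 0.7 > bad)  # the implausible detection scores low
```

A threshold on this score then decides whether to trust the frame's detection, fall back on the tracker, or alert the driver.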
4.4 Evaluation Index
Traditional lane line evaluation indices are mostly based on subjective observation, and no unified, effective test index for lane line systems has been established. Literature [98] designed a complete intelligent vehicle evaluation system, which mainly focuses on the overall intelligence of the vehicle. Literature [20] proposed that a lane detection system should meet five requirements: overcome road shadows, handle roads without obvious lane markings, identify curved roads, satisfy lane line shape constraints, and remain stable over time. Literature [101] proposes that performance evaluation of a lane line system cannot be limited to the detection rate; the variance and rate of change of the error between the detected value and the true value should also be used as performance indices. Literature [102] further proposes five evaluation indices: the accuracy of lane line features, the localization of the vehicle itself, the rate of change of lane line position, computational efficiency and accuracy, and cumulative time error.
5 Lane Line Detection System Based on ACP Parallel Theory
Because the variety of real scenes and environments cannot be effectively simulated, lane line detection performance is difficult to predict in unknown scenarios. Establishing an online evaluation and confidence estimation system can assess the accuracy of current detection results in real time and promptly notify the driver when an unreasonable result is found, but this is not enough to completely solve the problems that lane line detection algorithms face in design and evaluation.
To address this problem, this paper proposes a lane detection system design framework based on ACP parallel theory. Parallel theory is the product of advanced control theory and computer simulation; it was first proposed by researcher Fei-Yue Wang and has been successfully applied to the control and management of various complex systems [105]-[107]. The main purpose of establishing a parallel system is to connect the real world with one or more artificial societies, thereby overcoming the difficulties of modeling and testing complex systems. The establishment of parallel systems depends on ACP theory, which consists of three parts: Artificial societies, Computational experiments, and Parallel execution. First, the whole system is modeled as a complex system, forming a virtual mapping of the real system in computational space. Then, a large number of simulation experiments are performed on the system in the artificial society through computational experiments, so that the virtual system can face scenes that rarely occur or are difficult to reproduce in the real world. Through these experiments, a more complete system model and control method are obtained, and the model parameters are fed back to the physical layer. Finally, the complex system is run and tested in the real world and the artificial society in parallel and continuously improved, until a system that is difficult to model is well controlled. Based on ACP parallel theory, [109] gives a method for constructing a parallel vision system: computer simulation software creates artificial scenes similar to the real world, and high-performance computing platforms are used to solve computer vision problems.
This paper introduces the lane detection system into the parallel vision framework and designs the framework of a parallel lane detection system, as shown in Fig. 4. First, simulation software is used to create a virtual world similar to the real traffic environment. Then, through computational experiments, a large number of computer-labeled road images are combined with limited real-world images to train and validate a high-precision lane detection model. Finally, in the parallel execution stage, the results of continuous testing in the virtual world and the real world are fed back to the lane line detection model, and a safe and stable lane line detection system is realized through online learning and self-optimization. Introducing lane detection into the ACP parallel vision framework and using the parallel system to simulate annotated data under various circumstances will effectively resolve the dilemma of evaluating and testing lane line detection systems, enable thorough testing of the lane line system, and make it more secure and stable in responding to unexpected situations in the real world.
Figure 4. Parallel lane line detection method based on ACP theory
6 Conclusion
This paper discusses the development of lane detection technology in terms of algorithms, integration, and testing. Overall, lane line detection algorithms can be divided into methods based on traditional computer vision and methods based on machine learning. As an effective way to improve the accuracy and stability of lane detection, integration methods are divided into algorithm-level, system-level, and sensor-level integration. By analyzing the limitations of current lane line system performance evaluation and testing, this paper proposes a parallel lane line detection system design method based on ACP parallel theory. Parallel lane line detection will effectively solve the problems of lane line performance evaluation and testing and achieve accurate and stable lane line detection.
7 References
[1] Bellis, Elizabeth, and Jim Page. National motor vehicle crash causation survey (NMVCCS) SAS analytical user's manual. No. HS-811 053. 2008.
[2] Gayko, Jens E. "Lane departure and lane keeping." Handbook of Intelligent Vehicles. Springer London, 2012. 689-708.
[3] Visvikis, C., T. L. Smith, M. Pitcher, et al. Study on lane departure warning and lane change assistant systems. Transport Research Laboratory Project Rpt PPR 374, 2008.
[4] Bar Hillel, Aharon, et al. "Recent progress in road and lane detection: a survey." Machine vision and applications (2014): 1-19.
[5] McCall, Joel C., and Mohan M. Trivedi. "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation." IEEE Transactions on Intelligent Transportation Systems 7.1 (2006): 20-37.
[6] Yenikaya, Sibel, Gökhan Yenikaya, and Ekrem Düven. "Keeping the vehicle on the road: A survey on on-road lane detection systems." ACM Computing Surveys (CSUR) 46.1 (2013): 2.
[7] Fritsch, Jannik, Tobias Kuhnl, and Andreas Geiger. "A new performance measure and evaluation benchmark for road detection algorithms." Intelligent Transportation Systems-(ITSC), 2013 16th International IEEE Conference on. IEEE, 2013.
[8] Beyeler, Michael, Florian Mirus, and Alexander Verl. "Vision-based robust road lane detection in urban environments." Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014.
[9] Kang, Dong-Joong, and Mun-Ho Jung. "Road lane segmentation using dynamic programming for active safety vehicles." Pattern Recognition Letters 24.16 (2003): 3177-3185.
[10] Suddamalla, Upendra, et al. "A novel algorithm of lane detection addressing varied scenarios of curved and dashed lane marks." Image Processing Theory, Tools and Applications (IPTA), 2015 International Conference on. IEEE, 2015.
[11] Collado, Juan M., et al. "Adaptive road lanes detection and classification." International Conference on Advanced Concepts for Intelligent Vision Systems. Springer Berlin Heidelberg, 2006.
[12] Sehestedt, Stephan, et al. "Robust lane detection in urban environments." Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on. IEEE, 2007.
[13] Lin, Qing, Youngjoon Han, and Hernsoo Hahn. "Real-time lane departure detection based on extended edge-linking algorithm." Computer Research and Development, 2010 Second International Conference on. IEEE, 2010.
[14] Cela, Andrés F., et al. "Lanes Detection Based on Unsupervised and Adaptive Classifier." Computational Intelligence, Communication Systems and Networks (CICSyN), 2013 Fifth International Conference on. IEEE, 2013.
[15] Borkar, Amol, et al. "A layered approach to robust lane detection at night." Computational Intelligence in Vehicles and Vehicular Systems, 2009. CIVVS'09. IEEE Workshop on. IEEE, 2009.
[16] Kreucher, Chris, and Sridhar Lakshmanan. "LANA: a lane extraction algorithm that uses frequency domain features." IEEE Transactions on Robotics and Automation 15.2 (1999): 343-350.
[17] Jung, Soonhong, Junsic Youn, and Sanghoon Sull. "Efficient lane detection based on spatiotemporal images." IEEE Transactions on Intelligent Transportation Systems 17.1 (2016): 289-295.
[18] Xiao, Jing, Shutao Li, and Bin Sun. "A Real-Time System for Lane Detection Based on FPGA and DSP." Sensing and Imaging 17.1 (2016): 1-13.
[19] Ozgunalp, Umar, and Naim Dahnoun. "Lane detection based on improved feature map and efficient region of interest extraction." Signal and Information Processing (GlobalSIP), 2015 IEEE Global Conference on. IEEE, 2015.
[20] Wang, Yue, Dinggang Shen, and Eam Khwang Teoh. "Lane detection using spline model." Pattern Recognition Letters 21.8 (2000): 677-689.
[21] Wang, Yue, Eam Khwang Teoh, and Dinggang Shen. "Lane detection and tracking using B-Snake." Image and Vision Computing 22.4 (2004): 269-280.
[22] Li, Xiangyang, et al. "Lane detection and tracking using a parallel-snake approach." Journal of Intelligent & Robotic Systems 77.3-4 (2015): 597.
[23] Lim, King Hann, Kah Phooi Seng, and Li-Minn Ang. "River flow lane detection and Kalman filtering-based B-spline lane tracking." International Journal of Vehicular Technology 2012 (2012).
[24] Jung, Cláudio Rosito, and Christian Roberto Kelber. "An improved linear-parabolic model for lane following and curve detection." Computer Graphics and Image Processing, 2005. SIBGRAPI 2005. 18th Brazilian Symposium on. IEEE, 2005.
[25] Aly, Mohamed. "Real time detection of lane markers in urban streets." Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008.
[26] Borkar, Amol, Monson Hayes, and Mark T. Smith. "Robust lane detection and tracking with ransac and kalman filter." Image Processing (ICIP), 2009 16th IEEE International Conference on. IEEE, 2009.
[27] Lopez, A., et al. "Detection of Lane Markings based on Ridgeness and RANSAC." Intelligent Transportation Systems, 2005. Proceedings. 2005 IEEE. IEEE, 2005.
[28] López, A., et al. "Robust lane markings detection and road geometry computation." International Journal of Automotive Technology 11.3 (2010): 395-407.
[29] Chen, Qiang, and Hong Wang. "A real-time lane detection algorithm based on a hyperbola-pair model." Intelligent Vehicles Symposium, 2006 IEEE. IEEE, 2006.
[30] Tan, Huachun, et al. "Improved river flow and random sample consensus for curve lane detection." Advances in Mechanical Engineering 7.7 (2015): 1687814015593866.
[31] Hur, Junhwa, Seung-Nam Kang, and Seung-Woo Seo. "Multi-lane detection in urban driving environments using conditional random fields." Intelligent Vehicles Symposium (IV), 2013 IEEE. IEEE, 2013.
[32] Bounini, Farid, et al. "Autonomous Vehicle and Real Time Road Lanes Detection and Tracking." Vehicle Power and Propulsion Conference (VPPC), 2015 IEEE. IEEE, 2015.
[33] Wu, Dazhou, Rui Zhao, and Zhihua Wei. "A multi-segment lane-switch algorithm for efficient real-time lane detection." Information and Automation (ICIA), 2014 IEEE International Conference on. IEEE, 2014.
[34] Zhou, Shengyan, et al. "A novel lane detection based on geometrical model and gabor filter." Intelligent Vehicles Symposium (IV), 2010 IEEE. IEEE, 2010.
[35] Niu, Jianwei, et al. "Robust Lane Detection using Two-stage Feature Extraction with Curve Fitting." Pattern Recognition 59 (2016): 225-233.
[36] He, Bei, et al. "Lane marking detection based on Convolution Neural Network from point clouds." Intelligent Transportation Systems (ITSC), 2016 IEEE 19th International Conference on. IEEE, 2016.
[37] Li, Jun, Xue Mei, and Danil Prokhorov. "Deep neural network for structural prediction and lane detection in traffic scene." IEEE Transactions on Neural Networks and Learning Systems (2016).
[38] Gurghian, Alexandru, et al. "DeepLanes: End-To-End Lane Position Estimation Using Deep Neural Networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2016.
[39] Li, Xue, et al. "Lane detection based on spiking neural network and hough transform." Image and Signal Processing (CISP), 2015 8th International Congress on. IEEE, 2015.
[40] Kim, Jihun, et al. "Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection." Neural Networks (2016).
[41] He, Bei, et al. "Accurate and robust lane detection based on Dual-View Convolutional Neural Network." Intelligent Vehicles Symposium (IV), 2016 IEEE. IEEE, 2016.
[42] Revilloud, Marc, Dominique Gruyer, and Mohamed-Cherif Rahal. "A new multi-agent approach for lane detection and tracking." Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE, 2016.
[43] Bertozzi, Massimo, et al. "An evolutionary approach to lane markings detection in road environments." Atti del 6 (2002): 627-636.
[44] Tsai, Luo-Wei, et al. "Lane detection using directional random walks." Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008.
[45] Bai, Li, and Yan Wang. "Road tracking using particle filters with partition sampling and auxiliary variables." Computer Vision and Image Understanding 115.10 (2011): 1463-1471.
[46] Danescu, Radu, and Sergiu Nedevschi. "Probabilistic lane tracking in difficult road scenarios using stereovision." IEEE Transactions on Intelligent Transportation Systems 10.2 (2009): 272-282.
[47] Kim, ZuWhan. "Robust lane detection and tracking in challenging scenarios." IEEE Transactions on Intelligent Transportation Systems 9.1 (2008): 16-26.
[48] Shin, Bok-Suk, Junli Tao, and Reinhard Klette. "A super particle filter for lane detection." Pattern Recognition 48.11 (2015): 3333-3345.
[49] Das, Apurba, Siva Srinivasa Murthy, and Upendra Suddamalla. "Enhanced Algorithm of Automated Ground Truth Generation and Validation for Lane Detection System by M2BMT." IEEE Transactions on Intelligent Transportation Systems (2016).
[50] Labayrade, Raphael, SS Leng, and Didier Aubert. "A reliable road lane detector approach combining two vision-based algorithms." Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on. IEEE, 2004.
[51] Labayrade, Raphaël, et al. "A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance." IEICE Transactions on Information and Systems 89.7 (2006): 2092-2100.
[52] Hernández, Danilo Cáceres, Dongwook Seo, and Kang-Hyun Jo. "Robust lane marking detection based on multi-feature fusion." Human System Interactions (HSI), 2016 9th International Conference on. IEEE, 2016.
[53] Yim, Young Uk, and Se-Young Oh. "Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving." IEEE Transactions on Intelligent Transportation Systems 4.4 (2003): 219-225.
[54] Felisa, Mirko, and Paolo Zani. "Robust monocular lane detection in urban environments." Intelligent Vehicles Symposium (IV), 2010 IEEE. IEEE, 2010.
[55] Bertozzi, Massimo, and Alberto Broggi. "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection." IEEE Transactions on Image Processing 7.1 (1998): 62-81.
[56] Huang, Albert S., et al. "Finding multiple lanes in urban road networks with vision and lidar." Autonomous Robots 26.2 (2009): 103-122.
[57] Cheng, Hsu-Yung, et al. "Lane detection with moving vehicles in the traffic scenes." IEEE Transactions on Intelligent Transportation Systems 7.4 (2006): 571-582.
[58] Sivaraman, Sayanan, and Mohan Manubhai Trivedi. "Integrated lane and vehicle detection, localization, and tracking: A synergistic approach." IEEE Transactions on Intelligent Transportation Systems 14.2 (2013): 906-917.
[59] Wu, Chi-Feng, Cheng-Jian Lin, and Chi-Yung Lee. "Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42.4 (2012): 577-589.
[60] Huang, Shih-Shinh, et al. "On-board vision system for lane recognition and front-vehicle detection to enhance driver's awareness." Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on. Vol. 3. IEEE, 2004.
[61] Satzoda, Ravi Kumar, and Mohan M. Trivedi. "Efficient lane and vehicle detection with integrated synergies (ELVIS)." Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on. IEEE, 2014.
[62] Kim, Huieun, et al. "Integration of vehicle and lane detection for forward collision warning system." Consumer Electronics-Berlin (ICCE-Berlin), 2016 IEEE 6th International Conference on. IEEE, 2016.
[63] Qin, B., et al. "A general framework for road marking detection and analysis." Intelligent Transportation Systems (ITSC), 2013 16th International IEEE Conference on. IEEE, 2013.
[64] Kheyrollahi, Alireza, and Toby P. Breckon. "Automatic real-time road marking recognition using a feature driven approach." Machine Vision and Applications 23.1 (2012): 123-133.
[65] Greenhalgh, Jack, and Majid Mirmehdi. "Detection and Recognition of Painted Road Surface Markings." ICPRAM (1). 2015.
[66] Oliveira, Gabriel L., Wolfram Burgard, and Thomas Brox. "Efficient deep models for monocular road segmentation." Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016.
[67] Kong, Hui, Jean-Yves Audibert, and Jean Ponce. "Vanishing point detection for road detection." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
[68] Levi, Dan, et al. "StixelNet: A Deep Convolutional Network for Obstacle Detection and Road Segmentation." BMVC. 2015.
[69] Stein, Gideon P., Yoram Gdalyahu, and Amnon Shashua. "Stereo-assist: Top-down stereo for driver assistance systems." Intelligent Vehicles Symposium (IV), 2010 IEEE. IEEE, 2010.
[70] Raphael, Eric, et al. "Development of a camera-based forward collision alert system." SAE International Journal of Passenger Cars-Mechanical Systems 4.2011-01-0579 (2011): 467-478.
[71] Ma, Bing, S. Lakshmanan, and Alfred Hero. "Road and lane edge detection with multisensor fusion methods." Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on. Vol. 2. IEEE, 1999.
[72] Beyeler, Michael, Florian Mirus, and Alexander Verl. "Vision-based robust road lane detection in urban environments." Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE, 2014.
[73] Ozgunalp, Umar, et al. "Multiple Lane Detection Algorithm Based on Novel Dense Vanishing Point Estimation." IEEE Transactions on Intelligent Transportation Systems 18.3 (2017): 621-632.
[74] Lipski, Christian, et al. "A fast and robust approach to lane marking detection and lane tracking." Image Analysis and Interpretation, 2008. SSIAI 2008. IEEE Southwest Symposium on. IEEE, 2008.
[75] Kim, Dongwook, et al. "Lane-level localization using an AVM camera for an automated driving vehicle in urban environments." IEEE/ASME Transactions on Mechatronics 22.1 (2017): 280-290.
[76] Jung, H. G., et al. "Sensor fusion-based lane detection for LKS+ACC system." International Journal of Automotive Technology 10.2 (2009): 219-228.
[77] Cui, Dixiao, Jianru Xue, and Nanning Zheng. "Real-Time Global Localization of Robotic Cars in Lane Level via Lane Marking Detection and Shape Registration." IEEE Transactions on Intelligent Transportation Systems 17.4 (2016): 1039-1050.
[78] Jiang, Yan, Feng Gao, and Guoyan Xu. "Computer vision-based multiple-lane detection on straight road and in a curve." Image Analysis and Signal Processing (IASP), 2010 International Conference on. IEEE, 2010.
[79] Rose, Christopher, et al. "An integrated vehicle navigation system utilizing lane-detection and lateral position estimation systems in difficult environments for GPS." IEEE Transactions on Intelligent Transportation Systems 15.6 (2014): 2615-2629.
[80] Li, Qingquan, et al. "A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios." IEEE Transactions on Vehicular Technology 63.2 (2014): 540-555.
[81] Kammel, Soren, and Benjamin Pitzer. "Lidar-based lane marker detection and mapping." Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008.
[82] Manz, Michael, et al. "Detection and tracking of road networks in rural terrain by fusing vision and LIDAR." Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.
[83] Schreiber, Markus, Carsten Knöppel, and Uwe Franke. "Laneloc: Lane marking based localization using highly accurate maps." Intelligent Vehicles Symposium (IV), 2013 IEEE. IEEE, 2013.
[84] Clanton, J. M., D. M. Bevly, and A. S. Hodel. "A low-cost solution for an integrated multisensor lane departure warning system." IEEE Transactions on Intelligent Transportation Systems 10.1 (2009): 47-59.
[85] Montemerlo, Michael, et al. "Junior: The Stanford entry in the Urban Challenge." Journal of Field Robotics 25.9 (2008): 569-597.
[86] Buehler, Martin, Karl Iagnemma, and Sanjiv Singh, eds. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Vol. 56. Springer, 2009.
[87] Lindner, Philipp, et al. "Multi-channel lidar processing for lane detection and estimation." Intelligent Transportation Systems, 2009. ITSC'09. 12th International IEEE Conference on. IEEE, 2009.
[88] Shin, Seunghak, Inwook Shim, and In So Kweon. "Combinatorial approach for lane detection using image and LIDAR reflectance." Ubiquitous Robots and Ambient Intelligence (URAI), 2015 12th International Conference on. IEEE, 2015.
[89] Amaradi, Phanindra, et al. "Lane following and obstacle detection techniques in autonomous driving vehicles." Electro Information Technology (EIT), 2016 IEEE International Conference on. IEEE, 2016.
[90] Dietmayer, Klaus, et al. "Roadway detection and lane detection using multilayer laser scanner." Advanced Microsystems for Automotive Applications 2005. Springer Berlin Heidelberg, 2005. 197-213.
[91] Hernandez, Danilo Caceres, Van-Dung Hoang, and Kang-Hyun Jo. "Lane surface identification based on reflectance using laser range finder." System Integration (SII), 2014 IEEE/SICE International Symposium on. IEEE, 2014.
[92] Sparbert, Jan, Klaus Dietmayer, and Daniel Streller. "Lane detection and street type classification using laser range images." Intelligent Transportation Systems, 2001. Proceedings. 2001 IEEE. IEEE, 2001.
[93] Broggi, Alberto, et al. "A laser scanner-vision fusion system implemented on the TerraMax autonomous vehicle." Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006.
[94] Zhao, Huijing, et al. "A laser-scanner-based approach toward driving safety and traffic data collection." IEEE Transactions on Intelligent Transportation Systems 10.3 (2009): 534-546.
[95] Borkar, Amol, Monson Hayes, and Mark T. Smith. "A novel lane detection system with efficient ground truth generation." IEEE Transactions on Intelligent Transportation Systems 13.1 (2012): 365-374.
[96] Lin, Chun-Wei, Han-Ying Wang, and Din-Chang Tseng. "A robust lane detection and verification method for intelligent vehicles." Intelligent Information Technology Application, 2009. IITA 2009. Third International Symposium on. Vol. 1. IEEE, 2009.
[97] Yoo, Ju Han, et al. "A Robust Lane Detection Method Based on Vanishing Point Estimation Using the Relevance of Line Segments." IEEE Transactions on Intelligent Transportation Systems (2017).
[98] Li, Li, et al. "Intelligence Testing for Autonomous Vehicles: A New Approach." IEEE Transactions on Intelligent Vehicles 1.2 (2016): 158-166.
[99] Kluge, Karl C. "Performance evaluation of vision-based lane sensing: Some preliminary tools, metrics, and results." Intelligent Transportation System, 1997. ITSC'97. IEEE Conference on. IEEE, 1997.
[100] Veit, Thomas, et al. "Evaluation of road marking feature extraction." Intelligent Transportation Systems, 2008. ITSC 2008. 11th International IEEE Conference on. IEEE, 2008.
[101] McCall, Joel C., and Mohan M. Trivedi. "Performance evaluation of a vision based lane tracker designed for driver assistance systems." Intelligent Vehicles Symposium, 2005. Proceedings. IEEE. IEEE, 2005.
[102] Satzoda, Ravi Kumar, and Mohan M. Trivedi. "On performance evaluation metrics for lane estimation." Pattern Recognition (ICPR), 2014 22nd International Conference on. IEEE, 2014.
[103] Jung, Claudio Rosito, and Christian Roberto Kelber. "A robust linear-parabolic model for lane following." Computer Graphics and Image Processing, 2004. Proceedings. 17th Brazilian Symposium on. IEEE, 2004.
[104] Haloi, Mrinal, and Dinesh Babu Jayagopi. "A robust lane detection and departure warning system." Intelligent Vehicles Symposium (IV), 2015 IEEE. IEEE, 2015.
[105] F. Y. Wang, "Parallel system methods for management and control of complex systems," Control and Decision, vol. 19, no. 5, pp. 485-489, 514, May 2004.
[106] F. Y. Wang, "Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 3, pp. 630-638, Sep. 2010.
[107] F. Y. Wang, "Artificial societies, computational experiments, and parallel systems: A discussion on computational theory of complex social economic systems," Complex Syst. Complexity Sci., vol. 1, no. 4, pp. 25-35, Oct. 2004.
[108] L. Li, Y. L. Lin, D. P. Cao, N. N. Zheng, and F. Y. Wang, "Parallel learning - a new framework for machine learning," Acta Automat. Sin., vol. 43, no. 1, pp. 1-18, Jan. 2017.
[109] K. F. Wang, C. Gou, N. N. Zheng, J. M. Rehg, and F. Y. Wang, "Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives," Artif. Intell. Rev., vol. 48, no. 3, pp. 298-328, Oct. 2017.
[110] F. Y. Wang, N. N. Zheng, D. Cao, et al., "Parallel driving in CPSS: A unified approach for transport automation and vehicle intelligence," IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 4, pp. 577-587, 2017.
[111] C. Lv, Y. Liu, X. Hu, H. Guo, D. Cao, and F. Y. Wang, "Simultaneous observation of hybrid states for cyber-physical systems: A case study of electric vehicle powertrain," IEEE Transactions on Cybernetics, 2017.
[112] Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489.