Lane Detection Method under Low-Light Conditions Combining Feature Aggregation and Light Style Transfer




Publisher
Springer Journals
Copyright
Copyright © Allerton Press, Inc. 2023. Automatic Control and Computer Sciences, 2023, Vol. 57, No. 2, pp. 143–153.
ISSN
0146-4116
eISSN
1558-108X
DOI
10.3103/s0146411623020050

Abstract

Deep learning is widely used in lane detection, but applying it under environmental occlusion and low light remains challenging. On the one hand, an ordinary convolutional neural network (CNN) cannot recover lane information before and after an occlusion under low-light conditions. On the other hand, only a small amount of lane data (such as in CULane) has been collected under low-light conditions, and new data require considerable manual labeling. Given these problems, we propose a double attention recurrent feature-shift aggregator (DARESA) module, which exploits prior knowledge of lane shape in the spatial and channel dimensions and enriches the original lane features by repeatedly capturing pixel information across rows and columns. This indirectly increases the global feature information and the network's ability to extract fine-grained features. Moreover, we trained an unsupervised low-light style transfer model suitable for autonomous driving scenes. The model transfers daytime images in the CULane dataset to low-light images, eliminating the cost of manual labeling. In addition, adding an appropriate number of generated images to the training set enhances the environmental adaptability of the lane detector, yielding better detection results than training on CULane alone.
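The aggregation mechanism the abstract describes, repeatedly propagating pixel information across rows and columns so that lane cues survive occluded regions, can be illustrated with a slice-by-slice message-passing pass over a feature map. The sketch below is a deliberate simplification, not the paper's DARESA module: the function name, the `alpha` weight, and the plain ReLU-gated additive update are illustrative assumptions, and the attention components are omitted.

```python
import numpy as np

def recurrent_row_shift(feat, stride=1, alpha=0.5):
    """One downward row-shift pass over a (C, H, W) feature map.

    Each row accumulates a ReLU-gated message from the already-updated
    row `stride` above it, so evidence of a lane marking in one row can
    propagate into rows where the marking is occluded or too dark.
    An analogous pass can be run upward, leftward, and rightward over
    columns to cover all four directions.
    """
    out = feat.copy()
    height = feat.shape[1]
    for i in range(stride, height):
        # Message from the row above, gated by ReLU and scaled by alpha.
        out[:, i, :] += alpha * np.maximum(out[:, i - stride, :], 0.0)
    return out
```

Because each row reads from the row that was itself just updated, information injected at the top of the map decays geometrically (by `alpha` per step) but still reaches every later row in a single pass, which is the intuition behind enlarging the effective receptive field along the lane direction.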

Journal

Automatic Control and Computer Sciences (Springer Journals)

Published: Apr 1, 2023

Keywords: autonomous driving; obscured lane detection; light style transfer; fine-grained features
