Publication Detail
Object Classification in Asphalt Pavement Using Generative Model Based on Deep Learning
UCD-ITS-RP-22-93 | Conference Paper
Suggested Citation:
Chen, Yihan, Xingyu Gu, Hanyu Deng, Bingyan Cui, Zhen Liu, and Qipeng Zhang (2023) Object Classification in Asphalt Pavement Using Generative Model Based on Deep Learning. Transportation Research Board 101st Annual Meeting.
The automatic detection of pavement cracks can significantly improve the efficiency of road maintenance in pavement engineering. At present, artificial intelligence-based pavement crack detection often suffers from insufficient training data, which makes it difficult to train a model with high accuracy and good generalization ability. To address this problem, using a small-scale pavement-image dataset, a study was conducted on the application of deep learning-based generative models, including a generative adversarial network (GAN) and a convolutional autoencoder (CAE), to augment the training datasets and identify pavement cracks and other objects. A two-stage data-augmentation algorithm integrating the image geometric transformation method and the CAE-based image reconstruction approach is proposed herein. DenseNet was used as a comparative verification network for the various data-augmentation methods. Test results indicated that after 100 iterations of DenseNet training, under the same test set, the test accuracy of network classification based on the original dataset was 78.43%, the test accuracy with data augmentation based on the deep convolutional GAN (DCGAN) was 81.56%, and the test accuracy of the proposed two-stage method was 87.19%. The CAE image-reconstruction method was further used to enhance the DCGAN-based data augmentation, which improved the test accuracy to 82.19%. Under the same dataset sample size, compared with traditional data-augmentation methods such as geometric transformation and pixel color transformation, the CAE image-reconstruction method achieved higher recognition accuracy.
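The sketch below illustrates the general idea of the two-stage augmentation pipeline described in the abstract: geometric transformations applied to the original pavement images, followed by a convolutional autoencoder whose reconstructions serve as additional training samples. The layer sizes, transforms, and class names here are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a two-stage augmentation pipeline (geometric transforms
# followed by CAE-based image reconstruction). Architecture details are
# hypothetical and chosen only for illustration.
import torch
import torch.nn as nn
import torchvision.transforms as T

# Stage 1: geometric transformations applied to each original pavement image
# to enlarge the small-scale dataset.
geometric_augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=15),
])

# Stage 2: a small convolutional autoencoder; its reconstructions of the
# geometrically augmented images are added to the training set as extra samples.
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Usage: reconstruct a batch of augmented images to produce synthetic samples.
if __name__ == "__main__":
    cae = ConvAutoencoder()
    batch = torch.rand(4, 3, 128, 128)   # stand-in for pavement image patches
    reconstructed = cae(batch)            # CAE-based reconstructed samples
    print(reconstructed.shape)            # torch.Size([4, 3, 128, 128])
```

In practice, the autoencoder would first be trained to reconstruct the pavement images before its outputs are merged with the original and geometrically transformed images; the augmented set can then be used to train a classifier such as DenseNet, as the paper does for its comparative evaluation.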