Publication Detail
Deep Reinforcement Learning Based Platooning Control for Travel Delay and Fuel Optimization
UCD-ITS-RP-22-60 | Journal Article
Suggested Citation:
Yen, Chia-Cheng, Hang Gao, and Michael Zhang (2022) Deep Reinforcement Learning Based Platooning Control for Travel Delay and Fuel Optimization. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), 737–742
Vehicular emissions and traffic congestion have worsened with rapid urbanization. Heavier traffic burdens drivers with higher costs and longer travel times, and exposes pedestrians to unhealthy emissions such as PM, NOx, SO2, and greenhouse gases. In response to these issues, connected autonomous vehicles (CAVs), which enable information sharing between vehicles and infrastructure, have been proposed. With CAVs and advanced wireless technologies offering extremely low latency, platooning control can reduce travel delay, fuel consumption, and emissions by improving traffic efficiency. However, conventional platooning control algorithms require complex computations and are therefore poorly suited to real-time operation. To overcome this issue, this work designs a learning framework for platooning control that reduces travel delay and fuel consumption through the four basic platoon maneuvers: split, acceleration, deceleration, and no-op. We integrate reinforcement learning (RL) with neural networks (NNs) to model the non-linear relationships between inputs and outputs in this complex application. The experimental results show decreasing trends in delay and fuel usage and an increasing trend in reward. They demonstrate that the proposed deep reinforcement learning (DRL) platooning control optimizes average delay and fuel consumption by fine-tuning the speeds and sizes of platoons.
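To make the framework concrete, the sketch below shows one plausible shape of such a controller: a small DQN-style Q-network over the paper's four platoon maneuvers, with a reward that penalizes delay and fuel use. This is a minimal illustration, not the authors' implementation; the use of PyTorch, the state dimension, the network sizes, the reward weights, and the epsilon-greedy policy are all assumptions introduced here.

import random
import torch
import torch.nn as nn

ACTIONS = ["split", "accelerate", "decelerate", "no-op"]

class QNetwork(nn.Module):
    # Maps a platoon state vector (e.g., speeds, gaps, platoon size;
    # the 6-dimensional state here is a placeholder) to one Q-value
    # per maneuver.
    def __init__(self, state_dim=6, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def reward(delay_s, fuel_ml, w_delay=1.0, w_fuel=0.1):
    # Illustrative reward: jointly penalize travel delay and fuel
    # consumption. The weights are assumptions, not from the paper.
    return -(w_delay * delay_s + w_fuel * fuel_ml)

def select_action(qnet, state, eps=0.1):
    # Epsilon-greedy selection over the four platoon maneuvers.
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(state).argmax().item())

# Example: pick a maneuver for one (random) platoon state.
qnet = QNetwork()
state = torch.randn(6)
print(ACTIONS[select_action(qnet, state)])

Under this reading, the decreasing delay and fuel curves reported in the abstract correspond to the agent learning Q-values that favor maneuvers with higher (less negative) reward.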
Key words:
Deep reinforcement learning, Connected autonomous vehicles (CAVs), Platooning control, Arrival timing vector, Travel delay, Fuel optimization