Inverse reinforcement learning of risk-sensitive utility
The uncertain and stochastic nature of the real world poses a challenge for autonomous cars, which must make motion decisions that account for the safety of their passengers and of surrounding vehicles, whether or not those vehicles are autonomous. It is crucial for these systems to learn the driving patterns of other vehicles from their environment in order to predict their movements and make better decisions. In this research, we focus on the highway merging problem, in which an autonomous vehicle attempts to merge onto a highway, and solve it using Inverse Reinforcement Learning. Human behavior is complex, and both linear and exponential utility functions fail to capture the non-linearity of such decision making. To resolve this issue, we model the behavior with a one-switch utility function. We present an Inverse Reinforcement Learning technique that allows an autonomous vehicle to predict human driving patterns and merge efficiently onto a highway by modeling risk with a one-switch utility function.
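As a point of reference for the utility model named above: a one-switch utility function in the sense of Bell (1988) has the form u(x) = a*x - b*c^x, which sums a linear term and an exponential term, so the decision maker's risk attitude can change ("switch") once as wealth x grows — something neither a purely linear nor a purely exponential utility can express. The sketch below is illustrative only; the parameter values a, b, c are hypothetical and not taken from this work.

```python
import math

def one_switch_utility(x, a=1.0, b=0.5, c=0.8):
    """Bell's one-switch utility u(x) = a*x - b*c**x.

    Requires a > 0, b > 0, and 0 < c < 1 so that u is increasing
    in x while exhibiting decreasing absolute risk aversion:
    the agent is more risk-averse at low x than at high x.
    """
    return a * x - b * c ** x

def absolute_risk_aversion(x, a=1.0, b=0.5, c=0.8):
    """Arrow-Pratt coefficient r(x) = -u''(x) / u'(x) for u above."""
    lc = math.log(c)
    u1 = a - b * lc * c ** x        # u'(x)
    u2 = -b * lc ** 2 * c ** x      # u''(x)
    return -u2 / u1

# Utility rises with x, while risk aversion falls: the "switch"
# from more cautious to more risk-tolerant behavior as x grows.
print(one_switch_utility(0.0))       # u(0) = -b
print(absolute_risk_aversion(0.0) > absolute_risk_aversion(5.0))
```

In the merging setting described here, such a utility lets the learned model express, for example, a driver who is cautious when margins are tight but nearly risk-neutral when gaps are large, which a single linear or exponential family cannot.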