
dc.contributor.author: Bogert, Kenneth Daniel
dc.description.abstract: Robots deployed into many real-world scenarios are expected to face situations that their designers could not anticipate. Machine learning is an effective tool for extending the capabilities of these robots by allowing them to adapt their behavior to the situation in which they find themselves. Most machine learning techniques are applicable to learning either static elements in an environment or elements with simple dynamics. We wish to address the problem of learning the behavior of other intelligent agents that the robot may encounter. To this end, we extend a well-known Inverse Reinforcement Learning (IRL) algorithm, Maximum Entropy IRL, to address challenges expected to be encountered by autonomous robots during learning. These include: occlusion of the observed agent's state space due to limits of the learner's sensors or objects in the environment, the presence of multiple interacting agents, and partial knowledge of other agents' dynamics. Our contributions are investigated with experiments using simulated and real-world robots. These experiments include learning a fruit-sorting task from human demonstrations and autonomously penetrating a perimeter patrol. Our work takes several important steps towards deploying IRL alongside other machine learning methods for use by autonomous robots.
dc.subject: inverse reinforcement learning
dc.subject: machine learning
dc.subject: Markov decision process
dc.title: Inverse reinforcement learning for robotic applications
dc.title.alternative: hidden variables, multiple experts and unknown dynamics
dc.description.department: Computer Science
dc.description.major: Computer Science
dc.description.advisor: Prashant Doshi
dc.description.committee: Prashant Doshi
dc.description.committee: Lakshmish Ramaswamy
dc.description.committee: Don Potter
