
dc.contributor.authorPerez Barrenechea, Dennis David
dc.description.abstractPartially observable Markov decision processes (POMDPs) have been widely accepted as a rich framework for planning and control problems. In settings where multiple agents interact, however, POMDPs fail to model the other agents explicitly. The interactive partially observable Markov decision process (I-POMDP) is a new paradigm that extends POMDPs to multiagent settings. The I-POMDP framework models other agents explicitly, which makes exact solution infeasible for all but the simplest settings. This creates a need for good approximation methods: methods that find solutions with tight error bounds in a short amount of time. We develop a point-based method for solving finitely nested I-POMDPs approximately. The method maintains a set of belief points and forms value functions that include only the value vectors that are optimal at these belief points. Because solving an I-POMDP depends on predicting the actions of the other agents, we develop an interactive generalization of point-based value iteration (PBVI) that recursively solves all models of the other agents. We present empirical results in domains from the literature and discuss the computational savings of the proposed method.
dc.subjectMarkov Decision Process
dc.subjectMultiagent systems
dc.subjectDecision making
dc.titleAnytime point based approximations for interactive POMDPs
dc.description.departmentArtificial Intelligence
dc.description.majorArtificial Intelligence
dc.description.advisorPrashant Doshi
dc.description.committeePrashant Doshi
dc.description.committeeKhaled Rasheed
dc.description.committeeWalter D. Potter
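The point-based backup at the heart of PBVI-style methods described in the abstract can be sketched for a flat, single-agent POMDP. The two-state tiger-style problem, its numbers, and the restriction to a single agent are all assumptions made for this illustration, not the thesis's actual domains or its interactive generalization:

```python
import numpy as np

# Minimal point-based value iteration (PBVI) for a flat, single-agent POMDP.
# The thesis generalizes this style of backup to finitely nested I-POMDPs
# by also recursively solving the models of the other agents.

gamma = 0.95
S, A, O = 2, 3, 2          # states, actions (listen/open-L/open-R), observations

# T[a][s][s'] transition, Z[a][s'][o] observation, R[a][s] reward
T = np.array([np.eye(2),                     # listen: state unchanged
              np.full((2, 2), 0.5),          # opening a door resets the problem
              np.full((2, 2), 0.5)])
Z = np.array([[[0.85, 0.15], [0.15, 0.85]],  # listening is informative
              [[0.5, 0.5], [0.5, 0.5]],      # opening is not
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[-1.0, -1.0],                  # small cost to listen
              [-100.0, 10.0],                # open left
              [10.0, -100.0]])               # open right

def backup(b, Gamma):
    """Point-based backup: the alpha-vector maximizing value at belief b."""
    best, best_val = None, -np.inf
    for a in range(A):
        alpha_a = R[a].copy()
        for o in range(O):
            # Project every alpha-vector through (a, o); keep only the one
            # that is best at this particular belief point.
            proj = [gamma * T[a] @ (Z[a][:, o] * alpha) for alpha in Gamma]
            alpha_a = alpha_a + max(proj, key=lambda v: float(v @ b))
        if alpha_a @ b > best_val:
            best, best_val = alpha_a, float(alpha_a @ b)
    return best

# Value iteration restricted to a fixed set of maintained belief points:
# the value function keeps only vectors optimal at some point in B.
B = [np.array([0.5, 0.5]), np.array([0.9, 0.1]), np.array([0.1, 0.9])]
Gamma = [np.zeros(S)]
for _ in range(30):
    Gamma = [backup(b, Gamma) for b in B]
```

Because each backup keeps only the vector that is optimal at a maintained belief point, the work per iteration is linear in |B| rather than exponential in the horizon; the approximation error shrinks as B covers the reachable belief space more densely.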
