dc.contributor.author: Chandrasekaran, Muthukumaran
dc.description.abstract: Interactive Dynamic Influence Diagrams (I-DIDs) and Interactive Partially Observable Markov Decision Processes (I-POMDPs) are well-established finitely-nested frameworks that operationalize the planning and decision making of a self-interested agent in a multiagent setting under uncertainty. Both frameworks take the perspective of a single agent and assume no communication or pre-coordination between agents; intuitively, therefore, they are naturally suited to ad hoc or impromptu teamwork. However, we show that teamwork is implausible in these frameworks because of the way they operationalize the bounded rationality of the agents. First, though, we seek to scale I-DIDs in the number of agents by using the well-known concept of stochastic bisimulation to address the curse of dimensionality arising from the exponential growth, over time, in the number of models that the subject agent ascribes to the others. Next, we investigate the implausibility of teamwork in such frameworks and present a principled way to induce it by augmenting I-DIDs with level-0 models whose reasoning capabilities are enhanced using reinforcement learning. We further investigate teamwork in open settings, where one or more agents may leave or re-enter the system at will without announcing their departure or arrival to the others. Individual planning under uncertainty and without pre-coordination or communication between agents is already complex; agent openness exacerbates this complexity. We present ways for individual agents to plan and act in open-agent teams within the context of a variant of the I-POMDP framework. Finally, we expose a void in the literature: there is no theoretical framework suitable for analyzing pragmatic interactions spanning several time steps between typed agents in multiagent teams. We fill that void by formally establishing a novel game-theoretic framework, called the Bayesian Markov Game, in which Bayesian agents with explicitly defined finite-level types engage in a Markov game where each agent has private but incomplete information regarding the others' types. We characterize an equilibrium in this game and establish the conditions for its existence. In addition to laying strong theoretical foundations, we empirically demonstrate the effectiveness of all our approaches and algorithms on multiple benchmark cooperative domains.
dc.subject: multiagent systems
dc.subject: graphical models
dc.subject: reinforcement learning
dc.subject: Markov decision processes
dc.subject: ad hoc teams
dc.subject: game theory
dc.subject: open agent systems
dc.title: Frameworks and algorithms for individual planning under cooperation
dc.description.department: Computer Science
dc.description.major: Computer Science
dc.description.advisor: Prashant Doshi
dc.description.committee: Prashant Doshi
dc.description.committee: Yifeng Zeng
dc.description.committee: Leen-Kiat Soh
dc.description.committee: Khaled Rasheed
dc.description.committee: Walter D. Potter
