Simple item record

dc.contributor.author: Chandrasekaran, Muthukumaran
dc.date.accessioned: 2018-02-14T17:57:08Z
dc.date.available: 2018-02-14T17:57:08Z
dc.date.issued: 2017-08
dc.identifier.other: chandrasekaran_muthukumaran_201708_phd
dc.identifier.uri: http://purl.galileo.usg.edu/uga_etd/chandrasekaran_muthukumaran_201708_phd
dc.identifier.uri: http://hdl.handle.net/10724/37284
dc.description.abstract: Interactive Dynamic Influence Diagrams (I-DIDs) and Interactive Partially Observable Markov Decision Processes (I-POMDPs) are well-established finitely nested frameworks that operationalize the planning and decision-making of a self-interested agent in a multiagent setting under uncertainty. Because they take the perspective of a single agent and assume no communication or pre-coordination between agents, they are naturally suited to ad hoc or impromptu teamwork. We first seek to scale I-DIDs in the number of agents by addressing the curse of dimensionality, which stems from the exponential growth over time in the number of models the subject agent ascribes to the others; our approach uses the well-known concept of stochastic bisimulation. Next, we show that teamwork is nevertheless implausible in these frameworks because of the way they operationalize the agents' bounded rationality, and we present a principled way to induce it by augmenting I-DIDs with level-0 models whose reasoning is enhanced using reinforcement learning. We further investigate teamwork in open settings, where one or more agents may leave or re-enter the system at will without announcing their departure or arrival to the others. Individual planning under uncertainty and without pre-coordination or communication is already complex, and agent openness exacerbates this complexity. We present ways for individual agents to plan and act in open-agent teams within a variant of the I-POMDP framework. Finally, we expose a void in the literature: it lacks theoretical frameworks suitable for analyzing pragmatic interactions spanning several time steps between typed agents in multiagent teams. We fill that void by formally establishing a novel game-theoretic framework, the Bayesian Markov Game, in which Bayesian agents with explicitly defined finite-level types engage in a Markov game where each agent has private but incomplete information about the others' types. We characterize an equilibrium in this game and establish the conditions for its existence. In addition to laying strong theoretical foundations, we empirically demonstrate the effectiveness of all our approaches and algorithms on multiple cooperative benchmark domains.
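The compression step the abstract mentions rests on behavioral equivalence: candidate models of another agent that prescribe the same policy can be merged, with their probability mass summed, without changing the subject agent's plan. Below is a minimal illustrative sketch of that pruning idea in Python; the Model class, its fields, and the toy tiger-problem actions are hypothetical stand-ins, not the dissertation's implementation.

from dataclasses import dataclass

# Hypothetical sketch of behavioral-equivalence pruning. A model's policy
# is assumed precomputed and stored as a hashable action sequence.
@dataclass
class Model:
    name: str
    policy: tuple   # action sequence the model prescribes
    weight: float   # probability mass the subject agent assigns to it

def compress(models):
    """Merge models that prescribe the same policy, keeping one
    representative per equivalence class with the summed weight."""
    classes = {}
    for m in models:
        rep = classes.get(m.policy)
        if rep is None:
            classes[m.policy] = Model(m.name, m.policy, m.weight)
        else:
            rep.weight += m.weight
    return list(classes.values())

# Toy usage: three candidate models of the other agent, two of which
# prescribe the same policy and so collapse into one class.
models = [
    Model("m1", ("listen", "open-left"), 0.5),
    Model("m2", ("listen", "open-left"), 0.3),
    Model("m3", ("listen", "listen"), 0.2),
]
print([(m.name, m.weight) for m in compress(models)])
# [('m1', 0.8), ('m3', 0.2)]

Here two of the three candidate models fall into one equivalence class, so the subject agent solves and tracks two models instead of three; at scale, this kind of merging is what counters the exponential growth of the model space over time.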
dc.language: eng
dc.publisher: uga
dc.rights: public
dc.subject: multiagent systems
dc.subject: graphical models
dc.subject: bisimulation
dc.subject: reinforcement learning
dc.subject: Markov decision processes
dc.subject: teamwork
dc.subject: ad hoc teams
dc.subject: game theory
dc.subject: open agent systems
dc.title: Frameworks and algorithms for individual planning under cooperation
dc.type: Dissertation
dc.description.degree: PhD
dc.description.department: Computer Science
dc.description.major: Computer Science
dc.description.advisor: Prashant Doshi
dc.description.committee: Prashant Doshi
dc.description.committee: Yifeng Zeng
dc.description.committee: Leen-Kiat Soh
dc.description.committee: Khaled Rasheed
dc.description.committee: Walter D. Potter


Files in this item


There are no files associated with this item.

