Avi Pfeffer

Modeling the Reasoning of Agents in Games


Why do agents (people or computers) act as they do in strategic situations? Answering this question will shape how we build computer systems that assist, represent, or interact with people in multi-agent settings such as negotiation and resource allocation. We identify four reasoning patterns that agents might use: choosing an action for its direct effect on the agent's utility; attempting to manipulate another agent; signaling information that the agent knows to another agent; and revealing or hiding information that the agent does not itself know. We present criteria that characterize each reasoning pattern as a pattern of paths in a multi-agent influence diagram, a graphical representation of games. We define a class of strategies in which agents do not make unmotivated distinctions, and show that if all agents play such strategies, our categorization of reasoning patterns is complete: it captures every situation in which an agent has a reason to make a decision.
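To make the path-pattern idea concrete, the following is a minimal sketch (not taken from the talk) of a multi-agent influence diagram as a directed graph. The node names (D1, D2 for decisions, O for an observation, U1, U2 for utilities) and the two-pattern test are illustrative assumptions: a path from one agent's decision to its utility that avoids other agents' decisions corresponds to a direct effect, while a path routed through another agent's decision corresponds to a manipulation-style pattern.

```python
# Illustrative multi-agent influence diagram as an adjacency list.
# D1: agent 1's decision; O: an observation agent 2 makes;
# D2: agent 2's decision; U1, U2: the agents' utility nodes.
edges = {
    "D1": ["U1", "O"],   # agent 1's decision affects its own utility and O
    "O":  ["D2"],        # agent 2 observes O before deciding
    "D2": ["U1", "U2"],  # agent 2's decision affects both utilities
}

def paths(graph, start, goal, path=None):
    """Enumerate all directed paths from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:
            found += paths(graph, nxt, goal, path)
    return found

all_paths = paths(edges, "D1", "U1")
direct = [p for p in all_paths if "D2" not in p]
via_other = [p for p in all_paths if "D2" in p]
print(direct)     # direct-effect path: [['D1', 'U1']]
print(via_other)  # manipulation-style path: [['D1', 'O', 'D2', 'U1']]
```

The actual criteria in the talk are defined over influence diagrams with information and utility semantics, not bare reachability; this sketch only shows the flavor of classifying paths from a decision to a utility node.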

We then study how people use two of these reasoning patterns in a particular negotiation game. Using machine learning, we learn models of people's play and embed the learned models in computer negotiators. Negotiators that use our learned models outperform both classical game-theoretic agents and people. Finally, we learn models of how people's behavior changes over ongoing interactions with the same agent, in particular the degree to which retrospective reasoning (rewarding or punishing past behavior) and prospective reasoning (attempting to induce good future behavior) each play a role.


Official inquiries about AIIS should be directed to Alexandre Klementiev (klementi AT uiuc DOT edu)
Last update: 08/30/2007