Professor: HECTOR GEFFNER

Description

The focus of this course is autonomous behavior, and more precisely, the different methods for developing "agents" capable of making their own decisions in real or simulated environments. This includes characters in video games, robots, softbots on the web, etc. The problem of developing autonomous agents is a fundamental problem in Artificial Intelligence, where three basic approaches have been developed: the programmer-based approach, where the agent's responses are hardwired by a human programmer; the learning-based approach, where the agent learns to control its behavior from experience or from information obtained from a teacher; and the model-based approach, where the agent's control is derived automatically from a model describing the goals, the actions available, and the sensing capabilities. In the course, we review the three approaches to developing autonomous systems, with emphasis on the model-based approach, which in AI goes under the name of planning. We study autonomy in dynamic, partially observable settings involving a single agent or multiple agents. The course involves theory and experimentation.

Contents

  • Autonomous Behavior: Introduction, Approaches
  • Model 1: Classical Planning Models
  • Model 2: Markov Decision Processes
  • Model 3: Partially Observable MDPs
  • Model-based solution methods: Heuristic Search, Value and Policy Iteration, LRTA*, RTDP
  • Model-free solution methods: Reinforcement Learning
  • Programming the control: finite-state machines and controllers
  • Multi-agent models: fundamentals, Game Theory, introduction to multi-agent planning
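
To give a flavor of the model-based solution methods listed above, here is a minimal sketch of value iteration on a toy MDP. The MDP itself (two states, two actions, the transition probabilities, rewards, and discount factor) is invented for illustration and is not taken from the course materials:

```python
# Toy MDP, invented for this sketch: states 0 and 1, actions "stay" and "go".
# State 1 is absorbing with zero reward; action "go" in state 0 reaches
# state 1 with probability 0.9 and collects reward 1.
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {
        "stay": [(1.0, 0, 0.0)],
        "go":   [(0.9, 1, 1.0), (0.1, 0, 0.0)],
    },
    1: {
        "stay": [(1.0, 1, 0.0)],
        "go":   [(1.0, 1, 0.0)],
    },
}
GAMMA = 0.9  # discount factor (an assumption of this example)

def value_iteration(P, gamma, eps=1e-6):
    """Apply Bellman optimality backups in place until the residual < eps."""
    V = {s: 0.0 for s in P}
    while True:
        residual = 0.0
        for s in P:
            # Q-value of each action: expected reward plus discounted value.
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in P[s]]
            best = max(q)
            residual = max(residual, abs(best - V[s]))
            V[s] = best
        if residual < eps:
            return V

V = value_iteration(P, GAMMA)
# For this MDP, V[0] satisfies V[0] = 0.9*(1 + 0.9*V[1]) + 0.1*0.9*V[0]
# with V[1] = 0, giving V[0] = 0.9/0.91.
```

The same backup, applied only to states reachable under greedy action selection, is the core of the heuristic-search variants (LRTA*, RTDP) mentioned above.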

Bibliography

  • Artificial Intelligence: A Modern Approach, S. Russell and P. Norvig, Prentice Hall (3rd Edition), 2009
  • Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Yoav Shoham and Kevin Leyton-Brown, Cambridge Univ. Press, 2008
  • Automated Planning: Theory & Practice, Malik Ghallab, Dana Nau, Paolo Traverso, Morgan Kaufmann, 2004
  • Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto, MIT Press 1998
  • Neuro-Dynamic Programming, Dimitri P. Bertsekas and John N. Tsitsiklis, Athena Scientific 1996
  • Programming Game AI by Example, Mat Buckland, Jones & Bartlett Publishers, 2004

Evaluation

  • Exam: 50%
  • Programming projects: 25%
  • Seminar presentations: 25%

Format

  • 20h theory: professor lectures
  • 5h student seminars
  • Project homework