CSE443: Reasoning under uncertainty
School of Computer Science and Software Engineering (CSSE), Monash University

Ann Nicholson

The first part of the course focuses on two different techniques for modelling and reasoning under uncertainty: Markov Decision Processes and Bayesian (or Belief) networks. The second part covers machine learning in the context of Bayesian networks and Markov Decision Processes, along with other machine learning techniques.

Markov Decision Processes are one way of modelling a stochastic environment. We look at how they can be used for planning, starting with basic solution methods such as dynamic programming, and cover approximation methods suited to more complex domains.
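
As a rough illustration of the dynamic-programming approach, the sketch below runs value iteration on an invented two-state MDP. The states, actions, transition probabilities, rewards and discount factor are all assumptions made up for the example; they are not drawn from the course material.

    # Value iteration on a toy two-state MDP (Python).
    # All numbers below are invented for illustration.

    GAMMA = 0.9                     # discount factor (assumed)
    STATES = ["s0", "s1"]
    ACTIONS = ["stay", "move"]

    # P[(state, action)] is a list of (next_state, probability, reward) triples.
    P = {
        ("s0", "stay"): [("s0", 1.0, 0.0)],
        ("s0", "move"): [("s1", 0.8, 1.0), ("s0", 0.2, 0.0)],
        ("s1", "stay"): [("s1", 1.0, 2.0)],
        ("s1", "move"): [("s0", 0.9, 0.0), ("s1", 0.1, 2.0)],
    }

    V = {s: 0.0 for s in STATES}
    for _ in range(100):            # repeatedly apply the Bellman backup
        V = {
            s: max(
                sum(p * (r + GAMMA * V[s2]) for s2, p, r in P[(s, a)])
                for a in ACTIONS
            )
            for s in STATES
        }

    print(V)                        # approximate optimal state values

Once the values converge, a policy can be read off by choosing, in each state, the action whose backup achieves the maximum.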

Bayesian networks have rapidly become one of the leading technologies for applying AI to real-world problems. This follows the work of Pearl, Lauritzen and others in the late 1980s showing that Bayesian reasoning can be tractable in practice (although it is NP-hard in principle). We begin with a brief examination of the philosophy of Bayesianism, motivating the use of probabilities in decision making, agent modelling and data analysis, and contrasting Bayesian methods with certainty factors, fuzzy logic and the Dempster-Shafer calculus. We introduce Bayesian networks, their inference techniques and approximation methods, and illustrate their use in various applications (robotics and planning, medical decision making, intelligent tutoring, plan recognition, natural language generation and game playing).
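
As a small, purely hypothetical example of the kind of query a Bayesian network answers, the sketch below computes P(Rain | WetGrass = true) by enumerating a three-node network in which Rain and Sprinkler are independent parents of WetGrass. The structure and every probability value are invented for illustration.

    # Inference by enumeration in a tiny Bayesian network (Python).
    # Rain and Sprinkler are root nodes; WetGrass depends on both.
    # All probabilities are invented for illustration.

    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.1, False: 0.9}
    # P(WetGrass = true | Sprinkler, Rain)
    P_wet_given = {(True, True): 0.99, (True, False): 0.90,
                   (False, True): 0.80, (False, False): 0.0}

    def joint(rain, sprinkler, wet):
        """Joint probability via the chain rule over the network structure."""
        p_wet = P_wet_given[(sprinkler, rain)]
        return P_rain[rain] * P_sprinkler[sprinkler] * (p_wet if wet else 1 - p_wet)

    # P(Rain | WetGrass = true): sum out Sprinkler, then normalise.
    numer = {r: sum(joint(r, s, True) for s in (True, False)) for r in (True, False)}
    total = sum(numer.values())
    print({r: numer[r] / total for r in (True, False)})

Enumeration of this kind is exponential in the number of variables, which is why the exact and approximate inference techniques covered in the course matter for larger networks.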

There are many difficulties in constructing AI models (such as BNs or MDPs) from human domain knowledge, including a lack of domain expertise, problems in eliciting causal structure, and inconsistent elicited probabilities. This has led to a strong interest in automating the learning of such models from statistical data, which is the focus of the second part of the course.

The second part of the course begins with an introduction to machine learning. We then look at some of the main techniques available for learning Bayesian network structures, such as conditional independence learning and statistical methods, including Minimum Message Length (MML) inference. We also look at learning in MDPs, often called "reinforcement learning": a computational approach in which an agent tries to maximise the total amount of reward it receives while interacting with a complex, uncertain environment. Methods covered include Monte Carlo methods and temporal-difference learning.
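
As an illustrative sketch only, the following tabular Q-learning example (one common temporal-difference method) learns action values on an invented five-state corridor in which the agent starts in the middle and is rewarded only for reaching the right-hand end. The environment, rewards and parameter settings are assumptions made for the example.

    # Tabular Q-learning on a toy five-state corridor (Python).
    # The environment and parameters are invented for illustration.

    import random

    N_STATES = 5                  # states 0..4; state 4 is the goal
    ACTIONS = [-1, +1]            # step left or right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(500):
        s = 2                                       # start in the middle
        while s != N_STATES - 1:                    # until the goal is reached
            if random.random() < EPSILON:           # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # deterministic transition
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            # temporal-difference update toward the one-step bootstrapped target
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    print({s: max(Q[(s, act)] for act in ACTIONS) for s in range(N_STATES)})

The printed values should increase as states get closer to the goal (the terminal state itself keeps its initial value of zero), reflecting that nearby states are worth more under the learnt policy.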


Last updated 26.09.01 03:37:30 PM (K Fenwick) - Subject to change by the lecturer concerned.


