Towards agents with human-like decisions under uncertainty

Abstract

Creating autonomous virtual agents capable of exhibiting human-like behaviour under uncertainty is becoming increasingly relevant, for instance in multi-agent-based simulations (MABS), used to validate social theories, and as intelligent characters in virtual training environments (VTEs). The agents in these systems should not act optimally; instead, they should display intrinsic human limitations and make judgement errors. We propose a BDI-based model that allows uncertainty-related biases to emerge during the agent's deliberation process. A probability of success is calculated from the agent's beliefs and attributed to each available intention. These probabilities are then combined with each intention's utility using Prospect Theory, a widely validated descriptive model of human decision-making. We also distinguish risk from ambiguity, and allow for individual variability in attitudes towards these two types of uncertainty. In a travelling scenario, we demonstrate that more realistic agent behaviours can be obtained by applying the proposed model.
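To make the abstract's core mechanism concrete, the sketch below shows one common way an intention's utility and probability of success could be combined under Prospect Theory. This is an illustrative assumption, not the paper's exact formulation: it uses the standard value and probability-weighting functions with the median parameter estimates reported by Tversky and Kahneman (1992), and the `prospect_score` helper is hypothetical.

```python
# Illustrative sketch: scoring an intention with Prospect Theory.
# Parameters are the Tversky & Kahneman (1992) median estimates;
# the combination into a single score is an assumption for this example.
ALPHA = 0.88    # diminishing sensitivity for gains
BETA = 0.88     # diminishing sensitivity for losses
LAMBDA = 2.25   # loss aversion: losses loom larger than gains
GAMMA = 0.61    # curvature of the probability-weighting function

def value(x: float) -> float:
    """Value function: concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

def weight(p: float) -> float:
    """Inverse-S probability weighting: overweights small probabilities,
    underweights large ones."""
    num = p ** GAMMA
    return num / ((num + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect_score(utility: float, p_success: float) -> float:
    """Subjective score of an intention given its utility and the
    probability of success derived from the agent's beliefs."""
    return weight(p_success) * value(utility)
```

With these functions, an agent deviates from expected-utility maximisation in characteristically human ways: a 1% chance of success is treated as if it were roughly 5.5%, and a loss of 10 units feels more than twice as bad as a gain of 10 units feels good.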
