Research Student - PhD
Research Program: Biomedical & Human Technologies
Research Lab: Human Machine Interface
Explainability and Trust in Multi-Agent Human-Autonomous Teams
In remote, risk-laden environments, an autonomous AI agent can independently perform complex tasks and exhibit human-like abilities in dealing with ambiguity. This autonomy raises the risk of unexpected or unintended outcomes, so human controllers provide assistance or supervision to such autonomous systems through human-autonomy teaming (HAT). Because human controllers delegate risk and decision making to autonomous systems, establishing 'trust' in HAT is key to ensuring safety and effectiveness in mission-critical operations such as Mine Countermeasures (MCM). This project explores and demonstrates the features required for an Intermediary Explainer Teammate for HAT (IET) that positively influences trust in mission path planning and replanning during complex and risky MCM operations. The project will design and develop an IET that explains the status and intent of multi-agent autonomous swarms using planning algorithms such as Greedy, Lawnmower, A*, MCTS and Dec-MCTS. The IET will build on existing work in explainability to provide explanations to human operators in HAT. The project seeks to contribute to resolving principal-agent asymmetries in HAT, with significant impact on mission performance, trust, and reliability.
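As an illustration of the kind of planner the swarm agents above might run, the sketch below shows a minimal A* search on a small occupancy grid. The grid, costs, and Manhattan heuristic are assumptions chosen for demonstration only; they are not taken from the project itself.

```python
# Illustrative sketch: minimal A* path planning on a 4-connected 0/1 grid,
# where 1 marks a blocked cell (e.g. a suspected mine). Assumed setup only.
import heapq

def a_star(grid, start, goal):
    """Return a shortest path from start to goal as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: admissible heuristic on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {start: None}
    g_cost = {start: 0}

    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []                    # reconstruct by walking parents back
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None                          # goal unreachable

# Toy 3x4 minefield: row 1 is blocked except one gap at column 2
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
```

An explainer teammate such as the IET could, for instance, surface the chosen path and the heuristic trade-offs behind it to the human operator, rather than leaving the plan opaque.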
- Master of Business Administration, Griffith University (2017)
- Master of Research, School of Computer, Data and Mathematical Sciences, Western Sydney University