Dr Christopher Stanton

Research Fellow in Human-Machine Interaction

Would you trust a robot? Developing trusted autonomous systems.

This roboticist and human factors scientist researches how best to design autonomous systems and AI for human interaction, so that people can understand, trust, and rely upon them.



Artificial intelligence and autonomous systems are being entrusted with increasingly important tasks, from driving cars to trading stocks, making medical diagnoses, and supporting military operations.

However, faulty autonomous systems have crashed planes and cars, killing many people – so trust is crucial if this technology is to fulfil its promise of contributing to our safety and wellbeing.

For example, if a drone asked you to follow advice during a natural disaster, would you? What if the drone’s advice seemed counter-intuitive? If a military AI recommends a plan of attack, should the commanding officer agree to this course of action? How can the commanding officer be confident the AI is correct? Could a robot in your home persuade you to take your medicine or to do your rehabilitation exercises?

The aim of Dr Stanton’s research is to understand how these systems can be best designed to interact with people so that they can be trusted, relied upon, and integrated safely and successfully into our everyday lives.

My research examines trust towards robots and artificial intelligence (AI). If a robot or AI gave you advice, would you (and should you) trust that advice?

Dr Stanton investigates factors that influence our likelihood of trusting advice provided by artificial systems, and factors that contribute to successful team performance in teams comprised of both humans and machines.

To do this, he conducts experiments with human participants who interact with autonomous systems or AI as teammates.

Dr Stanton manipulates aspects of the robot’s behaviour, explanations, and appearance, then measures whether participants accept and act upon the advice provided by their artificial teammate.

The aim is to build autonomous systems where people understand what they are doing, why they are doing it, and what they will do next.

I imagine a future where robots and AI are more than just tools: they are teammates, improving our quality of life.


Dr Stanton’s research contributes to the development of trustable, transparent, and explainable artificial intelligence and autonomous systems that can operate alongside people as artificial teammates.

This work has applications in defence, aerospace, transportation, health, healthy ageing, and education, all sectors that increasingly use autonomous systems.

His research has contributed to the design of human-machine interfaces in defence that allow robots to do dangerous jobs, such as mine clearance, that are normally done by people.


Dr Stanton holds undergraduate degrees in arts (psychology and linguistics), information science (software engineering), and business (Hons 1st) from the University of Newcastle, and a PhD degree from the University of Technology Sydney in artificial intelligence and robotics.

He spent the first 10 years of his academic career writing AI algorithms for autonomous robotics, followed by 10 years conducting empirical psychology research investigating factors that influence our trust towards AI and autonomous robots.

He is now employed by both Western Sydney University (WSU) and the Australian Department of Defence’s Defence Science and Technology Group (DSTG) as a senior research scientist.

He leads the Human-Machine Interaction team at the MARCS Institute.

Find out more

His projects include teleoperation of humanoid robots using machine learning; the shell game with humanoid robots; and the development of MAHVS, an air traffic control game that allows researchers to conduct studies in human-autonomy teaming.



The issues faced by autonomous systems are too large to be solved by any single organisation working alone.

Dr Stanton collaborates with multiple university and industry partners across the defence and health sectors.


Phone: +61 2 9772 6802
Location: Western Sydney University, Westmead campus