Human-Machine Interaction

The investigation of human-machine interaction, in endeavours ranging from knowledge discovery and information visualisation to believable interaction in immersive cyber-physical worlds, is an increasingly essential component of understanding human functioning.

In the Human-Machine Interaction research program, we analyse the believability of the behaviour of virtual agents "living" in cyber-physical worlds — what are the features of believability, how can they be formalised and implemented in computational form, and how can believability be evaluated?

We also investigate cyber-physical interfaces, focussing both on virtual environments and on the tight integration of the virtual and physical worlds populated by humans as well as by virtual and robotic agents.

Developing the machinery for natural interaction between humans, robotic agents and virtual agents, building suitable interfaces for this task and integrating them into both physical and virtual environments is another significant direction of our research.

On the analytics side, we develop consistent visual computing techniques (including information visualisation and visual languages) for assisting the creative process of visual investigation and knowledge discovery.

More information on specific research themes can be found below.


Believability of Embodied Virtual Agents and Virtual Environments

Virtual agents are represented as graphical characters (avatars) that often resemble a human being. Given this representation, these agents should always aim to act "believably"; hence, enabling and evaluating their believability is essential.

Having both humans and agents fully immersed in and constrained by the same computer-simulated environment provides fantastic opportunities for artificial intelligence (AI) researchers to study human behaviour, and for AI-controlled avatars to engage and learn from their interactions with people who spend time in these virtual worlds. However, for people to interact naturally with AI-controlled avatars, those avatars need to keep humans engaged in meaningful joint activities: simply put, they need to be believable.

The overall goal of this project is to understand how to make electronic (virtual) marketplaces believable. An electronic marketplace is a space populated by computerised players that represent the variety of human and software traders, intermediaries, and information and infrastructure providers. Believable electronic marketplaces are perceived as "marketplaces where people are" and as "marketplaces that are alive and engaging". Present electronic marketplaces focus on backend transaction processing and catalogue-style interaction, and do not create such perceptions. A marketplace should have items and traders with "presence", constituting a rich interaction space.

The believability of the place depends on the believability of the presence and interactions in it, including the players' behaviour and the narrative scenarios of the marketplace (Simoff, Sierra & de Mántaras, 2009). The project builds on the research and technologies developed under ARC grant DP0879789.

Visualisation, Visual Languages, Visual Reasoning and Visual Computing

This project will focus on the development of appropriate interactive visualisations and visual languages for supporting analysis, discovery and decision-making processes.

One example of a sub-project in this area is visualisation of information about the quality of interactions for supporting decision making. We interact when we work, when we learn, when we visit a doctor, and when we play. With the advent of information and communications technology we can collect rich data (video, audio, and various transcripts including text chat) about such interactions. This opens an opportunity to monitor the dynamics of interactions, to obtain deeper insights into how they unfold, and to deliver this information to the interacting parties.

This project aims to develop quantitative measures of the quality of interactions, along with visualisation principles for the design of technology that visualises the dynamics of unfolding interactions and presents this information appropriately on displays ranging from large screens to mobile devices. The overall purpose of this technology is to deliver such information to the point of decision making. The project builds on previous work and technology development by Simoff and his colleagues (Simoff, 2008; Simoff & Galloway, 2008; Deray & Simoff, 2009a; Deray & Simoff, 2009b; Deray & Simoff, 2009c).
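To make the notion of a quantitative interaction measure concrete, here is a minimal sketch in Python that scores the turn-taking balance of a logged conversation. The transcript format and the measure itself are illustrative assumptions for this page, not the measures under development in the project.

```python
from collections import Counter

def turn_balance(transcript):
    """Turn-taking balance for a logged interaction.

    transcript: list of (timestamp, speaker, utterance) tuples.
    Returns a score in (0, 1]: 1.0 means all parties took equally
    many turns; values near 0 mean one party dominated.
    """
    turns = Counter(speaker for _, speaker, _ in transcript)
    if not turns:
        raise ValueError("empty transcript")
    counts = turns.values()
    # Ratio of the least to the most active participant.
    return min(counts) / max(counts)

# Hypothetical text-chat excerpt from a consultation.
chat = [
    (0.0, "doctor", "How have you been sleeping?"),
    (3.2, "patient", "Not well, maybe four hours a night."),
    (7.5, "doctor", "Since when?"),
    (9.1, "patient", "About two weeks."),
]
print(f"turn balance: {turn_balance(chat):.2f}")  # 1.00 here
```

A monitoring tool could track such scores over a sliding window and render the resulting trend differently on a wall display than on a phone.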

Another sub-project example is in visual analytics, addressing the combination of automated analysis techniques, interactive visualisation techniques, visual reasoning and analytical reasoning. The long-term goal of this project is to provide an effective platform for visual analytics of very large and information-rich data sets that can smartly adapt to the analyst's profile and operate on different devices.

By "smart adaptation" we mean the ability to provide the best display (projection) of analytical images with respect to the analytical task, the analyst's preference, analytic behaviour and some additional rules coming from broader knowledge about human cognition, visual communication and human visual processing system. The project builds on the results and research problems identified in (Simoff, 2008; Simoff & Galloway, 2008; Simoff, Böhlen & Mazeika, 2008).

Preserving and Simulating Cultures in Virtual Worlds

Culture is not something one can easily learn from a book. The traditional way of capturing and preserving the various attributes of a culture combines the results of archaeological excavations with written sources, yielding a set of functional descriptions and illustrations that describe the culture. The drawbacks of this approach include a lack of realism and interactivity; moreover, it is difficult to recreate, preserve and teach culture through text and diagrams alone.

The overall goal of our project is to preserve and simulate cultures within 3D Virtual Worlds, where a heritage site is reconstructed and populated with autonomous computational agents behaving similarly to the actual people of the given culture. To achieve this, we will extend our work (Bogdanovych et al., 2010), which combines the Virtual Institutions technology (Bogdanovych & Simoff, 2011) with imitation learning (for teaching cultural characteristics to autonomous agents through embodied interactions with human experts).

We populate Virtual Worlds with computational agents capable of capturing expert knowledge, improving it through social learning and then teaching ancient cultures to visitors through embodied interactions. This project will build on our existing technology, which was used for simulating the culture of the city of Uruk in the Virtual World of Second Life (Bogdanovych et al., 2011) and won third place in an international artificial intelligence contest organised by the US Army (Orlando, Florida, February 2011), as well as on the research and technologies developed under ARC grant DP0879789.
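As a rough illustration of the imitation-learning idea, assuming demonstrations are logged as (situation, action) pairs, an agent can replay the expert action whose recorded situation is most similar to its current one. This nearest-neighbour sketch is only a stand-in for the project's actual learning machinery, and the feature encoding and actions below are invented for the example.

```python
import math

class ImitationPolicy:
    """Nearest-neighbour imitation of recorded expert demonstrations.

    Demonstrations are (situation, action) pairs, where a situation is
    a numeric feature vector (e.g. time of day, distance to the river).
    """
    def __init__(self):
        self.demos = []  # list of (features, action)

    def record(self, features, action):
        self.demos.append((list(features), action))

    def act(self, features):
        # Replay the action whose demonstrated situation is closest.
        return min(self.demos,
                   key=lambda demo: math.dist(demo[0], features))[1]

# Hypothetical demonstrations from an expert playing a citizen of Uruk.
policy = ImitationPolicy()
policy.record([0.1, 0.9], "fetch_water")   # early morning, near the river
policy.record([0.5, 0.2], "trade_goods")   # midday, in the marketplace
policy.record([0.9, 0.9], "return_home")   # evening, near the river

print(policy.act([0.45, 0.25]))  # -> 'trade_goods'
```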

Believable Human-Computer Interaction with Motion Capture

This project seeks to enable natural ways of interacting with 3D Virtual Worlds, avatars and autonomous agents, so that those interactions are similar to interactions between people in the real world. In particular, we will investigate using full-body motion capture for manipulating objects in a virtual world as well as for building the virtual world itself. We will also extend our work on utilising motion capture for real-time motion streaming onto an avatar in a virtual world (Bogdanovych, 2011). High-end motion capture equipment (XSENS MVN) is employed to investigate how a combination of speech recognition and motion capture can be used for navigating a virtual world and building new objects within it.

We will also investigate the impact of full-body motion capture on the future of video games. During the last five years the gaming industry has experienced a significant paradigm shift. Traditional input devices like the keyboard, mouse, touch screen and hand-held game console are being widely replaced by motion capture technology such as the Nintendo Wii, PlayStation Move and Microsoft Kinect. Players are embracing a change that allows them to employ their entire bodies in the game and interact with the computer in a more natural and intuitive way.

To illustrate this idea we have developed the Motion Capture Basketball video game (Bogdanovych, 2011). In this game a player equipped with an XSENS MVN suit pantomimes various basketball moves, while the corresponding avatar replicates those moves and plays with the ball in the virtual world of Second Life. The key novelty shown in this game is that no gesture recognition is conducted. Instead, the data received from the suit is processed and streamed directly into the virtual world. This data stream is then used for animating the avatar and estimating the ball's movement from the physical parameters of the player's body joints. The motion stream is broadcast to all the objects present in the virtual world, bringing new possibilities to game development.
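A minimal sketch of the direct-streaming idea follows. It assumes the suit delivers per-joint position frames and the virtual world exposes a pose-update call; `suit.read_frame`, `world.set_avatar_pose` and `world.apply_ball_impulse` are hypothetical interfaces standing in for the XSENS SDK and the Second Life object interface, not real APIs. Note that no gesture classification happens anywhere in the loop.

```python
import time

def stream_motion(suit, world, avatar_id, hz=30):
    """Forward raw motion-capture frames straight to the avatar.

    Each frame of joint positions read from the suit is applied to the
    avatar as-is, and the hand velocity estimated between frames is
    handed to the world's physics to drive the ball.
    """
    prev_hand = prev_t = None
    while True:
        frame = suit.read_frame()               # {joint_name: (x, y, z)}
        world.set_avatar_pose(avatar_id, frame)
        hand, t = frame["right_hand"], time.monotonic()
        if prev_hand is not None:
            dt = t - prev_t
            velocity = tuple((a - b) / dt for a, b in zip(hand, prev_hand))
            world.apply_ball_impulse(avatar_id, velocity)
        prev_hand, prev_t = hand, t
        time.sleep(1.0 / hz)
```

Because the loop simply forwards frames, every object in the world can react to the same motion stream, which is what makes the broadcast design interesting for game development.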

We will investigate our hypothesis that, for games like basketball, it is possible to adjust the physics system so that movements that result in scoring a point in the game would also result in scoring on an actual physical court, and vice versa.

Real-Time Human-Robot Interactive Coaching System with Full-Body Control Interface

With autonomous robots becoming increasingly prevalent in society, natural and intuitive methods are required to interact with, guide and improve robot behaviour. When one person coaches another, the objective of the coach is to use their relevant expert knowledge and experience to improve the task performance of the person being coached. This knowledge transfer is often verbal, but can be aided by demonstration, pictures, videos and other forms of communication. However, what is the best way to coach an autonomous robot?

Learning by demonstration, observation and imitation are approaches to robot learning in which a teacher (or coach) provides examples of the desired robot behaviour. Examples range from a teleoperated robot recording the actions performed by the teacher, to autonomous robots learning to perform actions by watching a human teacher perform a similar action. Likewise, in the realm of virtual agents, imitation learning has been used to teach autonomous agents in gaming environments to perform complex manoeuvres demonstrated by human experts.

In this project we consider how best to teach humanoid robots to perform complex movement patterns by mimicking the behaviour of a human operator wearing a full-body motion capture suit. We will build on our initial experiments (Bogdanovych, Stanton et al., 2011) using motion capture for coaching robots playing soccer in the RoboCup Standard Platform League (SPL).
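A simplified sketch of the retargeting step that such coaching relies on is shown below, assuming the suit reports human joint angles and the robot accepts joint-angle commands. The joint names, scales, offsets and limits are illustrative assumptions; the actual SPL setup involves much more, including balance, timing and safety.

```python
# Hypothetical retargeting of human mocap joint angles onto a humanoid robot.
JOINT_MAP = {
    # human suit joint -> (robot joint, scale, offset in radians)
    "right_shoulder_pitch": ("RShoulderPitch", 1.0, 0.0),
    "right_elbow_roll":     ("RElbowRoll",     1.0, 0.0),
    "left_shoulder_pitch":  ("LShoulderPitch", 1.0, 0.0),
    "left_elbow_roll":      ("LElbowRoll",     1.0, 0.0),
}
ROBOT_LIMITS = {  # robot joint -> (min, max) in radians
    "RShoulderPitch": (-2.09, 2.09), "RElbowRoll": (0.03, 1.54),
    "LShoulderPitch": (-2.09, 2.09), "LElbowRoll": (-1.54, -0.03),
}

def retarget(human_angles):
    """Map one frame of human joint angles to safe robot joint commands."""
    commands = {}
    for human_joint, angle in human_angles.items():
        if human_joint not in JOINT_MAP:
            continue  # joints the robot does not have are dropped
        robot_joint, scale, offset = JOINT_MAP[human_joint]
        lo, hi = ROBOT_LIMITS[robot_joint]
        # Clamp to the robot's mechanical range before commanding.
        commands[robot_joint] = max(lo, min(hi, angle * scale + offset))
    return commands

frame = {"right_shoulder_pitch": 2.5, "right_elbow_roll": 0.8}
print(retarget(frame))  # shoulder clamped to 2.09, elbow passed through
```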

Our long-term goal is to explore methods of teaching robots, in real time, to extend and improve their capabilities without explicit programming.

Our team

Research Program Leader
Professor Simeon Simoff

Researchers
Dr Anton Bogdanovych
Dr Omar Mubin
Dr Quang Vinh Nguyen
Dr Laurence Park
Dr Christopher Stanton
Dr Glenn Stone

Research Students
Muneeb Ahmad