Robots are expected to be used in a variety of home contexts and to interact with novice users, for example assisting dependent elderly people with daily housework (e.g., carrying and unpacking groceries, fetching and pouring a glass of water). Enabling robots to adapt to their environment by learning context-specific tasks is necessary for them to be used effectively by non-programming users. Several methods have been proposed for teaching new skills to robots while keeping the human in the loop. Among these, Reinforcement Learning (RL) is the most common. However, the literature reports several issues with including human trainers in RL scenarios. Several studies report a positive bias in human-generated RL rewards, and that these reward signals change as learning progresses, making them inconsistent over time (the trainer adapts her strategy). This can be explained by the difficulty human trainers have in teaching basic procedural motions: they tend to exaggerate their demonstrations or to become more lenient over time.

In education, a good instructor maintains a mental model of the learner's state (what has been learned and what is still confusing). This helps the teacher structure the learning task appropriately, with timely feedback and guidance. The learner can help the instructor by expressing their internal state through communicative acts that reveal their understanding, confusion, and attention. However, a robot's learning parameters can be overwhelming for a novice and increase the human's workload, leading to more inaccurate feedback and hence poorer robot learning. The challenge lies in training humans to be efficient trainers, enabling them to plan, assess, and manage the robot's learning.
Another notable issue is the disengagement of humans during the training task. Teaching procedural skills to a robot learner can be time-consuming and repetitive. This often results in increased noise in human feedback, making it less reliable. Researchers have proposed several strategies for the robot to cope with this, such as detecting inconsistencies and asking for additional feedback. A recent work proposes designing an adversarial game in which the human must disturb the robot's learning. This project expands upon that idea and investigates how collaborative and competitive games could elicit better-quality feedback when robots are learning from humans. Inspired by instructional design, we will study how building teaching tools for human teachers can effectively improve the robot's learning. We will also aim to keep the trainer engaged longer by identifying and integrating gamification elements into the training.
This is a 3-year project funded by the ARC, starting in mid-2021.
For this project we are looking to hire two PhD students in the Faculty of Engineering, School of Computer Science. The project will make extensive use of the National Facility for Human Robot Interaction Research.
For more information about the PhD positions please see:
Or contact Dr Wafa Johal: firstname.lastname@example.org