Pint of Robotics: Learning Human Actions: from Perception to Robot Learning, Dr Claudio Coppola, Queen Mary University of London

Date
Wednesday 24 February 2021, 1700-1800
Location
Zoom (see below for registration link)

Speaker: Dr Claudio Coppola, Queen Mary University of London

Title: Learning Human Actions: from Perception to Robot Learning

When: Wednesday 24 February 2021, 1700-1800

Where: Zoom (free but registration required) https://universityofleeds.zoom.us/meeting/register/tZElf-yoqzotHtGwFMLgm2MJ3xMxGyBQA11w

Abstract: While understanding and replicating human activities is an easy task for other humans, it remains a difficult skill to program into a robot. Understanding human actions has become an increasingly popular research topic because of its wide range of applications, such as automated grocery stores, surveillance, sports coaching software and ambient assisted living. In robotics, understanding human actions allows us not only to classify an action automatically but also to learn how to replicate it to achieve a specific goal. Teaching robots to achieve goals by demonstrating the movements reduces programming effort and makes it possible to teach more complex tasks. Human demonstration of simple robotic tasks has already found its way into industry (e.g. robotic painting, simple pick and place of rigid objects), but it cannot yet be applied to the dexterous handling of generic objects (e.g. soft and delicate objects), which would broaden its applicability (e.g. to food handling). This talk presents an approach for recognising sequences of human social activities using a combination of probabilistic temporal ensembles, together with a system for teleoperating a robot hand using a low-cost setup composed of a haptic glove and a depth camera.

Short Bio: Claudio Coppola is a Postdoctoral Researcher at Queen Mary University of London, working in the domain of Cognitive Robotics with a particular focus on Human Activity Understanding and Robot Learning from Demonstration. During his PhD, he developed an approach for recognising human social activities and activities of daily living using depth cameras, aimed at social robots in assistive scenarios. His current work focuses on using human demonstration to teach grasps to a four-fingered manipulator in a manufacturing context. In particular, he is developing a teleoperation framework that maps human hand movements onto the robot's, and designing a learning model that learns grasps from those demonstrations.