- Wednesday 9 December 2020, 1700-1800
- Zoom - e-mail email@example.com for access
Assistive robotic manipulators have the potential to support the lives of people with severe motor impairments. They can enable individuals with disabilities to independently perform activities of daily living, such as drinking, eating, manipulating objects, and opening doors, and to (re)integrate into working life. An attractive solution is to enable motor-impaired users to teach the robot by providing demonstrations of manipulation tasks. The user “manually” controls the robot through an intuitive “hands-free” human-robot interface to provide a demonstration, after which the robot learns the demonstrated task. However, the control of robotic manipulators by motor-impaired individuals remains a challenging problem. In this talk, a novel head gesture-based interface for hands-free robot control and a framework for robot learning from demonstration are presented. The head gesture-based interface consists of a camera mounted on the user’s hat, which records the changes in the viewed scene caused by head motion. Head gestures are recognized using optical flow for feature extraction and a support vector machine for gesture classification, and the recognized gestures are then mapped to robot control commands for performing object manipulation tasks. The robot learns the demonstrated task by generating a sequence of actions, and a Gaussian Mixture Model is used to segment the demonstrated path of the robot’s end-effector. During robotic reproduction of the task, a modified Gaussian Mixture Model and Gaussian Mixture Regression are used to adapt to environmental changes. The proposed framework was evaluated in a real-world assistive robot scenario in a usability study involving 13 participants: 12 able-bodied and 1 tetraplegic. The presented results demonstrate the potential of the proposed framework to enable individuals with severe motor impairments to demonstrate manipulation tasks.
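The recognition-to-control mapping described in the abstract can be sketched as follows. This is a minimal illustration, not the talk's implementation: a nearest-centroid classifier stands in for the trained support vector machine, and the gesture names, flow features, and command strings are all hypothetical placeholders.

```python
import math

# Per-gesture centroids of a toy optical-flow feature:
# (mean horizontal flow, mean vertical flow) over a gesture window.
# These values are illustrative; a trained SVM would replace this lookup.
CENTROIDS = {
    "nod_down": (0.0, -1.0),
    "nod_up": (0.0, 1.0),
    "turn_left": (-1.0, 0.0),
    "turn_right": (1.0, 0.0),
}

# Hypothetical mapping from recognized head gestures to robot commands.
COMMANDS = {
    "nod_down": "move_end_effector_down",
    "nod_up": "move_end_effector_up",
    "turn_left": "move_end_effector_left",
    "turn_right": "move_end_effector_right",
}

def classify(feature):
    """Return the gesture whose centroid is closest to the flow feature."""
    def dist(centroid):
        return math.hypot(feature[0] - centroid[0], feature[1] - centroid[1])
    return min(CENTROIDS, key=lambda g: dist(CENTROIDS[g]))

def to_command(feature):
    """Map one flow-feature vector to a robot control command."""
    return COMMANDS[classify(feature)]

print(to_command((0.1, -0.9)))  # a mostly-downward flow -> move_end_effector_down
```

The structure (feature extraction, classification, then a gesture-to-command table) mirrors the pipeline the abstract describes, while the classifier itself is deliberately simplified.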
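The reproduction step can likewise be sketched with Gaussian Mixture Regression: given a GMM fitted over (time, position) pairs from a demonstration, GMR predicts the end-effector position at a query time as a responsibility-weighted blend of each component's conditional mean. The sketch below is one-dimensional with hand-picked component parameters (the talk's modified GMM and multivariate details are not reproduced here).

```python
import math

def gaussian_pdf(t, mean, var):
    """Scalar Gaussian density N(t | mean, var)."""
    return math.exp(-0.5 * (t - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gmr(t, components):
    """Predict x(t) from a GMM fitted over (t, x) pairs.

    Each component is a dict with keys:
      pi   - mixture weight
      mu_t - mean of the time dimension
      mu_x - mean of the output dimension
      s_tt - variance of t
      s_xt - covariance between x and t
    """
    # Responsibility of each component for the query time t.
    weights = [c["pi"] * gaussian_pdf(t, c["mu_t"], c["s_tt"]) for c in components]
    total = sum(weights)
    # Blend the per-component conditional means E[x | t].
    x = 0.0
    for w, c in zip(weights, components):
        cond_mean = c["mu_x"] + c["s_xt"] / c["s_tt"] * (t - c["mu_t"])
        x += (w / total) * cond_mean
    return x

# Illustrative two-component model of a 1-D demonstrated path.
components = [
    {"pi": 0.5, "mu_t": 0.0, "mu_x": 0.0, "s_tt": 1.0, "s_xt": 1.0},
    {"pi": 0.5, "mu_t": 4.0, "mu_x": 4.0, "s_tt": 1.0, "s_xt": 1.0},
]
print(gmr(2.0, components))  # query the regressed path between the components
```

Adapting to environmental changes, as the abstract describes, would amount to updating the component parameters before regressing; the conditional-mean formula itself stays the same.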
Maria Kyrarini is a postdoctoral research fellow at the University of Texas at Arlington under the advisement of Professor Dr. Fillia Makedon. She is also the assistant director of the Heracleia Human-Centered Computing Lab. In 2019, she received her Ph.D. in Engineering from the University of Bremen under the supervision of Professor Dr.-Ing. Axel Gräser. Her Ph.D. thesis is titled "Robot learning from human demonstrations for human-robot synergy". Before that, she received her M.Eng. degree in Electrical and Computer Engineering and her M.Sc. degree in Automation Systems, both from the National Technical University of Athens (NTUA), in 2012 and 2014, respectively. Her primary research interests are in the fields of Robot Learning from Human Demonstrations, Human-Robot Interaction, and Assistive Robotics, with a special focus on Enhancing Human Performance.