- Time: 18:00-21:00
- Location: The Library Pub
Speaker 1: James Martin (School of Electrical and Electronic Engineering, University of Leeds)
Title: Robotic Endoscope Autonomy
Abstract: Research into autonomy for medical robotics targets clinically meaningful milestones, with each successive milestone demanding a significant increase in robotic intelligence. In the context of robotic control for magnetic endoscopes, autonomy is crucial for clinical viability: it allows users to command endoscope motion easily while keeping procedure times competitive. Improving autonomy will therefore let patients receive the healthcare benefits of robotic endoscopes sooner. This talk presents work on autonomous navigation for robotic endoscopy, followed by more recent work on automatic tissue recognition and autonomous targeted biopsy. It concludes with a discussion of how to reach the next level of autonomy for robotic endoscopy.
Speaker 2: Jose Sosa Martinez (School of Computing, University of Leeds)
Title: Unsupervised 3D Pose Estimation
Abstract: Representing articulated objects, such as humans and animals, with a symbolic representation in order to estimate their 2D and 3D poses has been a problem since the early days of computer vision. Early deep learning methods typically framed the task as regression of body joint positions. However, keypoint regression and subsequent state-of-the-art approaches require access to a considerable amount of annotated data, which becomes particularly challenging for 3D pose estimation. In contrast to 2D data, collecting 3D annotations for body keypoints is laborious, time-consuming, and expensive: the process typically requires complex acquisition setups, e.g. sensors, camera rigs, and wearable devices. Hence, the current challenge for deep learning approaches to 3D pose estimation is to exploit the abundance of unannotated 2D data, e.g. images and videos, to learn accurate 3D representations. Learning 3D poses solely from RGB images is, however, far from trivial and remains one of the hardest problems in unsupervised learning.
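The regression framing mentioned in the abstract can be sketched minimally. The snippet below is purely illustrative: the skeleton size, feature dimension, and (random) weights are assumptions for shape bookkeeping, not the speaker's model.

```python
import numpy as np

# Toy illustration of pose estimation as keypoint regression: a linear
# model maps a flattened image-feature vector to 2D joint coordinates.
NUM_JOINTS = 17     # e.g. a COCO-style skeleton (assumed)
FEATURE_DIM = 128   # hypothetical feature size

rng = np.random.default_rng(0)
W = rng.normal(size=(FEATURE_DIM, NUM_JOINTS * 2))  # stand-in for learned weights
b = np.zeros(NUM_JOINTS * 2)

def regress_keypoints(features: np.ndarray) -> np.ndarray:
    """Map an image-feature vector to (NUM_JOINTS, 2) pixel coordinates."""
    flat = features @ W + b
    return flat.reshape(NUM_JOINTS, 2)

features = rng.normal(size=FEATURE_DIM)
pose_2d = regress_keypoints(features)
print(pose_2d.shape)  # (17, 2)
```

In a real supervised pipeline, W and b would be learned from annotated joint positions; the abstract's point is precisely that such annotations are scarce in 3D, motivating unsupervised alternatives.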
Speaker 3: Shuhao Dong (School of Mechanical Engineering, University of Leeds)
Title: How can machine learning and computer vision boost the development of telerehabilitation for the upper limb?
Abstract: Rehabilitation after neurological disease is crucial for patients to regain independence in activities of daily living. With the development of low-cost robotic devices (such as myPAM at the University of Leeds), delivering rehabilitation interventions in the home environment is possible, but several challenges remain. Impairments change dynamically throughout the rehabilitation process, so a sensitive and objective performance evaluation method is needed to track recovery while a robotic device is in use. In addition, measuring off-robot performance in the home environment with high accuracy is another challenge to overcome. Finally, given the impact of COVID-19, telerehabilitation using IoT devices can provide remote monitoring and evaluation. This talk covers two applications that offer potential solutions to these challenges: machine learning-based performance evaluation using low-cost sensors, and a computer vision-based single-camera system that enables off-robot performance tracking.
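One objective performance measure commonly used in the rehabilitation literature (not necessarily the metric used in this talk) is movement smoothness, e.g. a dimensionless jerk-based score computed from sensor position samples. A minimal sketch, assuming uniformly sampled 1D position data:

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless jerk-based smoothness score for a 1D trajectory
    sampled at interval dt. Lower values indicate smoother movement.
    Illustrative only; derivatives are taken by finite differences."""
    positions = np.asarray(positions, dtype=float)
    vel = np.gradient(positions, dt)
    acc = np.gradient(vel, dt)
    jerk = np.gradient(acc, dt)
    duration = dt * (len(positions) - 1)
    amplitude = positions.max() - positions.min()
    # Squared-jerk integral, normalized by duration and amplitude so the
    # score is independent of movement time and extent.
    return np.sqrt(0.5 * np.trapz(jerk**2, dx=dt) * duration**5 / amplitude**2)
```

For example, a minimum-jerk reach scores lower than the same reach with superimposed sensor noise, which is why such metrics are attractive for tracking recovery with low-cost sensors: they respond to movement quality rather than just task completion.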