Multimodal Human Robot Collaboration
To improve the flexibility and adaptability of HRC, novel multimodal interfaces and gesture control devices are required. In these approaches, conventional interaction techniques are augmented with vision-based, audio-based, and haptic input/output technologies. Recent advances in speech and gesture interfaces are now widely used in domestic settings. Paired with vision-based and haptic interaction systems already in use in industrial settings, these technologies make possible new ways to control collaborative robots.
In this project, Louis will develop a vocabulary of multimodal HRC interaction techniques to accomplish tasks such as specifying the intended path for a robotic tool by manually drawing onto the work surface, instructing the robot through naturalistic speech and gesture, and directly sensing the state of a robotic task through visual, auditory, and haptic feedback. He will also study the needs for human-robot interaction in an authentic work context to ensure that the interaction techniques are appropriate for use in that setting.
Progress so far
A literature review was conducted to identify challenges in integrating human action recognition into a cobotic system. Following this, Louis is developing a pipeline for training a machine learning model that can classify different assembly actions (e.g., using different tools, such as an Allen key or screwdriver, to fasten a bolt). This pipeline will later be used to improve human-robot collaboration.
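The shape of such an action-classification pipeline can be illustrated with a minimal sketch. The feature vectors, action labels, and nearest-centroid classifier below are illustrative assumptions, not the project's actual model or data; they simply show the train-then-classify flow the pipeline would follow.

```python
# Minimal sketch of an assembly-action classifier, assuming each action
# (e.g. fastening with an Allen key vs. a screwdriver) is summarised as a
# small feature vector (here: two hypothetical motion features). A real
# pipeline would use richer sensor data and a learned model.
from statistics import mean


def train_centroids(samples):
    """Compute one centroid feature vector per action label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    # Average each feature dimension across the samples for a label.
    return {label: [mean(dim) for dim in zip(*vecs)]
            for label, vecs in by_label.items()}


def classify(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)


# Toy training data: (feature vector, action label) — values are invented.
training = [
    ([0.9, 0.2], "allen_key_fasten"),
    ([1.1, 0.3], "allen_key_fasten"),
    ([0.3, 0.8], "screwdriver_fasten"),
    ([0.2, 0.9], "screwdriver_fasten"),
]
model = train_centroids(training)
print(classify(model, [1.0, 0.25]))  # → allen_key_fasten
```

In practice the hand-crafted features would be replaced by learned representations from video or wearable-sensor streams, but the train/classify separation shown here carries over directly.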
Upon completing this research, we expect to develop:
- New tools for designing and prototyping multimodal human-robot interactions
- Sensing and output capabilities that enable robotic equipment to employ multimodal interactions
- A framework to guide the design of multimodal HRI, including informing choices of modalities
Principal Supervisor: Dr Marc Carmichael