The REINS project designs and investigates haptic communicational interfaces (reins) between a human agent and a mobile robot guide. The reins will facilitate joint navigation and inspection of a space under conditions of low visibility, as occur frequently in fire fighting. The focus is on haptic and tactile human-robot cooperation, as it has been found that a limited visual field and obscured cameras add to the distress of humans working under pressure.

The REINS project aims to map the communicational landscape in which humans (fire fighters, but also the visually impaired) might work with robots, with the emphasis on tactile and haptic interaction. We adapt a semi-autonomous mobile robot for navigation in front of a human. The robot provides rich sensory data and is enabled to probe the mechanical impedances of the objects it encounters. We also design and implement a soft rein (rope), a wireless rein, and a stiff rein (inspired by a guide dog's lead), enabling the human to use the robot to actively probe objects. The project thus creates the means to explore the haptic Human-Robot Interaction landscape. A central research question is whether the information should be explicitly encoded as messages or can remain implicit. In the initial phase of the project, the robot is adapted and the first prototypes of the reins are implemented; the emphasis in this phase is on providing rich data to the human. The second phase is dedicated to surveying the communicational landscape.

Main objectives are:

1. To understand how human participants can learn to associate a perturbation pattern, sent from a robot via a soft/hard/wireless rein, with a given message under varying levels of audio distraction (maximum amplitude of the audio distraction to be 70 dB).

2. To understand the variability in the timing and magnitude of a perturbation pattern encoded by a human in a soft/hard/wireless rein for a given message, under varying levels of audio distraction (maximum amplitude of the audio distraction to be 70 dB).

3. To understand how a robot could learn to interpret messages encoded by a human under varying levels of distraction.
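One way to read objective 3 is as a classification problem: the robot summarises each rein perturbation by a few features (e.g. duration and peak magnitude) and learns which message each feature region corresponds to. The sketch below illustrates this with a minimal nearest-centroid classifier; the feature choice, message labels, and all data are hypothetical, not taken from the project.

```python
from statistics import mean

def train_centroids(labelled_patterns):
    """Compute one feature centroid per message label.

    labelled_patterns: iterable of (features, label) pairs, where
    features is a tuple such as (duration_s, peak_magnitude_N).
    """
    by_label = {}
    for features, label in labelled_patterns:
        by_label.setdefault(label, []).append(features)
    # Average each feature dimension across the samples for a label.
    return {label: tuple(mean(dim) for dim in zip(*samples))
            for label, samples in by_label.items()}

def interpret(features, centroids):
    """Return the message whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Toy training data: (duration_s, peak_magnitude_N) -> intended message.
training = [
    ((0.3, 2.0), "stop"), ((0.4, 2.2), "stop"),
    ((1.2, 0.8), "slow"), ((1.0, 1.0), "slow"),
]
centroids = train_centroids(training)
print(interpret((0.35, 2.1), centroids))  # → stop
```

Human variability under distraction (objective 2) would show up here as spread around each centroid, which is one reason the objectives pair the study of variability with the study of machine interpretation.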

A collaborative project of Sheffield Hallam University and King's College London.

Grant Number EP/I028757/1