Conference paper Open Access
Full teleoperation of mobile robots during the execution of complex tasks not only demands high cognitive
and physical effort but also yields suboptimal trajectories compared to autonomous controllers. However, deploying
autonomous controllers in cluttered and dynamically varying environments remains an open and challenging problem. This is due to several factors
such as sensory measurement failures and rapid changes in task requirements. Shared-control approaches have been introduced
to overcome these issues. However, existing approaches either exhibit a strong decoupling between human and autonomous control that leaves them
sensitive to unexpected events, or rely on highly complex interfaces accessible only to expert users. In this work, we develop a novel and
intuitive shared-control framework for target detection and control of mobile robots. The proposed framework merges
the information coming from a teleoperation device with a stochastic evaluation of the desired goal to generate autonomous
trajectories while keeping the human in control. This allows the operator to react to goal changes, sensor
failures, or unexpected disturbances. The proposed approach is validated through several experiments both in simulation and
in a real environment where the users try to reach a chosen goal in the presence of obstacles and unexpected disturbances.
Operators receive both visual feedback of the environment and voice feedback of the goal estimation status while teleoperating
a mobile robot through a control pad. Results of the proposed method are compared against pure teleoperation, demonstrating
better time efficiency and ease of use of the presented approach.
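The abstract does not specify how the stochastic goal evaluation is computed. As a purely illustrative sketch (not the authors' method), one common formulation of probabilistic goal inference is a Bayesian update over a set of candidate goals, where each joystick command weights goals by how well they align with the commanded direction; the function and parameter names below are hypothetical:

```python
import math

def update_goal_belief(belief, robot_pos, goals, joy_dir, kappa=4.0):
    """One Bayesian update of the goal posterior.

    belief    : dict goal_name -> prior probability
    robot_pos : (x, y) current robot position
    goals     : dict goal_name -> (x, y) candidate goal position
    joy_dir   : (dx, dy) unit vector of the operator's joystick input
    kappa     : concentration parameter (hypothetical); higher values
                trust the operator's commanded direction more

    The likelihood of each goal is a von Mises-style weighting of the
    angle between the joystick direction and the direction to the goal.
    """
    posterior = {}
    for name, prior in belief.items():
        gx, gy = goals[name]
        dx, dy = gx - robot_pos[0], gy - robot_pos[1]
        norm = math.hypot(dx, dy) or 1.0
        # cosine of the angle between the joystick and goal directions
        cos_a = (dx * joy_dir[0] + dy * joy_dir[1]) / norm
        posterior[name] = prior * math.exp(kappa * cos_a)
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Two candidate goals; the operator pushes the stick toward goal A.
goals = {"A": (5.0, 0.0), "B": (0.0, 5.0)}
belief = {"A": 0.5, "B": 0.5}
belief = update_goal_belief(belief, (0.0, 0.0), goals, (1.0, 0.0))
```

After a few consistent commands the posterior concentrates on one goal, at which point an autonomous planner could take over trajectory generation while the operator retains the ability to override, in the spirit of the human-in-control approach described above.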
A Probabilistic Shared-Control Framework for Mobile Robots