Subproject 4 focuses on improving human-robot interaction. The aim of the subproject is to increase acceptance of the robot through mutual understanding and simple user interfaces. To this end, various user interfaces for human input, as well as ways of representing the state of the robot system, are examined, evaluated, and implemented. Beyond missing user interfaces, the acceptance of robotic systems also suffers from a lack of cognitive abilities, which makes interaction more difficult. A further goal is therefore to give the robot a certain ability to adapt to humans.
The first work package deals with the flow of information from human to robot. Intuitive user interfaces, such as gesture control and programming by demonstration, are evaluated and implemented.
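A gesture-based interface of this kind can be reduced to a mapping from recognized gestures to robot commands. The following sketch is purely illustrative; the gesture labels and command names are assumptions, not the interface actually used in the subproject.

```python
# Hypothetical mapping from recognized gesture labels to robot commands.
# Gesture names and the command set are illustrative assumptions.
GESTURE_COMMANDS = {
    "wave": "stop",
    "point_left": "move_left",
    "point_right": "move_right",
    "thumbs_up": "confirm",
}

def gesture_to_command(gesture):
    """Translate a recognized gesture label into a robot command.

    Unknown gestures map to 'idle' so the robot never acts on an
    unrecognized input.
    """
    return GESTURE_COMMANDS.get(gesture, "idle")
```

Defaulting unknown gestures to an inert command is one simple way to keep misrecognitions from triggering unintended robot motion.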
The second work package deals with the flow of information from the robot system to the human. To simplify interaction with the robot for the user, methods and technologies with which the system state of the robot can be represented are examined. Projections onto the floor, for example, can show people areas they should stay clear of. This gives the user a better understanding of the system and thus eases the interaction.
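One way to realize such a projection is to map each robot state to a colored floor zone around the robot. The states, colors, and zone radii below are assumptions made for illustration only.

```python
# Illustrative sketch: representing the robot's state as a colored
# floor zone. State names, colors, and radii are assumptions.
from dataclasses import dataclass

@dataclass
class FloorZone:
    color: str       # color projected onto the floor
    radius_m: float  # radius of the zone around the robot, in meters

# Faster robot states get larger, more alarming zones.
STATE_ZONES = {
    "idle":      FloorZone("green",  0.5),
    "moving":    FloorZone("yellow", 1.0),
    "fast_move": FloorZone("red",    2.0),
}

def zone_for_state(state):
    # Unknown states fall back to the most conservative (largest) zone.
    return STATE_ZONES.get(state, STATE_ZONES["fast_move"])
```

Falling back to the largest zone for unknown states errs on the side of keeping the worker out of the robot's reach.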
In the third work package, the results from WP 1 and WP 2 are combined to enable a dialogue between human and robot system. Direct feedback from the system to a human input lets the person quickly grasp the system's reaction and respond accordingly. Haptic feedback, for example, could indicate an imminent collision.
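Such a dialogue can be sketched as a dispatch from system events to feedback on one of the output channels, so every input or system reaction is immediately acknowledged. The event and channel names here are hypothetical, not those of the actual system.

```python
# Minimal sketch of a direct-feedback loop: each system event is
# answered on an output channel the human can perceive immediately.
# Event names and channels are hypothetical assumptions.
def feedback_for_event(event):
    """Return a (channel, message) pair for a system event."""
    if event == "collision_imminent":
        return ("haptic", "vibrate")         # e.g. a vibrating handle
    if event == "command_accepted":
        return ("projection", "show_check")  # visual confirmation on the floor
    return ("display", "unknown_event")      # catch-all on the display
```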
In the fourth work package, the capabilities of the robot system are extended so that human intentions can be interpreted from human movements and the robot's next steps can be initiated accordingly. If the robot is about to hand over a component to a worker, it must infer from the worker's posture and movements when he or she is ready to accept it. This ensures that the human is not disturbed in his or her work by the robot system, which in turn increases acceptance.
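A very reduced version of such an intention check could classify the worker as ready for a handover when the tracked hand is close to the handover point and nearly motionless. The heuristic and both thresholds below are illustrative assumptions, not the subproject's actual method.

```python
# Hedged sketch: inferring readiness for a handover from tracked hand
# positions. The "close and still" heuristic and the thresholds are
# illustrative assumptions.
def ready_for_handover(hand_positions, handover_point,
                       max_dist=0.4, max_motion=0.02):
    """hand_positions: list of (x, y, z) samples of the worker's hand.

    Returns True if the latest hand position is within max_dist meters
    of the handover point and the hand moved at most max_motion meters
    between any two consecutive samples.
    """
    if len(hand_positions) < 2:
        return False
    # Distance of the latest sample to the handover point.
    x, y, z = hand_positions[-1]
    hx, hy, hz = handover_point
    dist = ((x - hx) ** 2 + (y - hy) ** 2 + (z - hz) ** 2) ** 0.5
    # Largest step between consecutive samples (a proxy for motion).
    motion = max(
        ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5
        for a, b in zip(hand_positions, hand_positions[1:])
    )
    return dist <= max_dist and motion <= max_motion
```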
The fourth subproject faced the challenge of providing communication channels between human and robot to enable human-robot interaction. Several input options, such as gestures, speech, and a display, were investigated, and some of them were implemented. Output information from the robot was projected onto the floor by an integrated projector or shown on a display. Using one of the implemented channels, a human can give ad-hoc orders to the robot system. Furthermore, we worked on a programming system into which the robot program can be integrated. For possible emergency scenarios of the platform, we also studied whether the platform can be integrated into an augmented-reality teleoperation setup, in which a teleoperator can remotely command the system and step in if errors occur.

At the beginning of the project, the demonstrator had no ability to recognize humans and their activities in its environment. Since a worker in a shelf warehouse could be near a robot without the robot noticing, we worked on a method to recognize humans and their activities: the human is first coarsely detected, and his or her movements are then analyzed camera-based.
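The coarse, camera-based movement analysis can be illustrated with simple frame differencing: a grid cell counts as containing movement when its intensity changes noticeably between two frames. Real systems use far more robust detectors; this sketch, with frames as plain 2-D lists of grayscale values, only shows the principle.

```python
# Coarse sketch of camera-based movement detection via frame
# differencing. Frames are 2-D lists of grayscale values; a cell is
# flagged when its intensity change exceeds a threshold.
def moving_cells(prev_frame, curr_frame, threshold=30):
    """Return (row, col) positions whose intensity changed noticeably."""
    cells = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                cells.append((r, c))
    return cells
```

The flagged cells give a rough location of the moving person; a subsequent step could then analyze the trajectory of these cells over time.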