Successful team collaboration and interaction with people and other resources in the production environment require flexible and powerful ad hoc networking, holistic environmental awareness and interpretation, and motion planning tailored to the respective team constellation and task. Subproject 2 addresses existing challenges in this field and provides appropriate solutions for ad hoc cooperating robot teams in the production environment. These solutions will be evaluated in cooperation with Subproject 5 and implemented in the context of the FORobotics demonstrators (see Subproject D).
The goals of the subproject include the realization of a service-based, interoperable communication architecture for the various resources; real-time, holistic environment modeling through fusion of the platform-mounted sensors (including 3D cameras, laser scanners, ultrasonic and tactile sensors, as well as system encoders); and the fusion of these sensor data with information from infrastructure-based localization, which comprises both ceiling cameras and radio-based systems. Trajectories for the motion of both the mobile platform and the manipulator coupled to it are derived from the environment model. The planning concept is then extended into a combined motion planning approach to allow for optimal interaction. In addition to executing movements optimized for the respective task, one of the focal points is the development of movement behavior that is particularly well suited to humans.
The aim of the first work package is to provide an interoperable communication and processing structure for the various systems and resources in the production environment of the mobile platforms. An appropriate implementation is derived from investigations of the principal actors (e.g. the production control system, other mobile or stationary robots, machining centers, and workers with smart devices), of the boundary conditions determined by the production task (e.g. determinism), of environmental constraints (e.g. electromagnetic fields), and of technological boundary conditions (e.g. data throughput). In addition to real-time-capable protocols, this work package also addresses aspects of preprocessing as well as the filtering and prioritization of communication contents.
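The filtering and prioritization of communication contents can be pictured as a priority queue in front of the outbound channel: low-value traffic is dropped before it loads the network, and urgent messages overtake routine ones. The following is a minimal, illustrative sketch; the names (`PrioritizedChannel`, `publish`, `next_message`) and the numeric priority scheme are assumptions made for the example, not part of the project's actual architecture.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Message:
    sort_key: int                       # negated priority -> max-heap behavior
    seq: int                            # insertion order breaks ties (FIFO)
    topic: str = field(compare=False)
    payload: dict = field(compare=False)

class PrioritizedChannel:
    """Outbound queue that filters and prioritizes communication contents."""

    def __init__(self, min_priority=0):
        self._heap = []
        self._seq = count()
        self.min_priority = min_priority  # threshold for information filtering

    def publish(self, topic, payload, priority):
        """Enqueue a message; drop it if its priority is below the threshold."""
        if priority < self.min_priority:
            return False                  # filtered out before transmission
        heapq.heappush(self._heap, Message(-priority, next(self._seq), topic, payload))
        return True

    def next_message(self):
        """Return the highest-priority pending message, or None if empty."""
        return heapq.heappop(self._heap) if self._heap else None
```

In this sketch an emergency-stop message published with a high priority is delivered before routine pose updates, while debug chatter below the threshold never enters the queue.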
The aim of the second work package is a robust and, as far as possible, comprehensive platform-based environment perception and its interpretation for downstream processes. This includes the detection and localization of objects as well as the prediction of object movements. The environment model is built by suitable fusion of the mobile platform's sensor data (e.g. 3D cameras, laser scanners, ultrasound, encoders). In particular, this work package addresses the assessment and stochastic modeling of the sensors, the detection of unmapped objects (identification, classification), and the localization and tracking of moving objects (people, mobile platforms) from the perspective of the mobile platform.
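Tracking a moving object and predicting its motion is commonly realized with a recursive state estimator. As a hedged illustration of the principle, here is a minimal one-dimensional constant-velocity Kalman filter in pure Python, with the 2x2 matrix algebra written out by hand; the class name `Track1D` and the noise parameters are assumptions chosen for the sketch, not the project's actual tracker.

```python
class Track1D:
    """Constant-velocity Kalman filter for one spatial dimension.

    State x = [position, velocity]; only the position is measured.
    """

    def __init__(self, pos, vel=0.0, p=1.0):
        self.x = [pos, vel]              # state estimate
        self.P = [[p, 0.0], [0.0, p]]    # state covariance

    def predict(self, dt, q=0.01):
        # x' = F x with F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        P = self.P
        # P' = F P F^T + Q, with Q = q * I as simple process noise
        self.P = [
            [P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
             P[0][1] + dt * P[1][1]],
            [P[1][0] + dt * P[1][1],
             P[1][1] + q],
        ]

    def update(self, z, r=0.1):
        # Position-only measurement: H = [1, 0], measurement noise r
        y = z - self.x[0]                            # innovation
        s = self.P[0][0] + r                         # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s  # Kalman gain
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        P = self.P
        # P' = (I - K H) P
        self.P = [
            [(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
            [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]],
        ]
```

Fed with position measurements of an object moving at constant speed, the filter converges on both the position and the (unmeasured) velocity, which is exactly what the motion-prediction step downstream needs.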
At critical, dynamically changing locations on the shop floor, such as intersections and crosswalks, information from the complementary perspective of the infrastructure can help to avoid or resolve potentially dangerous conflict situations. Work package 3 examines how information from sensors permanently mounted in the infrastructure (e.g. ceiling cameras and radio nodes) can be used to extend the situational picture perceived locally by the platform.
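One simple way to merge an infrastructure-side observation (e.g. from a ceiling camera) with the platform's own estimate is inverse-variance weighting of the two independent position estimates: the more certain source contributes more, and the fused uncertainty is smaller than either input. The function below is an illustrative sketch under that assumption, not the project's actual fusion scheme; the `(position, variance)` tuple format is invented for the example.

```python
def fuse(platform_est, infra_est):
    """Inverse-variance fusion of two independent scalar position estimates.

    Each estimate is a (position, variance) pair; returns the fused pair.
    """
    (x1, v1), (x2, v2) = platform_est, infra_est
    w1, w2 = 1.0 / v1, 1.0 / v2          # weights: inverse variances
    x = (w1 * x1 + w2 * x2) / (w1 + w2)  # precision-weighted mean
    v = 1.0 / (w1 + w2)                  # fused variance (always smaller)
    return x, v
```

With equal variances the result is the plain average; with unequal variances the fused position is pulled toward the more reliable source, which is why an infrastructure view at an intersection can sharpen the platform's locally perceived picture.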
Automated local path planning for the platform manipulator and the mobile platform based on a given task is an essential prerequisite for autonomous, value-adding systems. The aim of these two work packages is therefore the automated planning of movements for team collaboration under real-time conditions and the planning of avoidance strategies (e.g. speed reduction, trajectory modification), including appropriate collision detectors and continuous motion prediction at the platform level. In addition, the automated consideration of movement restrictions (e.g. due to the handled object, the environment or the task) is addressed. Compared to previous approaches, the human-robot team aspect is taken into account to a special degree in these work packages.
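The speed-reduction avoidance strategy mentioned above can be sketched as follows: check the predicted minimum separation between the robot's trajectory and a human's predicted trajectory, then scale the commanded speed from full (outside a comfort zone) down to a stop (at the safety distance). The function names and the threshold values are assumptions for this sketch, not the project's actual parameters.

```python
import math

def min_separation(traj_a, traj_b):
    """Smallest distance between two trajectories sampled at equal time steps."""
    return min(math.dist(p, q) for p, q in zip(traj_a, traj_b))

def speed_scale(traj_robot, traj_human, safety=0.5, comfort=1.5):
    """Avoidance by speed reduction.

    Returns a factor in [0, 1] for the commanded speed: full speed when the
    predicted separation stays outside the comfort zone, a linear slow-down
    inside it, and a full stop at or below the safety distance.
    """
    d = min_separation(traj_robot, traj_human)
    if d <= safety:
        return 0.0
    if d >= comfort:
        return 1.0
    return (d - safety) / (comfort - safety)
```

The same structure extends to trajectory modification: instead of scaling speed, the planner would reject a candidate trajectory whose minimum separation falls below the safety distance and search for an alternative.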
The aim of the last work package within Subproject 2 is to increase the responsiveness of the platform and its manipulator to each other, as well as of the different platforms to one another, by combining the two local planning processes and by synchronizing local and global path planning. Another key objective of this work package is the optimization of team and individual movements with regard to high user acceptance. This is accomplished by implementing behaviors for the mobile platform and the team derived from user studies (see SP 5). Finally, WP 6 examines strategies to avoid system deadlocks (e.g. through the appropriate use of right-of-way rules for higher-prioritized systems).
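A right-of-way rule for deadlock avoidance can be illustrated as a small arbiter at a shared resource such as an intersection: all waiting platforms yield to the one with the highest priority, so no circular wait can form. This is a hedged sketch under assumed names (`Intersection`, `request`, `grant`) and an assumed integer priority scheme, not the project's actual arbitration protocol.

```python
class Intersection:
    """Grants right of way to the highest-priority waiting platform.

    Because exactly one platform is granted passage at a time and all others
    yield, circular-wait deadlocks at the shared resource are avoided.
    """

    def __init__(self):
        self.waiting = {}  # platform id -> priority

    def request(self, platform, priority):
        """Register a platform as waiting for the intersection."""
        self.waiting[platform] = priority

    def grant(self):
        """Grant passage to the highest-priority waiter; None if empty.

        Ties are broken deterministically by platform id so that two equally
        prioritized platforms never block each other indefinitely.
        """
        if not self.waiting:
            return None
        winner = max(self.waiting, key=lambda p: (self.waiting[p], p))
        del self.waiting[winner]
        return winner
```

Repeatedly calling `grant` drains the waiting set in strict priority order, which is the essence of the right-of-way rule for higher-prioritized systems mentioned above.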
Subproject 2 investigated and developed substantial parts of the autonomy capabilities required by mobile robot teams, addressing the associated research questions on sensor-based perception, the fusion of sensor data and its integration into a local environment model. Capabilities for computing driving trajectories and for navigation were also developed. Furthermore, a safety system for mobile robots was designed, evaluated and realized, and parts of it were tested on the demonstrator. In addition, combined networking and communication methods were developed and tested using modern robot frameworks (such as ROS) in compliance with industry standards (such as OPC UA) in a heterogeneous distributed system. The autonomy capabilities thus provided serve in the use case for the recognition, localization and state estimation of objects and humans, for environment perception, and for the intelligent local navigation of the platform and the trajectory planning of the attached manipulators derived from it. The research considers single mobile robots as well as the human-robot and robot-robot team constellations addressed by FORobotics. The research and development results were integrated into the realization of the demonstrator and were evaluated in an application-oriented way in the use-case scenarios.