In recent decades, the path has been paved for complementary approaches to the supervision and control of robot systems that mix automatic control, computer science, and artificial intelligence. The resulting methods are not limited to carrying out actions at a low level of abstraction, such as controlling robot actuators and navigating safely in domestic or outdoor environments, but also tackle higher-level decision-making, such as task allocation within multi-robot teams, as well as task planning and the execution of the resulting plans.
This new paradigm has been successfully pursued by different means, such as addressing task planning and execution problems using discrete event systems (DES), or combining them with continuous state-space systems into hybrid systems to solve task and motion planning problems. Logic-based specifications for these planners have been introduced, using different classes of temporal logic (TL). Even more recently, the control community has turned to machine learning (ML), either by using reinforcement learning (RL) methods or by revisiting the use of neural networks for nonlinear control.
The objective of this Research Topic is to showcase recent work on the supervision, control, and learning of robot tasks, mixing automatic control and artificial intelligence approaches.
Four articles were accepted for the Research Topic.
“Social drone sharing to increase UAV patrolling autonomy in pre- and post-emergency scenarios,” by Bisio et al., addresses the problem of coordinating teams of autonomous drones performing tasks in an area where social interactions with people help improve the autonomy and overall efficiency of the missions. The article presents a cloud-based architecture for social drone missions, algorithms to optimize task performance, and experimental results evaluating the performance of the proposed solution.
In “Learning state-variable relationships in POMCP: A framework for mobile robots,” Zuccotto et al. address the improvement of planning performance under uncertainty by determining the correlations between the hidden state variables of a partially observable Markov decision process (POMDP). The proposed method exploits these correlations by transferring the information acquired by observing some state variables to their unobserved correlated state variables. The POMDP is solved with partially observable Monte Carlo planning (POMCP), and the acquired knowledge is represented with a Markov random field (MRF). Results from realistic simulations in two domains show improvements of the MRF-based, online-adapted POMCP over the non-adapted version. The paper also presents a ROS-based architecture that allows the proposed method to be run on real robot systems.
“Task roadmaps: Speeding up task replanning,” by Lager et al., focuses on optimal and efficient robot task replanning, using a task roadmap that stores information making replanning very efficient. The proposed method outperforms mixed-integer linear programming (MILP) and Planning Domain Definition Language (PDDL) planners in an interesting use case where a mobile manipulator carries out delivery tasks in a warehouse scenario.
The use of deep reinforcement learning (DRL) to learn general skills end-to-end is currently a very popular research topic. In “Network layer analysis for a RL robotic task,” Feldotto et al. propose methods to speed up the learning of the neural networks used in DRL while keeping their performance levels, as well as to reuse them as pre-trained networks in similar tasks performed by robot manipulators with similar kinematics but a different number of joints. To achieve this objective, the authors introduce metrics that evaluate the activation of individual neurons and make it possible to compare the activity of these neurons with that of other neurons in the network. These metrics are used to reduce the redundancy and size of the neural network without considerably reducing its performance.
Author contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Keywords: robot control, artificial intelligence, reinforcement learning, formal methods, task planning
Citation: Lima PU and Iocchi L (2022) Editorial: Supervision, control and learning of smart robot systems. Front. Robot. AI 9:1050237. doi: 10.3389/frobt.2022.1050237
Received: September 21, 2022; Accepted: September 27, 2022;
Published: October 12, 2022.
Edited and reviewed by:
Ahmed Chemori, UMR5506 Laboratory of Computer Science, Robotics and Microelectronics of Montpellier (LIRMM), France
Copyright © 2022 Lima and Iocchi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Pedro U. Lima