
As human-robot interactions become commonplace, MEPs stress that EU-level rules are necessary to guarantee a common standard of safety and security. © AP Images / European Union – EP
Robots are the technology of the future, but the current legal system is unable to handle them. This generic claim is often the premise for considerations about granting rights (and liabilities) to these machines at some ill-defined point in the future. It is certainly necessary to discuss the adequacy of existing regulation in light of new technologies, but this ontological approach is incorrect. A functional approach should be adopted instead, identifying:
- which rules can be applied to robots as they currently are;
- what incentives those rules provide;
- whether those incentives are desirable.
The recent European Parliament resolution (henceforth, the Resolution) has great political relevance and strategic importance for the development of a European robotics industry. Its considerations and conclusions are taken into account in the present position statement.
Problems
The first problem when discussing regulation is that of definitions, since one cannot regulate something without first defining it. However, the term robot is non-technical and encompasses a wide range of applications that have very little in common. For this reason, it is impossible to develop a single, unitary set of rules applicable to all kinds of robotic applications; rather, different rules should apply to different classes of devices.
The main question when discussing civil law rules on robotics is that of liability (for damages). Automation may, to some extent, call existing paradigms into question, and increased human-machine cooperation may cause different sets of existing rules to overlap, producing uncertainty, and hence more litigation and greater difficulty in insuring new products.
Connected to the above is robot testing. A clear legal framework for testing robots outside the restricted environment of the laboratory is needed in order to assess the kinds of harm that may emerge with use and their statistical frequency (also for insurance purposes). Likewise, standardisation and the development of adequate, narrowly tailored technical standards for different types of robots are a major concern, both to guarantee product safety and to enable the adoption of possible alternatives to existing liability rules.
A potential false problem when discussing rules on robotics is that of the attribution of personhood. If conceived in an ontological sense, this idea lacks any reasonable grounding in technical, philosophical, and legal considerations. If, instead, it is understood in a purely functional way, the attribution of legal personality (as is the case for corporations) could be open to discussion in certain cases. With regard to certain more specific types of applications, in particular biorobotic devices and the question of human enhancement, their regulation and management become of the greatest importance and probably represent the most relevant bioethical problems of the near future, requiring ad hoc regulation to be adopted.
Finally, the regulation of privacy, data access, and data use is of central importance, not only for the development of a European robotics industry but, more broadly, for a digital single market. All of the issues mentioned fall, to some extent, within the competence – direct or indirect – of the EU and would certainly benefit from regulation adopted at the supranational (European) level.
Responses
The Resolution addresses all of the problems mentioned above with coherent considerations, providing an adequate framework for a technical and legal debate on which specific sets of rules should be adopted at EU level. Overall, it is of the greatest political and strategic importance for defining a modern legal system, favourable to the emergence of new technologies and the proliferation of new businesses.
More precisely:
Definitions: A definition of “robot” is needed, and it must be inclusive. What must be avoided are nominalistic discussions that would inevitably emerge once a regulation is adopted (if the notion of robot were too narrow). Debates on whether a robot must be autonomous or not, controlled or not, embodied or not, are not relevant from a legal point of view. Instead, these characteristics should serve to single out subclasses of robots that can be regulated in a unitary fashion. From there, alongside a broad, overarching definition of robot (which should include software and unembodied AI), narrower definitions should be developed, grouping together applications that show relevant similarities and can be regulated in a unitary way.
Liability: Human-machine cooperation will cause different sets of rules to overlap (namely product liability rules and the traditional principles of tort law). This will lead to high levels of uncertainty and litigation, delaying innovation. With regard to compensation, it is in many cases sensible to separate the function of ensuring product safety from that of compensating the victim. This may justify the adoption of different, alternative solutions: liability exemptions for users and/or manufacturers; the creation of automatic compensation funds (privately or publicly financed); compulsory insurance schemes. More radically, the shortcomings of existing rules (in particular product liability rules) may suggest replacing a defect-based rule with a risk-management approach (based on strict liability rules) imposed on the party best placed to minimise the cost and acquire insurance (Resolution nn. 53, 55). A one-stop-shop approach may be reasonable, preventing complex litigation over the apportionment of liability among the various players involved. Which solution is preferable depends on the class of applications considered, the market for those products, and the possibility of handling those risks through insurance (Resolution nn. 57-59).
Testing: A uniform set of rules allowing testing outside laboratories, even in human-inhabited environments, must be adopted, defining clear standards (in particular with regard to safety, insurance, and the management of the experiment), thereby reducing the discretionary powers of local authorities (Resolution n. 23).
European standardisation and a robotics agency: Standards are the most effective way to ensure high levels of product safety and to provide ex ante certainty to manufacturers that comply with them (Resolution n. 22). However, the time required to adopt a new standard, and its scope, are incompatible with the current pace of technological innovation. A European robotics agency, such as the one suggested by the Resolution (nn. 15-17), could be of strategic importance in setting a supranational standard, one that could prove influential beyond European borders. Otherwise, other leading economies will try to do the same.
Electronic personhood: As envisaged by the Resolution, this concept is purely functional and is intended to facilitate the registration, insurance, and management of certain devices (in particular unembodied AI) through a legal tool equivalent to the one used for corporations (so-called legal personality); see Resolution n. 59, lett. e) and f).
Human enhancement: The use of robotics to overcome human limitations could become problematic given the absence of clear ethical principles, rules, and criteria that could help discern what kinds of manipulation of the human body should be permitted. The constitutional principles of human dignity, equality, and freedom of self-determination, as framed in today's broader bioethical debate, are in themselves insufficient, and narrower criteria must be adopted. The legal grounds for EU intervention in this field are less obvious than for all the other issues mentioned, but they can be found in the free movement of EU citizens, which suggests, to some extent, a uniform framework. As for the content of these principles, human dignity must be understood as objective and external, thereby limiting self-determination, and the reversibility of interventions on the body must also be taken into account.
Privacy and the free flow of data: Privacy cannot simply be ensured through informed consent. Consent is almost never truly informed, and the very possibility of dissent is limited if one wants to use a service or device that requires the collection of personal data in order to operate. On the one hand, the "privacy by design" principle set out in current EU regulation should be narrowed down through the adoption of specific standards, specifying what satisfies those criteria in different classes of applications (see recommendations nn. 17-21). On the other hand, consumers should be compensated for having allowed access to and use of their data – private and anonymised – through post-sale services, enriching the after-sale duties imposed on the producer.
Andrea Bertolini
Andrea Bertolini is an assistant professor of private law at the Dirpolis Institute of the Scuola Superiore Sant'Anna in Pisa …