A brief history of the electronic personality

We live in a time when robots and systems equipped with artificial intelligence (AI) have begun to replace humans, which is particularly evident in the labour market. Automation makes it possible to replace not only manual workers but also knowledge workers such as lawyers or brokers. By 2017, Goldman Sachs in New York had kept only two of its 600 brokers, entrusting the rest of the work to around 200 computer engineers. According to one of the bank's executives, Marty Chavez, four brokers can be replaced by one computer and a programmer, with the same or even better results.

Robots can do a lot: they can be trained, more and more of them can make decisions on their own, and they can even perform creative activities that were previously an exclusively human domain, such as composing music, writing articles or painting. This raises the question of how to prepare society for the changes associated with the introduction of AI into ever larger areas of our lives, including changes in the labour market. Bill Gates has suggested taxing robots, arguing that work done by a machine should be taxed in the same way as work done by a human being. Others, like Elon Musk, advocate granting people a so-called unconditional basic income. The idea is that every person would receive the same amount of money, regardless of age, financial situation or earnings, sufficient to cover basic needs such as housing, food and clothing. Musk predicts that this solution will become necessary because there will be less and less work, and fewer and fewer professions, that a person can do.

From January 2017 to the end of 2018, an experiment was conducted in Finland in which participants received a basic income of 560 euros per month. In 2019, Finland presented the first results: the basic income did not have a positive effect on employment, although it improved the beneficiaries' sense of well-being. In this respect the experiment fell short of its goal, as the payments did not motivate participants to seek work.

Automation is predicted to change the whole world, and the new industrial revolution we are currently witnessing is likely to affect every layer of society. AI and autonomous systems (those capable of making and implementing decisions regardless of external control or influence) also give rise to a number of new legal challenges.

International and EU bodies are also analysing the problems that artificial intelligence may bring. In 2017, the Civil Law Rules on Robotics were adopted: the European Parliament resolution of 16 February 2017 with recommendations to the Commission on civil law rules on robotics (2015/2103(INL)). The resolution explicitly states that, since artificial intelligence is likely to surpass human intellectual capacity in the long term, it is necessary to consider introducing a legal instrument giving robots a special legal status, so that at least the most developed autonomous robots could be given the status of electronic persons responsible for remedying any damage they might cause, and so that electronic personality could possibly apply in cases where robots make autonomous decisions or otherwise interact independently with third parties. The document also calls on machine builders to apply the laws of robotics formulated by Isaac Asimov in his short story Runaround (1942). They read as follows:

1. A robot must not injure a human being, nor, by failing to act, allow a human being to be harmed.
2. A robot must obey human orders, unless they conflict with the First Law.
3. A robot must protect itself, unless doing so conflicts with the First or Second Law.
The resolution also includes a code of conduct for researchers and designers, setting out basic ethical principles to be observed when creating, programming and using robots. It proposes incorporating these principles into Member States' legislation.

The document was challenged. Experts dealing with AI in fields such as ethics, law and medicine sent an open letter to the European Commission expressing concern about granting robots the status of electronic persons. They argued that the legal construction of personality is tied to the natural person (and to the legal person, which is directly related to human beings), so a robot with such status would hold human rights, such as the right to dignity or the right to citizenship, which would be contrary to the EU Charter of Fundamental Rights and to conventions protecting human rights.

To continue the discussion on electronic personality, we should first answer the question of what this term actually means.

The term legal personality commonly refers to the capacity to be a subject of rights and obligations and to perform legal actions on one's own behalf [1]. Currently, it applies to human beings (natural persons) and to legal persons (e.g. companies), which as a rule acquire legal personality upon entry in the relevant register. Legal persons act (shape their rights and obligations) through their managers, acting as the bodies of the legal person.

For the purposes of further discussion, it should be stressed that the term electronic personality will refer only to robots equipped with artificial intelligence.

Why is it important to give robots this status? There are several reasons. The main one is to determine who is liable for damage caused by a robot. Until now, this construction has applied only to humans. Can a robot that makes autonomous decisions be responsible for its actions? Can it be held liable for damage to property or for harming people?

Some argue that liability could fall on the person who uses the thing, but is that the right construction when the robot makes decisions itself and the owner has no control over it? The second option is to hold the creator of the machine liable. But can the creator really be held responsible when the robot makes decisions in situations the creator never foresaw? Or would the third option, making the robot itself responsible for its actions, be the easiest and fairest? Granting robots electronic personality would raise further legal questions: does the robot then have rights and obligations? Can a robot hold property rights? Can it pay taxes like a human?

Why can liability be complicated? Because there are two possible cases. In the first, the operator of the machine uses it carelessly or uses it to cause damage; liability is then easy to determine and rests with the person responsible for the robot. The second case is more complicated and occurs when the robot acts independently and behaves differently than its creator anticipated. In that case, liability could be borne by an insurance company. It has been proposed that a special insurance fund be set up, linked to the machine, from which compensation would be paid to those harmed by the robot's operation. It has also been proposed to create a special register of robots showing who is responsible for each machine and who pays for its insurance.

But liability is not the only problem. Electronic personality would also touch on obligations, such as contracts concluded by robots. Would a contract be binding if a robot entered into it with a human being? Can an owner be represented by a robot that concludes a contract on the basis of an autonomous decision? Granting robots the status of legal persons would make these problems much easier to resolve. Perhaps a robot's actions could be treated like those of a legal entity, independently of its owner. A machine, like a legal entity, would serve its owners to achieve a certain purpose, and its personality, as with legal persons, would be a kind of legal fiction.

In 2017, following this trend of empowerment, the humanoid robot Sophia, produced by Hanson Robotics of Hong Kong and equipped with artificial intelligence, became a citizen of Saudi Arabia. The robot quickly became popular and began to give interviews all over the world.

Recently, at the end of 2019, the European Commission's Expert Group on Liability and New Technologies published the report Liability for Artificial Intelligence and Other Emerging Digital Technologies, which states that for the purposes of liability it is not necessary to give autonomous systems legal personality: damage caused by even fully autonomous technologies can generally be reduced to risks attributable to natural or legal persons. A similar approach has been proposed in Poland, where the draft Policy for the Development of Artificial Intelligence in Poland for 2019-2027 states that Poland stands with those countries that refuse to grant AI systems citizenship or legal personality. In the draft's view, this concept is contrary to the idea of human-centric AI, to the current state of development of AI systems, and ultimately to the higher status of animals over machines. Moreover, Poland favours the concept of human supremacy over AI systems, and thus the responsibility of humans, or of legal persons founded and managed by humans. The private international law regime should likewise not allow an artificially intelligent "legal person" to participate actively in legal transactions.

This matter may have many consequences. Some scholars oppose granting robots rights and duties that so far could be granted only to humans, explaining that it would be difficult to determine whether robots can carry such legal goods as physical integrity and privacy. They also argue that only when AI develops to the point where a robot can actually understand the meaning of its actions and guide its own conduct will it be able to bear guilt in the way a human being can. At the current stage of development, however, there is neither the need nor the basis for giving AI legal personality.

It is possible that legislators will eventually have to face this problem, but not yet; only when artificial intelligence equals that of human beings.

[1] Legal action: a conventional action (constructed by a legal norm) of a civil law entity.
