MOSCOW, August 24 – PRIME. Russian developers working with artificial intelligence should use special guidelines to prevent ethical problems from arising later. This was stated by Elena Suragina, head of the regulatory support group of the MTS ecosystem, at the Technoprom-2022 forum.
According to her, the task of artificial intelligence ethics is to determine, for each new situation, under what circumstances and conditions that situation is acceptable in human society.
“The problem I would like to highlight is the difficulty of translating social concepts – goodness, justice – into the formalized realm of engineering practice. Developers themselves say that an AI code of ethics is not enough; they need an unambiguous understanding of ethical problems and guidance on how a developer should act so as not to bring discord into society and to treat people fairly,” the expert noted.
As an example, Suragina cited a situation at a foreign company that allowed artificial intelligence to scan the personal content of smartphone users for scenes of violence against children.
“It is very important to determine where the ethical line lies, because on one side stands the user's right to privacy, and on the other, children's right to safety,” she explained.
The Code of Ethics in the field of artificial intelligence was adopted in October 2021 as part of the international forum “Ethics of Artificial Intelligence: The Beginning of Trust.” The document establishes general ethical principles and standards of conduct for companies that use artificial intelligence technologies in their activities or create products based on them. The recommendations in the code apply to neural network systems used exclusively for civilian purposes.