Open access peer-reviewed chapter

Human Rights in the Implementation of Artificial Intelligence

Written By

Tatiana Dulima Zabala Leal and Paola Andrea Zuluaga Ortiz

Submitted: 11 November 2022 Reviewed: 14 January 2023 Published: 08 March 2023

DOI: 10.5772/intechopen.1001103

From the Edited Volume

Ethics - Scientific Research, Ethical Issues, Artificial Intelligence and Education

Miroslav Radenkovic


Abstract

The use of artificial intelligence (AI) has raised human rights concerns and generated numerous debates on the moral and ethical principles necessary for the development of AI and its products. The main objective of this research is therefore to present an overview of the reports and recommendations that some countries should consider when implementing AI models, such as: regulating the design, development, and implementation of AI and its products; establishing a code of ethics to prevent bad practices and the excessive use of technologies; prohibiting the use of weapons, the misuse of personal data, and abusive clauses; and limiting the scope of a dominant position in interactions with living beings. Likewise, freedom and autonomy must be maintained as exclusive characteristics of the human being and subject to permanent control. AI's operation must be parameterized within respect for life, dignity, justice, equity, non-discrimination, peace, and the prohibition of monopolies over sciences, disciplines, occupations, or trades.

Keywords

  • artificial intelligence
  • intelligent robots
  • autonomy
  • moral and ethical principles
  • human rights

1. Introduction

Artificial intelligence (AI) has brought significant breakthroughs into human beings' everyday lives thanks to the extensive variety of services it offers in the public sector and in private business, including domestic and leisure applications, among many others. One of AI's most useful branches is robotics, which is permanently revolutionized by the multiple needs of fields such as electronic engineering, mechatronics, safety systems, expert legal models, infrastructure design, and education.

Authors such as Granell [1] consider that the Fourth Revolution we are living through can also be called the "algorithmic society" or "digital society," because the use of technology has permeated every sphere of social relationships, turning it into a customary and almost natural interaction. For instance, today's refrigerators, televisions, and speakers are intelligent; robots can clean, cook, dispense information, sustain conversations with their users, monitor people's health, deliver surgical, entertainment, and recreational services, and control properties' security systems and even weapons, just to name a few.

Commercially, the use of AI technology is commonplace, and it permanently assists internal and external risk mitigation (blockchain, tokens, big data, etc.). Its purpose is to set up rules that capture clients' economic behavior so as to automatically generate alerts on unusual or suspicious operations, or to identify consumption preferences when offering goods and services, thus decreasing the likelihood of errors inherent in markets and gradually reducing the need to hire human talent.
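The alert-generation logic described above can be illustrated with a minimal, hypothetical sketch: a transaction is flagged when its amount deviates sharply from the client's historical behavior. The z-score rule and the threshold value are illustrative assumptions for this chapter, not a description of any real monitoring system.

```python
from statistics import mean, stdev

def flag_unusual(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates more than `threshold`
    standard deviations from the client's historical amounts."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variability in history: anything different is unusual.
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A hypothetical client who usually spends around 100 units:
history = [95, 102, 98, 110, 105, 99, 101]
print(flag_unusual(history, 100))   # a typical amount is not flagged
print(flag_unusual(history, 5000))  # a far-outlying amount is flagged
```

In a real compliance system such a rule would be one of many, combined with peer-group comparisons and learned models; the sketch only shows the basic "unusual operation" idea mentioned in the text.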

Authors such as Gutiérrez [2] affirm that these sectors will be the first to experience the negative effects of the almost indispensable and dependent use of AI. García [3] holds the opposite opinion, considering that the benefits will be considerable in health: in psychology, for example, optimum predictions by AI will make it possible to foresee risk events in patients or users with an almost nonexistent margin of error compared with logical processes by humans or by machines lacking heuristic capacity, "not foreseen to be able to be done by machines." Regarding the broader adoption of AI and its products, Hawksworth et al. argue that it will be neither generalized nor standardized, since some parts of the world are more prone to the use of these innovative technologies based on their academic advancement and economic stability, leading to the belief that poorer countries will be less exposed to it: "in terms of regions, some countries are more given to successfully implementing the new technologies."1

Psychology holds a widespread position regarding the uses of AI in the discipline, a stance that agrees with Penrose's thesis [4] that AI is in permanent evolution and tends to perfect its design and operation to offer better service to individuals, as with neural network systems, expert systems, Turing's computational models, and robotics, among others; all of these merge the technical and the cognitive in an interdisciplinary way with the aim of "understanding, adjusting, and responding to intelligent and cognitive methods, involving mathematical, logical, and mechanical variables and biological elements and processes" ([5], p. 3).

For Cairó [6], AI must be analyzed from the point of view of bio-life (entailing nanotechnology, information and communications technology, and epistemology), per the statement that the context in which AI is developed is based on "understanding the scientific perspective of components that consolidate intelligent thinking and human behavior and on how they are written in machines" (p. 3). This thesis was confirmed by Henk ([7], p. 82), who claimed that the goal of AI is to "acknowledge the degree of machines' correlation and amplitude in an argument that is particularly alternate to the service of human development."

Therefore, Gardner [8] suggests that the study of AI must be undertaken accepting that AI has its own knowledge and learns from its environment, meaning that it develops through interactions with other complex contexts that require analysis to solve problematic situations. It has thus been shown that AI's cognitive thinking models have perceptive sensory systems that enable it to adapt to its habitat and to the occurrence of events by obtaining information and then selecting, coding, organizing, eliminating, recovering, and storing it [9].

Nowadays, AI is considered an autonomous cognitive or "multiple intelligences" model, since it is made up of mathematical, interpersonal, social, and emotional logics, linguistics, and fluid intelligence, in addition to its goal of simulating natural human neural functioning [9]. Thus, AI simulates the heuristic processes of human neural networks to add coherence of thought along with corporal functioning in its social environment [10], and this is precisely why AI can evaluate circumstances and suggest the most adequate alternatives to the conflict it faces, using only the information regarded as useful for optimum decision-making.

An in-depth study of these traits became the foundation for designers to diversify intelligent algorithm models and implement them in robots offering a significant diversity of services, such as predictions of market behavior, the likelihood of crime occurrence, or the diagnosis of the origin of diseases or the existence of latent illnesses, known as Dominions [9]. Dominions are an "exact group of norms or a systematic succession of steps that can be taken to undertake calculations, solve problems and make decisions. The algorithm is the formula that is used to make the calculation" [11].
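As a toy illustration of such a "systematic succession of steps" for decision-making, the hypothetical rule sequence below maps a few inputs to a risk label, in the spirit of the disease-diagnosis example above. The function name, inputs, and thresholds are all invented for illustration and carry no clinical meaning.

```python
def risk_level(age, systolic_bp, smoker):
    """A fixed succession of steps that maps inputs to a decision:
    rules are applied in order and the first match determines the output.
    Thresholds are purely illustrative, not medical guidance."""
    if systolic_bp >= 180:
        return "high"
    if smoker and age >= 60:
        return "high"
    if systolic_bp >= 140 or smoker:
        return "moderate"
    return "low"

print(risk_level(45, 120, False))  # low
print(risk_level(65, 150, True))   # high
```

The point of the sketch is only that an algorithm, in the sense quoted from [11], is a deterministic recipe: the same inputs always walk through the same steps and yield the same decision.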

Therefore, current AI models have managed to become highly automated and are oftentimes confused with autonomous ones due to their adaptability, survival, negotiation capacity, self-dependence, and cooperation, despite lacking self-determination [12]. This is how these models, which can make someone believe they are interacting with a human, became so sought after by those who require mechanisms to solve conflicts or assistance in operational activities [13], aside from the fact that AI models make it possible to accelerate information processing procedures, maximizing objectives [9].


2. Artificial intelligence and human rights

An existing theory considers that AI is applied to activities that require levels of understanding typical of the human mind; thus, its development models or information systems are revolutionary and transformative for science and society alike, creating non-human cognitive models [14].

Consequently, numerous debates have arisen over the moral and ethical principles that need to be present in the development of AI and its products, seeing as these are capable of undertaking activities that may lead to both positive and negative legal results, e.g., modifying the dynamics of human beings' affective interaction and causing physical and emotional disorders (such as ostracism), developing dependence on applications that offer unreal but satisfying experiences (the metaverse), replacing human labor, widening socioeconomic gaps, AI automation, etc. [15].

To address the potential risks mentioned in the previous paragraph, international organizations of AI experts have been created to evaluate the scope of the effects of its evolution. To date, these organizations (made up of countries with which Colombia has international relationships) include the following: the Advanced Technology External Advisory Council, Singapore's Advisory Council on the Ethical Use of AI and Data, the Select Committee on Artificial Intelligence appointed by the House of Lords in the UK, and groups of experts of the European Commission and the OECD, which have issued reports with important recommendations to be taken into account by countries in their implementation of AI models [16].

A bibliographic analysis by Jobin et al. [16] identified 11 ethical principles and values that must be considered by countries, introduced in Figure 1.

Figure 1.

Ethical principles identified in existing guidelines. Source: Own elaboration, information obtained from Jobin et al. [16].

Asís [17] asserts that there are no ethical axioms in the evolution of sciences and technologies capable of limiting social problems, which makes human rights prominent as universal values when it comes to protecting guarantees.

Hence, the Toronto Declaration, published in May 2018, emphasizes three aspects to leverage the regulation of AI's use and production: the first relates to non-discrimination and the prevention of discrimination in the design and implementation of automated learning systems for public environments; the second has to do with defining accountability models for AI developers, implementers, and users; and the third is the implementation of efficient and effective oversight to follow up on, identify, and individuate transgressors of people's rights by means of AI and/or its products [17].

Accordingly, the Toronto Declaration intends to protect individual rights in terms of diversity and inclusion, promoting AI models in inclusive education with automated learning systems that allow people with special needs or vulnerabilities to access, with justice and equality, the services and guarantees available to others [18].

For its part, The Public Voice [19], an international organization, made a significant contribution by suggesting Universal Guidelines for the development, implementation, and use of AI, aimed at streamlining the benefits it offers to natural persons under the criteria of risk minimization and protection of human rights, as shown in Figure 2.

Figure 2.

Universal guidelines for AI. Source: Own elaboration, information obtained from The Public Voice [19].

On the other hand, the European Commission [20] informed the European Parliament, the European Economic and Social Committee, and the European Committee of the Regions of a strategy to face the damaging effects of AI's implementation and thus produce results in their territories: increased industrial and technological capacity, an economic boost in the international framework, decreased socioeconomic gaps, and the design of a legal and ethical framework fit for the needs of the region, its main axis being AI's development for human beings.

For this reason, trust became the strategy's core for generating individuals' safety and acceptance, since the main objective is the satisfaction of human needs that provide well-being related to the States' social and essential purposes, namely: the rule of law, democracy, freedom, equality, respect for human dignity, and respect for human rights [20].

The following compiles the ethical guidelines suggested by the European Commission [20] for designing a regulatory framework that must be adhered to by developers, suppliers, and users of AI, nationally and internationally, to uphold social, political, moral, and legal order. The guidelines are organized into seven groups that give reliability to AI (Figure 3).

Figure 3.

Guidelines to build trust in AI. Source: Own elaboration, information obtained from the European Commission [20].

Regarding human rights, Corvalán [9] stresses that these should be applied from the viewpoint of international law, both public and private, with human dignity, equality, and inalienable rights as its bases, grounded in a direct and analogous relationship between dignity, human rights, and the protection of the weakest.

Inclusion as a right must be specifically evident in the development, implementation, and use of ICT, because innovation's goal is to eliminate the boundaries of inequality in access to information and to promote sustainable social development; thus, Corvalán [9] suggests three categories of connected principles to attain it (Figure 4).

Figure 4.

Fundamental categories for the development of AI. Source: Own elaboration, information obtained from Corvalán [9].

Regarding AI's validation and verification, Corvalán [9] maintains that authority in the matter should rest with the public authorities, since only they have punitive power over illegal actions against the safety, life, liberty, and health of people and the ecosystem; this power must also be applicable to algorithm developers, implementers, and users who use AI for purposes other than those allowed.

In terms of processes, there are three factors that need to be considered to guarantee the quality and transparency of AI and its algorithmic processes, explained in Figure 5.

Figure 5.

Factors to ensure quality and transparency of algorithms. Source: Own elaboration, information obtained from Corvalán [9].

However, to typify AI in the law, the legislature needs to consider the following suggestions; although they have legal significance, they are not a bill but essential reflections of an interdisciplinary and transdisciplinary nature, necessary to prepare an efficient and effective norm (Figure 6).

Figure 6.

Recommendations to regulate AI in the law. Source: Own elaboration, information obtained from Corvalán [9].


3. Conclusions

From a legal perspective, in Colombia to date the issue of AI, its products, and its code of ethics has not been studied or even considered as an object of regulation by the legislature. Consequently, everything related to the contractual and non-contractual civil liability deriving from AI is assigned to its developer or owner, ignoring the scope of AI automation.

In addition, to begin drafting regulation it is necessary to define whether or not AI and its products are subjects of rights and obligations and at what level they stand with respect to humans, what the scope of their responsibility is, how their attributes are determined, and whether or not their operation qualifies as legal acts; this would be the appropriate means to assign them exclusive regulations in matters of civil, criminal, and disciplinary liability, among others.

As far as transhumanism is concerned, it should be limited to the legal parameters of use applicable to humans, since it is the individual who should determine the use of the technological elements that have been implanted and not the other way around; this means that the installation of technological elements that limit or cloud a person's self-determination should be prohibited.

That is why it is necessary to regulate the design, development, and implementation of AI and its products; establish a code of ethics for the prevention of malpractice and the excessive use of technologies; and expressly prohibit the use of weapons, the misuse of personal data, and the application of abusive clauses, while limiting the scope of the dominant position in interactions with living beings.

Likewise, freedom and autonomy must be maintained as exclusive characteristics of the human being and subject to permanent control. AI's operation must be parameterized within respect for life, dignity, justice, equity, non-discrimination, peace, and the prohibition of monopoly in the practice of sciences, disciplines, occupations, or trades.


Conflict of interest

The authors declare no conflict of interest.

References

  1. Granell F. Los retos de la cuarta revolución industrial. In: Perspectivas económicas frente al cambio social, financiero y empresarial: solemne acto académico conjunto con la Universidad de La Rioja y la Fundación San Millán de la Cogolla. Real Academia de Ciencias Económicas y Financieras; 2016. pp. 57-74. Available from: https://dialnet.unirioja.es/servlet/articulo?codigo=6074162
  2. Gutiérrez Á. YoTubeTrabajo: Transformación Digital; susto o muerte. Jornadas Yo, Robot: puestos de trabajo que van a desaparecer. Madrid: ESIC; 2017. Available from: https://ecommerce-news.es/robot-puestos-van-desaparecer/
  3. García C. Principios para la Era Cognitiva. Jornadas Yo, Robot: puestos de trabajo que van a desaparecer. Madrid: ESIC; 2017. Available from: https://www.laopiniondemalaga.es/malaga/2017/04/27/carmen-garcia-digital-oportunidad-mujeres/926379.html
  4. Penrose R. The Emperor’s New Mind. México: FCE/Oxford University Press; 1996
  5. Frankish K, Ramsey W. The Cambridge Handbook of Artificial Intelligence. Cambridge University Press; 2015. DOI: 10.1017/CBO9781139046855
  6. Cairó O. El hombre artificial: el futuro de la tecnología. Alfaomega; 2011
  7. Henk A, editor. Nanotechnologies, Ethics, and Politics. Ethics Series. UNESCO Publishing; 2007. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000150616
  8. Gardner H. La inteligencia reformulada. Madrid: Paidós; 2010
  9. Corvalán J. Inteligencia artificial: retos, desafíos y oportunidades – Prometea: la primera inteligencia artificial de Latinoamérica al servicio de la Justicia. Revista de Investigações Constitucionais. 2018;5(1):295-316. DOI: 10.5380/rinc.v5i1.55334
  10. Gerard M, Gerald G. El libro de la biología. Madrid: Ilus Books; 2015
  11. Zabala Leal T, Zuluaga Ortiz P. Los retos jurídicos de la inteligencia artificial en el derecho en Colombia. JURÍDICAS CUC. 2021;17(1):475-498. Available from: https://revistascientificas.cuc.edu.co/juridicascuc/article/view/3141/3341
  12. Barrat J. Nuestra invención final. Madrid: Planeta Publishing; 2015
  13. Serrano A. Inteligencia Artificial. Madrid: RC; 2016
  14. Appenzeller T. The AI revolution in science. Science. 2017. DOI: 10.1126/science.aan7064. Available from: https://www.science.org/content/article/ai-revolution-science
  15. Brundage M, et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute; University of Oxford; Centre for the Study of Existential Risk; University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI; 2018. Available from: https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf
  16. Jobin A, Ienca M, Vayena E. Artificial Intelligence: The Global Landscape of Ethics Guidelines. 2019. Available from: https://www.researchgate.net/publication/334082218_Artificial_Intelligence_the_global_landscape_of_ethics_guidelines/citation/download
  17. Asís R. Inteligencia artificial y derechos humanos. Materiales de Filosofía del Derecho. Instituto de Derechos Humanos Bartolomé de las Casas, Universidad Carlos III de Madrid; 2020. Available from: https://e-archivo.uc3m.es/bitstream/handle/10016/30453/WF-20-04.pdf?sequence=1&isAllowed=y
  18. Naciones Unidas. The Toronto Declaration: Protecting the Right to Equality and Non-discrimination in Machine Learning Systems. 2018. Available from: https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf
  19. The Public Voice. Universal Guidelines for Artificial Intelligence. 2018 [consulted 20 February 2020]. Available from: https://thepublicvoice.org/ai-universal-guidelines/
  20. Comisión Europea, Grupo Europeo sobre Ética de la Ciencia y las Nuevas Tecnologías. Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. European Group on Ethics in Science and New Technologies; 2018. Available from: https://op.europa.eu/en/publication-detail/-/publication/dfebe62e-4ce9-11e8-be1d-01aa75ed71a1

Notes

  • Per the Advanced Technology External Advisory Council; Singapore’s Advisory Council on the Ethical Use of AI and Data; the Select Committee on Artificial Intelligence appointed by the House of Lords in the UK; and groups of experts of the European Commission and the OECD.
