Open access peer-reviewed chapter - ONLINE FIRST

Dissecting the Paradox of Progress: The Socioeconomic Implications of Artificial Intelligence

Written By

Kevin Sevag Kertechian and Hadi El-Farr

Submitted: 27 July 2023 Reviewed: 23 February 2024 Published: 06 May 2024

DOI: 10.5772/intechopen.1004872

From the Edited Volume

The Changing Landscape of Workplace and Workforce [Working Title]

Edited by Hadi El-Farr


Abstract

The rapid ascent of artificial intelligence (AI) and other general-purpose technologies has marked the advent of the fourth industrial revolution, triggering substantial transformations in business practices and productivity potential. While these emerging technologies offer numerous benefits, they also present a range of threats, concerns, and challenges. This chapter aims to investigate the dark side of the fourth industrial revolution, based on the available literature. One major concern revolves around employment, encompassing the potential rise in unemployment rates and the emergence of structural unemployment. The set of skills needed for the changing nature of work is significantly different, so rapid reskilling and upskilling are needed to ensure the future employability of the existing workforce. Furthermore, high dependence on machines might lead to major ethical concerns, including, but not limited to, breaches of privacy and discrimination. Moreover, high unemployment might lead to further social and income inequalities, relegating many to the lower class and decreasing their purchasing power, while placing a few in the upper class.

Keywords

  • artificial intelligence
  • fourth industrial revolution
  • discrimination
  • income inequalities
  • unemployment

1. Introduction

According to a survey conducted by McKinsey in 2022, there has been a remarkable surge in the adoption rate of artificial intelligence (AI). In 2017, a mere 20% of respondents reported incorporating AI in at least one business area. However, this figure has more than doubled since then, reaching 50% in 2022 [1].

Undoubtedly, AI is poised to become the dominant force in the business landscape in the foreseeable future. AI has emerged as a highly disruptive innovation, offering vast and unparalleled possibilities for transformative advancements and substantial enhancements across numerous industries [2]. Consequently, an imperative need arises for a meticulous examination of the ramifications of AI across a wide spectrum of organizational hierarchies and societal domains. Equally crucial is the thorough identification and comprehension of the prevailing downsides inherent to AI, including but not limited to the potential reduction in employment opportunities and the emergence of ethical dilemmas (i.e., socioeconomic factors) [3].

Socioeconomic factors are key elements that significantly impact individuals, communities, and societies. These factors shape various aspects of life, including opportunities, well-being, and development. They encompass social and economic conditions that define the fabric of societies, influencing areas such as education, employment, income distribution, healthcare, housing, social mobility, equity, economic development, and environmental sustainability. By understanding and analyzing these factors, we gain valuable insights into the intricate dynamics that affect people’s lives and society. This chapter will center on AI and its impact on various work-related factors, including unemployment, wage disparities, and discriminatory practices.

The development of AI occurs within the context of what Klaus Schwab suggests as the emergence of the fourth industrial revolution (4IR) [4]. This ongoing revolution has led to the convergence of multiple interrelated digital technologies, facilitated by remarkable advancements in computing power and the cost-effective networking of objects. This convergence, combined with the massive volumes of data being generated, catalyzes the rapid growth of AI technologies. As a result, the automation of numerous complex tasks has become possible.

The objective of this chapter is to shed light on the negative aspects that AI brings to various socioeconomic domains. To achieve this, we conducted a comprehensive review of the literature to present a broader perspective on the potential landscape of AI and its adverse consequences on different socioeconomic factors. It is crucial to address these negative effects, as previous studies have predominantly emphasized the positive impact of AI [5, 6, 7, 8], leaving the examination of its negative aspects on socioeconomic factors relatively unexplored.

2. The fourth industrial revolution

Modern history has witnessed three influential industrial revolutions that have had a profound impact on our society [9]. The initial revolution (1760), characterized by the utilization of water and steam power, revolutionized mechanical production. One notable example of this revolution is the development of the steam engine by James Watt, which paved the way for advancements in transportation, manufacturing, and agriculture. The second industrial revolution (1870) emerged with the widespread adoption of electricity and the implementation of mass manufacturing techniques through the division of labor. This period saw remarkable innovations, such as the assembly line introduced by Henry Ford, which revolutionized the automotive industry and accelerated production rates. The third industrial revolution (1969), driven by the integration of information communications technology (ICT) and electronics, ushered in an era of customized and specialized manufacturing processes. One significant example of this revolution is the rise of 3D printing, enabling the production of intricate and tailor-made products with reduced costs and increased efficiency [10]. These industrial revolutions have not only transformed the manufacturing landscape but have also had profound societal implications, shaping economies, lifestyles, and global interconnectedness [9, 11]. They serve as milestones in our history, marking significant shifts in technological advancements and paving the way for further progress in various sectors of the economy.

The fourth industrial revolution is now underway, building upon the digital revolution of the past century. It is characterized by the convergence of technologies that blur the boundaries between the physical, digital, and biological realms. This ongoing transformation represents not merely an extension of the third industrial revolution, but rather a distinct and separate phase, driven by three key factors: velocity, scope, and systemic impact [12]. The velocity of progress in the fourth industrial revolution surpasses anything seen in history. Unlike previous industrial revolutions that followed a linear trajectory, this revolution is unfolding at an exponential pace. Its impact extends far beyond individual industries, disrupting and reshaping virtually every sector worldwide. The breadth and depth of these changes herald a complete transformation of entire systems of production, management, and governance [4]. However, caution is advised. As Schwab aptly emphasizes, the fourth industrial revolution embodies a realm of boundless promises intertwined with an array of potential perils [4].

In his 1907 book “L’évolution créatrice”, Bergson astutely pointed out that “intelligence… is the faculty of making artificial objects, especially tools to make tools, and to vary its production indefinitely” [13]. Taking everything into account, it can be argued that humans were inherently destined to develop artificial intelligence, as we are what Bergson referred to as homo faber. Similarly, Benjamin Franklin used the term “toolmaking animals” to describe the human condition. Sooner or later, AI would have inevitably emerged, although neither Bergson nor Benjamin Franklin could have envisioned the remarkable achievements of contemporary AI.

In the context of the fourth industrial revolution, artificial intelligence emerges as the leading force, offering a multitude of possibilities. While it is important to recognize the presence of other general-purpose technologies (GPTs) in Industry 4.0, such as big data, the Internet of Things (IoT), machine learning, and cyber-physical systems [13, 14], our primary focus in this chapter is on AI. It is worth noting that Industry 4.0 refers to the manufacturing and production systems established during a period dominated by the fourth industrial revolution, and although Industry 4.0 and 4IR are not synonymous, they are complementary as Industry 4.0 is a part of the broader 4IR [13]. Generally speaking, AI is defined as “the knowledge and techniques developed to make machines “intelligent,” that is to say able to function appropriately also through foresight in their environment of the application” [15]. Alternatively, AI can be described as a multifaceted interplay with humans, encompassing four key characteristics [13]:

  1. Standards: AI possesses advantages over humans in certain areas, allowing it to cover a broader range of tasks, particularly evident in sectors like production.

  2. Substitution: AI serves as a complete replacement for human tasks, particularly evident in the automation of low-skilled jobs.

  3. Superiority: human intelligence, including emotional intelligence such as empathy, still surpasses AI because these traits are intricate and challenging to replicate.

  4. Synthesis: humans and AI collaborate harmoniously, combining their strengths to yield enhanced business outcomes.

In the realm of human resource management (HRM), the emergence of artificial intelligence (AI) has proven to be a game-changer, significantly boosting efficiency and effectiveness [14]. Rather than viewing AI as a replacement for human interaction, it should be seen as a strategic tool that enhances it [16]. One notable example is the integration of smart chatbots, which can provide invaluable support in making strategic HRM decisions [17]. By leveraging AI technologies strategically, organizations can revolutionize their HR practices, streamline processes, and unlock new possibilities for success.

The upcoming section will examine the influence of AI on individuals, initially by exploring its impact on HRM in organizations and subsequently by taking a broader perspective to analyze how AI affects socioeconomic factors.

3. AI impact on human resources

3.1 Artificial intelligence and impacts in HRM

The HR function within organizations holds a pivotal role that extends far beyond mere administrative tasks. In today’s context, HR is viewed as a strategic business partner, tasked with ensuring an organization’s success and competitive edge through its workforce. However, the advent of AI has significantly disrupted this traditional paradigm. As tasks that were once performed exclusively by humans are now being efficiently completed by AI, the HR function must adapt to this transformation to remain effective in its mission of providing a skilled and equipped workforce.

The field of artificial intelligence offers a vast array of possibilities, and significant advancements have already been made in domains such as video games [14], vehicles [12], and content generation [15]. Far beyond these few areas, AI can be likened to a true "tidal wave" in the context of the fourth industrial revolution, exerting its influence on every aspect of our lives, both near and far [16]. Hence, a more nuanced perspective emerges regarding the impact of AI on HRM.

The utilization of AI in HRM is situated within a broader framework known as “algorithmic management,” which facilitates the implementation of “people analytics” [17]. People analytics, a practice adopted by organizations globally, involves leveraging employee-related data to enhance workforce management [18]. Through the integration of AI, decision-making processes in HRM have been elevated by leveraging people analytics, leading to an intensified approach known as learning analytics. This approach entails continuous analysis of human tasks within organizations to identify patterns and make predictions [19, 20]. It is safe to say that both scholars and practitioners in the field of AI in management anticipate significantly more positive organizational outcomes from AI than negative ones [21].
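
To make the mechanics of people analytics concrete: at its core, it amounts to fitting a predictive model on employee-related records and using the resulting scores to inform HR decisions. The following Python sketch is a minimal, hypothetical illustration (the features, the attrition label, and the use of scikit-learn are assumptions for illustration, not a description of any system in the studies cited here); it also makes visible where the risks discussed below enter, since the model can only learn patterns present in past data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [tenure_years, overtime_hours_per_month, engagement_score]
X_history = np.array([
    [1.0, 30, 2.5],
    [4.0,  5, 4.0],
    [0.5, 45, 2.0],
    [6.0, 10, 4.5],
    [2.0, 25, 3.0],
    [8.0,  2, 4.8],
])
y_left = np.array([1, 0, 1, 0, 1, 0])  # 1 = employee left within a year

# "Learning analytics" reduced to its simplest form: fit on past outcomes, score current staff.
model = LogisticRegression().fit(X_history, y_left)

current_employee = np.array([[1.5, 35, 2.8]])
attrition_risk = model.predict_proba(current_employee)[0, 1]
print(f"Predicted attrition risk: {attrition_risk:.2f}")
# Any bias or gap in the historical records is reproduced in these scores,
# which is the central concern raised by Giermindl et al. [21].
```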

Several recent studies have highlighted significant limitations of AI-driven decision-making in work settings, especially in HRM functions that directly influence individuals' socioeconomic prospects, such as job promotions and employment.

In their study on AI and decision-making, Bankins et al. [22] discovered that AI is generally perceived as an inadequate decision-maker when the outcome is negative. Participants specifically cited AI’s reliance on incorrect or irrelevant data, its lack of respect or ability to express respect/disrespect, its deficiency in emotional intelligence, and its perceived incompetence in making decisions [22]. However, when AI produced a positive outcome, regardless of whether it contradicted a negative outcome from a human or another AI decision, participants were more inclined to trust it and had higher perceptions of interactional justice [22]. In their 2 × 2 × 6 design, Bankins et al. [22] did not find any significant differences among the six evaluated HRM functions, which included recruitment and selection, training, performance management, work allocation, firing, and promotion. These results are crucial as they offer more insights into the theory of machines [23], which pertains to how people perceive the distinctions between human and algorithmic judgments in terms of input, process, and output [23].

In their systematic literature review, Giermindl et al. [21] synthesized scholars' and practitioners' concerns regarding the limitations of people analytics within the realm of AI and algorithmic management. They emphasized that people analytics should not be seen as a one-size-fits-all solution for improved management. Instead, they acknowledged the shift toward AI-driven, people-centered analytics but cautioned that it currently creates an illusion of control by providing a false sense of certainty and reductionism through the erroneous linking of unrelated events. Giermindl et al.'s [21] research also revealed that the extensive amount of data gathered and analyzed by AI and algorithms tends to reinforce self-fulfilling prophecies rather than focusing on employee competencies. Furthermore, this reliance on past data to predict future events results in a "profound deference to precedent" [24], making the deciphering of the gathered data increasingly challenging, even for specialists.

Moreover, people analytics significantly diminishes employee autonomy, creativity, and decision-making latitude, leading to "reactive chains of action" instead of self-reliant and self-organized behavior aligned with employees' self-determination [21]. Over time, this erosion of trust among employees and in their decision-making abilities becomes increasingly apparent as they feel socially pressured to "listen" to machines, gradually leading to a form of digital Taylorism, in which creative and intellectual tasks are subjected to the same rigid processes as routine work [25]. Nowadays, algorithmic management is applied in HRM for resume screening [26], assessing the fit between employees and tasks [27], establishing performance management [28], and handling compensation [29]. Problems such as discrimination can arise within these processes [30, 31], due to the reinforcement of biases that in turn foster inequalities [32].

Research has shown that AI-based hiring tools may inadvertently discriminate against certain demographic groups. For example, as reported by Dastin [33], Amazon’s AI recruitment tool displayed gender bias, penalizing resumes that included terms commonly used by women. Similarly, a study by Obermeyer et al. [34] revealed that an AI-based algorithm used in healthcare was racially biased, leading to lower care recommendations for Black patients compared to white patients with the same level of health. The issue of discrimination in AI extends beyond hiring. For instance, AI-driven performance evaluation systems might inadvertently reward or penalize employees based on factors unrelated to their actual performance. This can have adverse effects on employee morale, engagement, and career advancement opportunities.
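
One common way to surface such disparities before an AI screening tool is deployed is a simple adverse-impact audit of its outcomes by demographic group. The sketch below is a minimal, hypothetical illustration in Python (the outcome data and group labels are invented; the 0.8 threshold reflects the "four-fifths rule" used in US hiring guidance), showing how selection rates and an impact ratio could be computed.

```python
from collections import defaultdict

# Hypothetical screening outcomes produced by an AI tool: (group, selected)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of candidates the tool selects within each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: selected / total for group, (selected, total) in counts.items()}

rates = selection_rates(outcomes)
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                               # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(f"impact ratio = {impact_ratio:.2f}")
# A ratio below roughly 0.8 would flag the tool for closer review before deployment.
```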

From the HR perspective, there is notable resistance among employees toward AI and algorithmic management in general [35, 36]. This resistance stems from employees’ inherent distrust toward AI, as its rapid integration into various industries has disrupted established working routines [37, 38]. As a result, employees may resort to deviant behaviors, such as knowledge hiding [39], as a means of coping. Unfortunately, this resistance and disruption also take a toll on employees’ overall well-being [40, 41]. Recognizing and addressing these concerns is crucial for fostering a harmonious and productive work environment.

4. AI socioeconomic impacts

As noted by Giermindl et al. [21], basing hiring decisions solely on past data has resulted in the formation of groups composed of individuals from similar social backgrounds. This phenomenon, known as homosocial reproduction, directly promotes homophily and contributes to the emergence of discriminatory practices. The decision-making process is highly biased, further exacerbating these issues [29, 42]. The increasing presence of algorithms in the hiring process poses an even greater risk of discrimination. Similarly, we found in the literature that AI-based surveillance technologies can disproportionately target and marginalize certain communities, leading to privacy concerns and potential discrimination. For example, facial recognition systems have been criticized for their higher error rates on individuals with darker skin tones, leading to potential adverse consequences for marginalized communities [43].

As algorithms become more pervasive, the aforementioned tendencies are expected to intensify. Consequently, social and economic categorization will be reinforced, exacerbating overall inequality and leading to increased social and economic isolation for those who are already marginalized [24, 29, 44]. In the same vein, the digital divide can result in marginalized communities having limited access to AI technologies and the opportunities they provide. Lack of access to AI-driven services and resources can further marginalize already disadvantaged individuals [45].

The widespread adoption of AI technologies in various domains has raised significant privacy concerns. AI systems often require vast amounts of data to train and improve their performance, and this data can contain sensitive personal information. This raises potential risks of data breaches, unauthorized access, and misuse of personal data. One notable area of concern is AI-powered surveillance technologies. Facial recognition systems, for example, can capture and analyze people's faces in real time, leading to concerns about privacy and surveillance. A study by Kroll et al. [46] highlighted that facial recognition systems can suffer from racial and gender bias, potentially exacerbating existing privacy and discrimination issues. AI-powered personal assistants, like virtual voice assistants, have also raised privacy concerns. These systems typically process voice recordings to improve their performance, but this data collection raises questions about user privacy and data retention policies [47]. Similarly, AI-driven cybersecurity systems may raise privacy concerns, as they often process vast amounts of data, including personal information, for threat analysis. It is essential to ensure that privacy regulations are upheld when using AI in cybersecurity [48]; therefore, keeping individual-level information secure is considered a key element [49].
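
Reference [49] points to one concrete technique for keeping individual-level information local: federated learning, in which a shared model is trained by averaging locally computed updates so that raw data never leaves each participant's device. The sketch below is a simplified, hypothetical single-round illustration of that averaging step on a linear model (the data, learning rate, and two-client setup are assumptions; the original federated averaging algorithm iterates over many rounds and many clients).

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=100):
    """One client's update: gradient steps on its own data, which never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Each client holds its own (sensitive) records; only model weights are exchanged.
clients = [
    (np.array([[1.0], [2.0]]), np.array([2.1, 3.9])),  # client 1's local data
    (np.array([[3.0], [4.0]]), np.array([6.2, 8.1])),  # client 2's local data
]

global_weights = np.zeros(1)
client_weights = [local_update(global_weights, X, y) for X, y in clients]
client_sizes = [len(y) for _, y in clients]

# Federated averaging: combine local models, weighting each client by its data size.
global_weights = np.average(client_weights, axis=0, weights=client_sizes)
print("Aggregated model weights:", global_weights)
```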

The rapid computerization witnessed at the turn of the twenty-first century has exacerbated wage inequality, creating a divide between those who have embraced technology and those who have not [50, 51]. A similar trend is expected to emerge with the rise of artificial intelligence [52]. In a 2016 paper, Brougham and Haar introduced the concept of "Smart Technology, Artificial Intelligence, Robotics, and Algorithms" (STARA), estimating that by 2025 one-third of existing jobs could be replaced by STARA [58]. While the previous discussion focused on the impact of AI and algorithmic management on HRM professionals and employees, it is crucial to recognize that individuals' social lives will also be greatly affected. Surviving and thriving in the 4IR era will require significant readjustment. This leads us to ponder whether AI will surpass humans in daily tasks to the extent that a significant portion of the workforce will become obsolete, or whether alternative solutions can be envisioned to complement the existing workforce. A significant illustration of potential employment disparities arises from the replacement of both skilled and unskilled jobs, which could lead to increased unemployment rates [53]. Also, according to Su [54], AI will be a major contributor to structural unemployment, a type of unemployment that arises from a mismatch between the skills and qualifications of job seekers and the requirements of available job opportunities within an economy [55].

The previous three industrial revolutions provide some evidence of shifting employment patterns. Initially, the farming sector dominated, but with the advent of the third industrial revolution there was a notable transition toward the service industry. As low-skilled jobs gradually disappeared, there was a significant surge in management and clerical positions [56]. However, the creation of new jobs to compensate for those currently being displaced by the fourth industrial revolution and the omnipresence of AI is no longer the natural process it was in the past. Scholars are expressing more pessimism, foreseeing a potential intensification of inequalities among workers [57]. This pessimistic outlook is supported by recent studies indicating that the automation of tasks could primarily eliminate lower-skilled jobs, with new jobs emerging predominantly in the technology sector [52, 58]. Similarly, certain jobs once considered high-skilled have been downgraded or have faded away, as the introduction of AI has gradually made human intervention redundant [59].

It is crucial to recognize the challenges posed by these changes in the job market [60] (e.g., automation technologies substituting for human intervention) and to address them proactively. Rather than relying on automatic job creation, deliberate efforts are required to reskill and upskill the workforce, ensuring that individuals can transition into the new roles demanded by the evolving economy. Thus, we align with Autor, who believes that "AI will change the labor market, but not in the way Musk and Hinton believe. Instead, it will reshape the value and nature of human expertise" [61]. As AI develops alongside the 4IR, in what Baldwin describes as a major transformation of services, experts broadly agree on a three-step process that could ultimately benefit the job market: an initial chain of changes, followed by a shake-up phase, and finally a reckoning with possible negative reactions or unexpected consequences of AI. Despite these challenges, with strong and well-thought-out rules, good outcomes can be expected in the end [62, 63]. Therefore, organizations and practitioners can adopt AI in ways that enhance their workforce's value and minimize AI's potential negative impacts.

Practically speaking, initiatives should focus on providing accessible and comprehensive training programs that equip workers with the necessary skills to thrive in the technology-driven landscape of the 4IR. In the context of the 4IR, the future workforce must possess specific skills to thrive in the evolving landscape. These skills are crucial for individuals to adapt and excel in a rapidly changing environment [64]. First and foremost, analytical thinking and innovation will be paramount. The ability to analyze complex information, think critically, and generate innovative solutions will differentiate successful professionals from their counterparts. This skill set enables individuals to navigate the complexities of the modern workplace and identify new opportunities for growth and improvement. The second essential skill is active learning and learning strategies. As technology continues to advance at an unprecedented rate, the acquisition of new knowledge and the ability to learn continuously becomes crucial. Embracing a mindset of lifelong learning and developing effective learning strategies will empower individuals to stay ahead of the curve and remain adaptable in the face of technological advancements. Lastly, complex problem-solving skills will be in high demand. The fourth industrial revolution brings forth a multitude of intricate challenges that require creative and systematic problem-solving approaches. Individuals who can tackle complex problems by breaking them down into manageable components, analyzing various perspectives, and generating innovative solutions will be highly sought after.

Reskilling and upskilling will play a pivotal role in shaping the future of the workforce in the coming years. Reskilling involves a complete transition to a new set of skills required for a different occupation or industry, while upskilling refers to updating an employee's skill set to address new occupational challenges and demands [65]. According to the Future of Jobs Report by the World Economic Forum, around 50% of workers will need reskilling by 2025. Additionally, a 2016 report from the World Economic Forum stated that 65% of today's primary school students will work in jobs that do not yet exist. Furthermore, it is projected that 14% of the global workforce will need to change their occupation due to the influence of AI [65]. This transition to new technologies follows a historical pattern observed during the shift from agriculture to assembly lines. However, the current shift toward AI integration is likely to affect the working class more significantly, potentially exacerbating the gap between blue-collar and white-collar workers. Therefore, education and organizations have crucial roles in facilitating the reskilling and upskilling processes. Educational institutions will need to provide updated skills to prepare the next generation of workers for the new jobs that will be created in the coming years, which will require a fresh set of skills [66].

It is obvious that governments around the world will have a role to play in establishing policies for more sustainable AI practices. Indeed, governments have recognized the need to regulate AI technologies to ensure responsible, ethical, and sustainable development and deployment. The focus is on addressing potential risks and ensuring that AI applications align with societal values, human rights, and environmental concerns. Table 1 provides an overview of these regulatory initiatives:

Government body | Type of policy | Date | Objective
European Union* | AI Act | April 2021 | This act aimed to regulate AI systems' development, deployment, and use across the EU member states. It focused on high-risk AI applications, such as those used in critical infrastructure, healthcare, and law enforcement, and required developers to follow specific requirements to ensure the safety, transparency, and ethical use of AI technologies.
United States of America** | Executive Order on AI | February 2019 | The United States government issued an executive order called "Maintaining American Leadership in Artificial Intelligence." The order aimed to enhance AI research and development in the country and promote responsible AI practices. The order also directed federal agencies to prioritize AI funding and incorporate AI considerations into their strategies.
Canada*** | Directive on Automated Decision-Making | April 2020 | The directive required federal agencies to implement specific practices when using automated decision-making systems, including AI. It emphasized the importance of transparency, accountability, and human oversight in AI systems to avoid biases and ensure sustainable AI practices.
United Nations**** | AI for Good initiative | | The United Nations launched the "AI for Good" initiative to encourage the use of AI technologies for positive societal impact and sustainable development. This initiative brings together stakeholders from governments, academia, industry, and civil society to collaborate on projects that leverage AI for achieving the UN's Sustainable Development Goals.
Singapore***** | Model AI Governance Framework | January 2019 | Singapore's government published the Model AI Governance Framework to guide organizations in developing and deploying AI technologies. The framework outlines key principles, such as fairness, accountability, and transparency, to promote responsible and sustainable AI practices in the country.

Table 1.

Examples of government policies toward sustainable AI practice.

*Artificial Intelligence Act, COM/2021/206 final—2021/0106 (COD).

**Executive Order 13859 of February 11, 2019—Maintaining American Leadership in AI.

***Directive on Automated Decision-Making, Treasury Board of Canada Secretariat, April 2020.

****AI for Good, United Nations.

*****Model AI Governance Framework, Personal Data Protection Commission.


Simultaneously, organizations must take responsibility for providing necessary training to the existing workforce [67]. By acknowledging the potential inequalities and actively working toward inclusive strategies, we can strive for a future where the benefits of technological advancements are distributed more equitably. Collaboration between policymakers, industry leaders, and educational institutions is vital to facilitate a smooth transition, empower workers, and build a resilient workforce capable of embracing the opportunities presented by the 4IR.

In response to these multiple concerns, industry experts have joined forces to deliberate and propose essential measures for effectively navigating the transition toward the disruptive technologies offered by the fourth industrial revolution.

5. Minimizing AI negative impacts

In March 2023, prominent figures in the AI field called for a scheduled pause in the development of the most powerful AI systems in order to deliberate on ethical concerns surrounding AI and to call for regulations governing its use. On the Future of Life website, an insightful statement is made: "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." This thought-provoking premise serves as the foundation for an open letter that had garnered an impressive 33,002 signatures from important stakeholders as of July 10th, 2023, signifying the widespread concern and urgent call for regulation of AI amid the prevailing level of distrust. Among the signatories are renowned CEOs and scientists in the field of AI, such as:

  • Anthony Aguirre, University of California, Santa Cruz, Executive Director of Future of Life Institute.

  • Yoshua Bengio, Founder and Scientific Director at Mila, Professor at the University of Montreal.

  • Chris Larsen, Co-Founder, Ripple.

  • Emad Mostaque, CEO, Stability AI.

  • Elon Musk, CEO of SpaceX, Tesla & Twitter.

  • Stuart Russell, Berkeley, Professor of Computer Science, Center for Intelligent Systems.

  • Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute.

  • Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions.

  • Steve Wozniak, Co-founder, Apple.

Not only are leaders in the field highlighting the potential adverse impacts of AI, but they are particularly emphasizing the looming danger of uncontrolled harmful effects: "Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable." The Asilomar AI Principles, identified at the Beneficial AI 2017 conference, offer governance mechanisms focused on three areas: research issues (Table 2), ethics and values (Table 3), and long-term issues (Table 4).

Scope | Domain | Solution
Research issues | Research goal | The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
Research issues | Research funding | Investments in AI should be accompanied by funding for research on ensuring its beneficial use … in computer science, economics, law, ethics, and social studies.
Research issues | Science-policy link | There should be constructive and healthy exchanges between AI researchers and policymakers.
Research issues | Research culture | A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
Research issues | Race avoidance | Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Table 2.

Research issues and AI principles for governance from the Future of Life Institute.

Scope | Domain | Solution
Ethics and values | Safety | AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
Ethics and values | Failure transparency | If an AI system causes harm, it should be possible to ascertain why.
Ethics and values | Judicial transparency | Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
Ethics and values | Responsibility | Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
Ethics and values | Value alignment | Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
Ethics and values | Human values | AI systems should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
Ethics and values | Personal privacy | People should have the right to access, manage, and control the data they generate, given AI systems' power to analyze and utilize that data.
Ethics and values | Liberty and privacy | The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
Ethics and values | Shared benefit | AI technologies should benefit and empower as many people as possible.
Ethics and values | Shared prosperity | The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
Ethics and values | Human control | Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
Ethics and values | Non-subversion | The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
Ethics and values | AI arms race | An arms race in lethal autonomous weapons should be avoided.

Table 3.

Ethical values and AI principles for governance from the Future of Life Institute.

Scope | Domain | Solution
Long-term issues | Capability caution | There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
Long-term issues | Importance | Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
Long-term issues | Risks | Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
Long-term issues | Recursive self-improvement | AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
Long-term issues | Common good | Superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state or organization.

Table 4.

Long-term issues and AI principles for governance from the Future of Life Institute.

The goal of AI research should be to create beneficial, not undirected, intelligence. Investments in AI should be accompanied by funding for research on its beneficial use, addressing key areas: robust AI systems, managing automation's impact, updating legal systems, and defining AI's values. Constructive exchange between researchers and policymakers is crucial, as is fostering a culture of cooperation, trust, and transparency. Teams developing AI should cooperate to avoid cutting corners on safety standards.

Ethics and values in AI entail prioritizing safety and security throughout an AI system's operational lifetime. Transparency is essential, both in explaining harm caused by AI and in judicial decisions involving AI. Designers have a responsibility to align AI goals with human values while safeguarding privacy and ensuring shared benefits. Human control over AI decisions is vital to prevent the undermining of societal processes. An AI arms race, especially one involving lethal autonomous weapons, should be avoided.

In the development of AI, caution is necessary regarding assumptions about future capabilities. Advanced AI could have profound implications for life on Earth, requiring careful planning and allocation of resources. Mitigating risks, especially catastrophic ones, is vital and should match their potential impact. AI systems with recursive self-improvement capabilities must adhere to strict safety measures. Superintelligence should serve shared ethical ideals and benefit all of humanity, rather than being controlled by a single state or organization.

6. Conclusion

The rapid ascent of AI and other general-purpose technologies has marked the advent of the fourth industrial revolution, triggering substantial transformations in business practices and productivity potential. While these emerging technologies offer numerous benefits, they also present a range of threats, concerns, and challenges. This chapter has delved into the dark side of the fourth industrial revolution, exploring its negative implications based on the available literature.

One major concern revolves around employment, encompassing the potential rise in unemployment rates and the emergence of structural unemployment. As the nature of work changes, there is a need for rapid reskilling and upskilling to ensure the future employability of the existing workforce. Moreover, the high dependence on machines raises major ethical concerns, including privacy breaches and discrimination. The impact of AI on socioeconomic factors is also worrisome, as it can exacerbate social and income inequalities, further dividing society into different socioeconomic classes.

The development of AI and its integration into human resource management (HRM) has brought both opportunities and challenges. Algorithmic management and people analytics have the potential to enhance HR practices, but they also raise concerns regarding decision-making fairness, employee autonomy, and potential discrimination. Resistance from employees toward AI and algorithmic management further complicates adoption and implementation. The socioeconomic impacts of AI extend beyond HRM, affecting domains such as hiring practices and economic inequality. Homosocial reproduction and biased decision-making perpetuate discrimination and reinforce social and economic categorization. The widening gap between technology-driven jobs and disappearing lower-skilled positions poses significant challenges, necessitating proactive measures to reskill and upskill the workforce. Educational institutions and organizations have vital roles to play in facilitating the training and skill development required to thrive in this technology-driven landscape.

By addressing the negative effects of AI and embracing inclusive strategies, we can strive for a future where the benefits of technological advancements are distributed more equitably. However, the uncontrolled and unregulated use of AI poses significant risks. Leaders in the field, along with other stakeholders (e.g., governments), emphasize the importance of responsible AI development and the need for governance mechanisms to ensure positive outcomes and manage potential harms. These include considerations of safety, failure transparency, human values, shared prosperity, and the avoidance of AI arms races. Thus, navigating the challenges and maximizing the benefits of AI in the fourth industrial revolution requires a comprehensive approach. It demands collaboration among policymakers, industry leaders, educational institutions, governments, and society at large. By addressing concerns, promoting reskilling and upskilling, and implementing ethical frameworks, we can strive for a future where AI enhances the well-being of individuals and society as a whole, leading to a more inclusive and equitable socioeconomic landscape.

References

  1. McKinsey. The state of AI in 2022-and a half decade in review. 2023 [Online]. Available from: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review#/
  2. Cheng X, Lin X, Shen X-L, Zarifis A, Mou J. The dark sides of AI. Electronic Markets. 2022;32(1):11-15. DOI: 10.1007/s12525-022-00531-5
  3. Q.ai. The Pros and Cons of Artificial Intelligence. 2022 [Online]. Available from: https://www.forbes.com/sites/qai/2022/12/01/the-pros-and-cons-of-artificial-intelligence/?sh=1cb8010a4703
  4. Schwab K. The Fourth Industrial Revolution. New York, USA: Crown; 2017
  5. Tomašev N et al. AI for social good: Unlocking the opportunity for positive impact. Nature Communications. 2020;11(1):Art. no 1. DOI: 10.1038/s41467-020-15871-z
  6. Vinuesa R et al. The role of artificial intelligence in achieving the sustainable development goals. Nature Communications. 2020;11(1):Art. no 1. DOI: 10.1038/s41467-019-14108-y
  7. Braganza A, Chen W, Canhoto A, Sap S. Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust. Journal of Business Research. 2021;131:485-494. DOI: 10.1016/j.jbusres.2020.08.018
  8. García-Micó TG, Laukyte M. Gender, health, and AI: How using AI to empower women could positively impact the sustainable development goals. In: Mazzi F, Floridi L, editors. The Ethics of Artificial Intelligence for the Sustainable Development Goals, Philosophical Studies Series. Cham: Springer International Publishing; 2023. pp. 291-304. DOI: 10.1007/978-3-031-21147-8_16
  9. Anshari M, Hamdan M. Understanding knowledge management and upskilling in Fourth Industrial Revolution: Transformational shift and SECI model. VINE Journal of Information and Knowledge Management Systems. 2022;52(3):373-393. DOI: 10.1108/VJIKMS-09-2021-0203
  10. Davis N. What is the fourth industrial revolution? World Economic Forum [Online]. Available from: https://alejandroarbelaez.com/wp-content/uploads/2020/10/What-is-the-fourth-industrial-revolution-WEF.pdf
  11. Rainnie A, Dean M. Industry 4.0 and the future of quality work in the global digital economy. Labour and Industry: A Journal of the Social and Economic Relations of Work. 2020;30(1):16-33. DOI: 10.1080/10301763.2019.1697598
  12. Xu M, David JM, Kim SH. The fourth industrial revolution: Opportunities and challenges. International Journal of Financial Research. 2018;9(2):90. DOI: 10.5430/ijfr.v9n2p90
  13. Lichtenthaler U. Substitute or synthesis: The interplay between human and artificial intelligence. Research-Technology Management. 2018;61(5):12-14. DOI: 10.1080/08956308.2018.1495962
  14. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):Art. 7553. DOI: 10.1038/nature14539
  15. Martinelli A, Mina A, Moggi M. The enabling technologies of industry 4.0: Examining the seeds of the fourth industrial revolution. Industrial and Corporate Change. 2021;30(1):161-188. DOI: 10.1093/icc/dtaa060
  16. Kurian N, Cherian JM, Sudharson NA, Varghese KG, Wadhwa S. AI is now everywhere. British Dental Journal. 2023;234(2):72
  17. Gal U, Jensen TB, Stein M-K. Breaking the vicious cycle of algorithmic management: A virtue ethics approach to people analytics. Information and Organization. 2020;30(2):100301. DOI: 10.1016/j.infoandorg.2020.100301
  18. Fernandez V, Gallardo-Gallardo E. Tackling the HR digitalization challenge: Key factors and barriers to HR analytics adoption. Competitiveness Review: An International Business Journal. 2020;31(1):162-187. DOI: 10.1108/CR-12-2019-0163
  19. Cappelli P. Data science can't fix hiring (yet). Harvard Business Review. 2019 [Online]. Available from: https://hbr.org/2019/05/data-science-cant-fix-hiring-yet
  20. von Krogh G. Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries. 2018;4(4):404-409. DOI: 10.5465/amd.2018.0084
  21. Giermindl LM, Strich F, Christ O, Leicht-Deobald U, Redzepi A. The dark sides of people analytics: Reviewing the perils for organisations and employees. European Journal of Information Systems. 2022;31(3):410-435. DOI: 10.1080/0960085X.2021.1927213
  22. Bankins S, Formosa P, Griep Y, Richards D. AI decision making with dignity? Contrasting workers' justice perceptions of human and AI decision making in a human resource management context. Information Systems Frontiers. 2022;24(3):857-875. DOI: 10.1007/s10796-021-10223-8
  23. Logg JM, Minson JA, Moore DA. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes. 2019;151:90-103. DOI: 10.1016/j.obhdp.2018.12.005
  24. Barocas S, Hood S, Ziewitz M. Governing algorithms: A provocation piece. SSRN Electronic Journal. 2013. DOI: 10.2139/ssrn.2245322
  25. Holford WD. The future of human creative knowledge work within the digital economy. Futures. 2019;105:143-154. DOI: 10.1016/j.futures.2018.10.002
  26. Cheng MM, Hackett RD. A critical review of algorithms in HRM: Definition, theory, and practice. Human Resource Management Review. 2021;31(1):100698
  27. Rosenblat A, Stark L. Algorithmic labor and information asymmetries: A case study of Uber's drivers. International Journal of Communication. 2016;10:27. DOI: 10.2139/ssrn.2686227. Available from: https://ssrn.com/abstract=2686227
  28. Jarrahi MH, Sutherland W. Algorithmic management and algorithmic competencies: Understanding and appropriating algorithms in gig work. In: Information in Contemporary Society: 14th International Conference, iConference 2019, March 31–April 3, 2019, Proceedings. Washington, DC, USA: Springer; 2019. pp. 578-589
  29. Kellogg KC, Valentine MA, Christin A. Algorithms at work: The new contested terrain of control. Academy of Management Annals. 2020;14(1):366-410. DOI: 10.5465/annals.2018.0174
  30. Lamers L, Meijerink J, Jansen G, Boon M. A capability approach to worker dignity under algorithmic management. Ethics and Information Technology. 2022;24(1):10
  31. Rodgers W, Murray JM, Stefanidis A, Degbey WY, Tarba SY. An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Human Resource Management Review. 2023;33(1):100925. DOI: 10.1016/j.hrmr.2022.100925
  32. Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J. Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency. Atlanta, GA, USA: ACM; 2019. pp. 59-68. DOI: 10.1145/3287560.3287598
  33. Dastin J. Amazon's surveillance culture is "breaking" its workers. Huck. 2023 [Online]. Available from: https://www.huckmag.com/article/speaking-to-amazon-uk-workers-on-the-picket-lines-in-coventry-2023
  34. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. DOI: 10.1126/science.aax2342
  35. Kordzadeh N, Ghasemaghaei M. Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems. 2022;31(3):388-409. DOI: 10.1080/0960085X.2021.1927212
  36. Wang J, Zhang X, Zhang LJ. Effects of teacher engagement on students' achievement in an online English as a foreign language classroom: The mediating role of autonomous motivation and positive emotions. Frontiers in Psychology. 2022;13:950652. DOI: 10.3389/fpsyg.2022.950652
  37. Ferrari F, Graham M. Fissures in algorithmic power: Platforms, code, and contestation. Cultural Studies. 2021;35(4-5):814-832. DOI: 10.1080/09502386.2021.1895250
  38. Goods C, Veen A, Barratt T. "Is your gig any good?" Analysing job quality in the Australian platform-based food-delivery sector. Journal of Industrial Relations. 2019;61(4):502-527. DOI: 10.1177/0022185618817069
  39. Di Vaio A, Hasan S, Palladino R, Profita F, Mejri I. Understanding knowledge hiding in business organizations: A bibliometric analysis of research trends, 1988-2020. Journal of Business Research. 2021;134:560-573. DOI: 10.1016/j.jbusres.2021.05.040
  40. Baiocco S, Fernandez-Macías E, Rani U, Pesole A. The algorithmic management of work and its implications in different contexts. JRC Working Papers on Labour, Education and Technology 2022-02, Joint Research Centre (Seville site). Available from: https://ideas.repec.org/p/ipt/laedte/202202.html
  41. Parent-Rocheleau X, Parker SK. Algorithms as work designers: How algorithmic management influences the design of jobs. Human Resource Management Review. 2022;32(3):100838. DOI: 10.1016/j.hrmr.2021.100838
  42. Hamilton RH, Sodeman WA. The questions we ask: Opportunities and challenges for using big data analytics to strategically manage human capital resources. Business Horizons. 2020;63(1):85-95. DOI: 10.1016/j.bushor.2019.10.001
  43. Buolamwini J, Gebru T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability and Transparency. New York, USA: PMLR; 2018. pp. 77-91
  44. Simbeck K. HR analytics and ethics. IBM Journal of Research and Development. 2019;63(4/5):9:1-9:12. DOI: 10.1147/JRD.2019.2915067
  45. Yuan X, Bennett Gayle D, Knight T, Dubois E. Adoption of artificial intelligence technologies by often marginalized populations. In: Yuan X, Wu D, Gayle DB, editors. Social Vulnerability to COVID-19: Impacts of Technology Adoption and Information Behavior, Synthesis Lectures on Information Concepts, Retrieval, and Services. Cham: Springer International Publishing; 2023. pp. 31-49. DOI: 10.1007/978-3-031-06897-3_3
  46. Kroll JA, Huey J, Barocas S, Felten EW, Reidenberg JR, et al. Accountable algorithms. University of Pennsylvania Law Review. 2017;165. Fordham Law Legal Studies Research Paper No. 2765268. Available from: https://ssrn.com/abstract=2765268
  47. Chung H, Iorga M, Voas J, Lee S. Alexa, can I trust you? Computer. 2017;50:100-104. DOI: 10.1109/MC.2017.3571053
  48. Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: Mapping the debate. Big Data & Society. 2016;3(2):2053951716679679
  49. McMahan HB, Moore E, Ramage D, Hampson S, Arcas BAY. Communication-efficient learning of deep networks from decentralized data. Proceedings of Machine Learning Research. 2017;54:1273-1282. DOI: 10.48550/arXiv.1602.05629
  50. Krueger AB. How computers have changed the wage structure: Evidence from microdata, 1984-1989. Quarterly Journal of Economics. 1993;108(1):33-60
  51. Autor DH, Dorn D. The growth of low-skill service jobs and the polarization of the US labor market. American Economic Review. 2013;103(5):1553-1597. DOI: 10.1257/aer.103.5.1553
  52. Frey CB, Osborne MA. The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change. 2017;114:254-280
  53. Santiago LE. The industries of the future in Mexico: Local and non-local effects in the localization of "knowledge-intensive services". Growth and Change. 2020;51(2):584-606
  54. Su G. Unemployment in the AI age. AI Matters. 2018;3(4):35-43. DOI: 10.1145/3175502.3175511
  55. Blanchard OJ, Summers LH. Hysteresis and the European unemployment problem. NBER Macroeconomics Annual. 1986;1:15-78
  56. Gray R. Taking technology to task: The skill content of technological change in early twentieth century United States. Explorations in Economic History. 2013;50(3):351-367. DOI: 10.1016/j.eeh.2013.04.002
  57. Rathi A. Stephen Hawking: Robots aren't just taking our jobs, they're making society more unequal. Quartz. 2023 [Online]. Available from: https://qz.com/520907/stephen-hawking-robots-arent-just-taking-our-jobs-theyre-making-society-more-unequal
  58. Brougham D, Haar J. Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees' perceptions of our future workplace. Journal of Management & Organization. 2018;24(2):239-257. DOI: 10.1017/jmo.2016.55
  59. Webb M. The impact of artificial intelligence on the labor market. SSRN Electronic Journal. 2019. DOI: 10.2139/ssrn.3482150. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3482150
  60. Acemoglu D, Restrepo P. The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand. 2020 [Online]. Available from: https://www.nber.org/system/files/working_papers/w25682/w25682.pdf
  61. Autor DH. Applying AI to rebuild middle class jobs. SSRN Electronic Journal. 2024:w32140. DOI: 10.2139/ssrn.4722981. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4722981
  62. Baldwin R. The Globotics Upheaval: Globalization, Robotics, and the Future of Work. Oxford, UK: Oxford University Press; 2019
  63. Agrawal A, Gans J, Goldfarb A. Power and Prediction: The Disruptive Economics of Artificial Intelligence. Harvard, USA: Harvard Business Press; 2022
  64. Li L. Reskilling and upskilling the future-ready workforce for Industry 4.0 and beyond. Information Systems Frontiers. 2022. Available from: https://link.springer.com/article/10.1007/s10796-022-10308-y
  65. Morandini S, Fraboni F, De Angelis M, Puzzo G, Giusino D, Pietrantoni L. The impact of artificial intelligence on workers' skills: Upskilling and reskilling in organisations. Informing Science. 2023;26:39
  66. Puzzo G, Fraboni F, Pietrantoni L. Artificial intelligence and professional transformation: Research questions in work psychology. Rivista Italiana di Ergonomia. 2020;21:43-60
  67. Jaiswal A, Arun CJ, Varma A. Rebooting employees: Upskilling for artificial intelligence in multinational corporations. The International Journal of Human Resource Management. 2022;33(6):1179-1208
