Open access peer-reviewed chapter

Artificially Intelligent Super Computer Machines and Robotics: Apprehensions and Challenges – A Call for Responsible Innovation Framework

Written By

Khalid Rasheed Memon and Say Keat Ooi

Submitted: 16 June 2022 Reviewed: 25 August 2022 Published: 06 October 2022

DOI: 10.5772/intechopen.107372

From the Edited Volume

Industry 4.0 - Perspectives and Applications

Edited by Meisam Gordan, Khaled Ghaedi and Vala Saleh


Abstract

“Industrial revolution 4.0” is a term that is becoming increasingly popular among academics. A number of articles have been written to emphasize the beneficial aspects of this development under many titles, such as cyber-physical systems, the internet of things, artificial intelligence, smart manufacturing, the digitalization of industrial production, and so on. However, few academics have delved into the negative or dark side of such a profound technological paradigm change, especially artificially intelligent robotics, creating a large knowledge vacuum. Because of this, little is known about the negative repercussions of artificial intelligence (AI), a key component of the Fourth Industrial Revolution (IR 4.0). It is an open secret now that AI machines may seriously affect human autonomy, fairness, justice, and agency. These unanticipated consequences have driven the development of an emerging concept, responsible innovation. The responsible innovation framework binds the firm ethically, morally, and socially to be responsible, environmentally friendly, humanitarian, and business-oriented while developing innovative products. The current study proposes an integrated responsible innovation framework that acts as a science governance mechanism and holds organizations and stakeholders collectively responsible for upcoming technological innovations. The study also suggests several implications for policymakers.

Keywords

  • artificial intelligence
  • industrial revolution 4.0
  • cyber-physical systems
  • responsible innovation
  • business ethics

1. Introduction

The fourth industrial revolution, commonly abbreviated as IR 4.0, is ushering in a new era of industry and technological innovation. It will be dominated by manufacturing and industrial processes that use cyber-physical systems, cloud computing, big data, and artificial intelligence [1, 2]. Drastic changes are coming to companies across the board in terms of value generation, business models, and downstream services [3]. Indeed, IR 4.0 is no more than an expansion of information and communication technology, paired with exponential growth in transmission, computation, and storage capabilities, which enables the materialization of incredibly powerful, linked technological systems dubbed “cyber-physical systems” [4].

Cyber-physical systems (CPS) integrate computer, networking, and physical processes, utilizing various technologies such as artificial intelligence, robots, big data, and security [5]. These technologies would have a significant influence on industrial output, as well as on our daily lives. It is now common to engage with robots and artificially intelligent devices in a variety of applications [6], including 3D printing, educational learning agents, home health examinations, online vehicle sales systems, gaming and entertainment, and maintenance [7]. Our small and medium enterprises (SMEs) will also be profoundly impacted by this technological breakthrough and the possibility of fusing the virtual and real worlds enabled by these cyber-physical systems [8].

In today’s networked age, this kind of artificially intelligent technology makes information and services accessible from anywhere: our cell phones, cars, and other home and personal electronics may all be controlled from a distance. For air conditioning, we may want the unit switched on shortly before we get home so that our room is already chilled. Coffee machines may even prepare coffee while we are asleep, saving time and effort in the morning. Repairing certain devices may require remote access, allowing technicians to find the true source of a problem and supply a working spare part. Using an efficient communication system, the device’s own system may even order the required spares [9]. Scalability, accuracy, efficiency, and long-term sustainability are all attributes these artificially intelligent robots possess [10].
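The self-ordering behavior described above can be sketched in a few lines. The following is a toy illustration only, with hypothetical names and a deliberately simplified wear model; a real cyber-physical system would replace the local order log with a network call to a supplier’s ordering service:

```python
from dataclasses import dataclass, field

@dataclass
class SmartDevice:
    """Toy connected device that monitors one wearing part."""
    part_id: str
    wear: float = 0.0                       # 0.0 = new, 1.0 = failed
    orders: list = field(default_factory=list)

    def record_usage(self, hours: float) -> None:
        # Simplified linear wear model: 0.1% wear per hour of use.
        self.wear = min(1.0, self.wear + 0.001 * hours)
        if self.wear >= 0.8:                # order *before* failure occurs
            self.order_spare()

    def order_spare(self) -> None:
        # Stand-in for a supplier API call; here we just log the order.
        if self.part_id not in self.orders:
            self.orders.append(self.part_id)

device = SmartDevice(part_id="filter-42")
device.record_usage(500)   # wear reaches 0.5 -- no order yet
device.record_usage(400)   # wear reaches 0.9, crossing the 0.8 threshold
print(device.orders)       # ['filter-42']
```

The key design point is that the ordering decision lives on the device itself, so maintenance is triggered by observed condition rather than a fixed service schedule.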

Artificial Intelligence (AI) is a term frequently used to refer to the branch of research that aims to equip machines with cognitive abilities such as logic, reasoning, planning, learning, and perception. Despite the reference to “machines,” the description is applicable to “any sort of living intellect” [11]. As in primates and other remarkable creatures, the definition of intelligence may also be expanded to include an interconnected collection of capacities, such as creativity, emotional awareness, and self-consciousness [12]. Until the late 1980s, the phrase artificial intelligence was strongly associated with “symbolic AI.” To address some of the constraints of symbolic AI, sub-symbolic approaches such as neural networks, fuzzy systems, evolutionary computing, and other computational models began to gain favor, resulting in the emergence of the branch of AI known as computational intelligence [13]. Currently, the word AI embraces the entire notion of an intelligent computer, including its operational and societal implications.
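The symbolic versus sub-symbolic distinction can be made concrete with a minimal example of our own (not from the chapter): both functions below decide logical AND, one as an explicit human-readable rule, the other as a single weighted-sum “neuron” whose behavior is implicit in its numeric weights.

```python
def symbolic_and(a: bool, b: bool) -> bool:
    # Symbolic AI: knowledge encoded as an explicit, readable rule.
    return a and b

def perceptron_and(a: int, b: int) -> bool:
    # Sub-symbolic AI: the same behavior emerges from weights and a
    # threshold; no rule is written down anywhere in the code.
    w1, w2, bias = 1.0, 1.0, -1.5
    return (w1 * a + w2 * b + bias) > 0

for a in (0, 1):
    for b in (0, 1):
        assert symbolic_and(bool(a), bool(b)) == perceptron_and(a, b)
print("both encodings agree on all inputs")
```

In sub-symbolic systems such as neural networks, the weights are learned from data rather than hand-set as here, which is precisely what makes their internal “rules” hard to inspect.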

A practical definition is that offered by Russell et al. [14]: “Artificial Intelligence is the study of human intelligence and actions replicated artificially, such that the resultant bears to its design a reasonable level of rationality.” This concept can be further improved by specifying that, for specified and well-defined activities, the degree of rationality may even surpass that of humans.

Current AI technologies are utilized in internet advertising, driving, aviation, health, and image recognition for personal assistance. Recent advances in AI have captivated both the scientific community and the general audience. This is shown by automobiles with automated steering systems, commonly known as autonomous cars [10]. Each vehicle is outfitted with a collection of lidar sensors and cameras that enable detection of its three-dimensional environment and give it the capability to make intelligent judgments about maneuvers in changeable, real-world traffic situations. Perez et al. [15] cite AlphaGo, built by Google DeepMind to play the board game Go, as another example. In 2016, AlphaGo became the first computer to defeat a top professional player when it beat the Korean grandmaster Lee Sedol; more recently, it defeated the then world number one, Ke Jie, in China.
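The kind of judgment described above can be caricatured as a tiny decision rule. This is a deliberately simplified sketch of ours (not any vendor’s actual system): it chooses a driving action from a single lidar distance reading and the vehicle’s current speed, using time-to-collision as the decision variable.

```python
def choose_action(distance_m: float, speed_mps: float) -> str:
    """Pick a driving action from one forward distance reading."""
    # Time-to-collision if neither vehicle changes speed.
    ttc = distance_m / speed_mps if speed_mps > 0 else float("inf")
    if ttc < 2.0:
        return "brake"        # imminent hazard: stop now
    if ttc < 5.0:
        return "slow_down"    # close: reduce speed and reassess
    return "maintain"         # road ahead is clear

print(choose_action(distance_m=15.0, speed_mps=10.0))  # ttc = 1.5 s -> 'brake'
print(choose_action(distance_m=80.0, speed_mps=10.0))  # ttc = 8.0 s -> 'maintain'
```

Real systems fuse many sensors and predict other agents’ motion rather than assuming constant speeds, but the structure, continuous sensing feeding a discrete maneuver choice, is the same.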

Another recent illustration is Microsoft’s Tay, an artificial intelligence bot built to carry on discussions on social networks. It had to be deactivated shortly after its debut since it could not discern between positive and negative human interaction [16]. The emotional intelligence of AI is likewise restricted: AI can only recognize fundamental human emotional states such as anger, happiness, sorrow, fear, suffering, and tension. Emotional intelligence is one of the next frontiers of greater customization. True and full AI does not yet exist; at that stage, AI would replicate human cognition to the extent of being capable of dreaming, thinking, feeling emotions, and having its own objectives [12]. Although there is no indication that this type of genuine AI will emerge before 2050, the computer science concepts propelling AI ahead are evolving quickly, and it is vital to analyze its implications not only from a scientific stance, but also from a social, ethical, and legal one [15].

Artificially intelligent technology also provides enormous societal advantages in nearly every part of life by performing activities autonomously (via robots), reducing costs and time without human interaction, standardizing services, and assisting people with tedious and dangerous duties [17]. We will have self-driving automobiles once the rest of civilization is fully automated [18]. But it would not stop there, since the world is moving towards “persuasive computing,” which would manipulate our minds through sophisticated algorithms applied to our data; we would be steered through free internet offerings or complex work processes, and these techniques would even be used in politics, as governments like to steer their citizens. Especially during elections, whoever controls and uses this technology to manipulate undecided voters may win, and such steering of minds would be difficult to detect [19]. Thus, artificially intelligent devices, and robots in particular, have the potential to do terrible harm to the globe [20]. Machines with artificial intelligence are predicted to overtake humans in all aspects of life between 2020 and 2060, and prominent scientists and technology experts like Elon Musk, Bill Gates, and Steve Wozniak predict that this will pose a serious threat to civilization in the years to come [21]. As a result of these artificially intelligent computers, we could lose our democracies, our autonomous and self-governed decision-making, and our distinctiveness; consequently, we must protect these pillars of our existence, as they are the cornerstone of enhanced efficiency and success [12]. As mature information societies, we must be prepared for technology breakthroughs that have social, ethical, economic, and sustainability ramifications, and we must have contingency plans in place. These digital transitions should not occur abruptly and unexpectedly [18].

Numerous initiatives have been launched in Europe and the United States to address these concerns and challenges, including “Technological Assessment” organizations (TA), “Technological Assessment and Ethical, Legal, and Social Aspects of Emerging Sciences” (ELSA), the “United States Office of Technology Assessment” (OTA), and the “Netherlands Organization for Technology Assessment” (NOTA). Similarly, the “Triple Helix Framework,” comprising university, industry, and government, as well as the “Quadruple Helix Framework,” which incorporates “Society” as a fourth component, were developed. These movements and frameworks were intended to bridge the divide between society and technological advancements. However, “Responsible Innovation” (RI) is regarded as far broader than these movements, as it encompasses both societal and governance applications [22, 23, 24]. RI has become one of the most critical and paramount areas for empirical scientific research, although it received little attention earlier [5, 17]. Responsible innovation has gained traction and momentum especially after extreme crises; Europeans believe that sustainable and smart growth can only be achieved through innovation, with responsible innovation providing the structure and policy for such innovation [4]. RI is thus expected to help lift Europe’s strategies out of the economic downturn.

Responsible innovation emphasizes that it is high time to understand the relationship between IR 4.0 and sustainability: to ensure responsible citizenship, a well-prepared and planned strategy for technological advancements with social, ethical, economic, and sustainability ramifications is crucial. This study aims to contribute theoretically to a relatively under-researched area by proposing the role of organizational resources and capabilities as antecedents of responsible innovation, highlighting the often-neglected business responsibility that is expected to form a sustainable competitive advantage and ultimately result in better sustainability performance.

In short, the present research hopes to contribute to the existing body of knowledge in the following ways. First, it provides a definition of AI and highlights significant advances in the field. Second, it offers a comprehensive analysis of the possible dangers and risks posed by AI machines on a global scale. Third, the study emphasizes the importance, efficacy, and significance of a responsible innovation framework to offset the challenges and disasters that technological innovations, including AI, may bring, and stresses the importance of adopting the framework. Fourth, the study proposes an integrated framework of responsible innovation comprising the RI dimensions, the helices of the quadruple helix framework, and the basis of the relationships among those helices. Finally, the study discusses future research directions and limitations, and concludes.


2. Artificial intelligence, its limitations, and responsible innovation

This section will now define AI, briefly discuss its periodic developments, and then discuss the challenges and apprehensions our world may face as a result of AI. At the conclusion of this section, RI will be presented as the solution for dealing with the impending disaster, catering to all technological innovations in the IR 4.0 era, including AI.

2.1 Defining “artificial intelligence”

Artificial intelligence (AI) is one of the newest fields. Work on AI started soon after World War II, and the name itself was coined in 1956. AI presently comprises many sub-fields that range from the general to the very specific, for instance, playing chess, writing poetry, driving a car, diagnosing a disease, and so on. It is relevant to essentially all intellectual tasks [25]. Let us first define what intelligence is, and then we will move toward artificial intelligence.

Intelligence has been defined in many ways, and its definition has been somewhat controversial [26]. We therefore present the definition from “Mainstream Science on Intelligence” (1994), a collective statement signed by 52 of the 131 researchers who were invited to endorse it, published as an op-ed in The Wall Street Journal:

“Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do” [27].

The concept of artificial intelligence follows similarly: evidently, the incorporation of intelligence into machines is what constitutes artificial intelligence. Formally, it is defined as “the science and engineering of making intelligent machines, especially intelligent computer programs” [28], whereas Copeland [29] defines AI as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

2.2 Review of previous works on artificial intelligence

Since many readers of this chapter may want a high-level overview of the topic, the overall progression of the field of artificial intelligence is summarized chronologically below. This makes it simpler to draw links between the many AI sub-fields and to comprehend the incremental development of AI (see Table 1). These developments address only a portion of the whole topic and are listed here for convenience:

| Year | Development | Source |
| --- | --- | --- |
| 1923 | The play “Rossum’s Universal Robots” by Karel Čapek premiered in London, marking the first use of the word “robot” in English. | [15] |
| 1945 | The word “robotics” was coined by Columbia University alumnus Isaac Asimov. | [30] |
| 1950 | Alan Turing proposed the Turing Test as a measure of machine intelligence. Claude Shannon published a comprehensive analysis of chess playing as search. | [31] |
| 1956 | The term “artificial intelligence” was coined by John McCarthy. | [25] |
| 1958 | LISP, the artificial intelligence programming language created by John McCarthy, was released. | [25] |
| 1964 | Danny Bobrow’s thesis at MIT demonstrated that computers can accurately solve algebra word problems. | [28] |
| 1979 | The “Stanford Cart,” the first computer-controlled autonomous vehicle, was created. | [28] |
| 1984 | Dennett discussed the frame problem and its relation to the difficulty of giving robots common sense. | [15] |
| 1990 | Major advances in AI: significant demonstrations in machine learning; case-based reasoning; multi-agent planning; scheduling; data mining and web crawlers; natural language understanding and translation; vision and virtual reality; games. | [32] |
| 1997 | World champion Garry Kasparov was defeated by the “Deep Blue” chess program. | [25] |
| 2000 | Interactive robotic pets were commercialized. MIT created “Kismet,” a robot with an expressive face. | [15] |
| 2015 | The humanoid robot Sophia was developed. | [33] |
| 2016 | Microsoft released the chatbot Tay for interacting with Twitter users. | [15] |

Table 1.

Highlights of incremental developments in the AI field.

2.3 Artificial intelligence-apprehensions and challenges

2.3.1 A threat to world sustenance

Currently, the field of artificial intelligence is thriving. The days of writing code one line at a time are ending, and it is time for a change. The world is on its way to becoming intelligent: we will soon have smart homes, smart towns, and smart industries [16]. AI systems are already smarter than humans in several ways, including computing, driving automobiles, and playing chess and other strategic games. Scientists are working to create robots that mimic human behavior, and because such machines are self-reinforcing, only a limited amount of time remains before machines become more inventive than humans.

In the 1960s, the US Defense Department took an interest in AI research, which had previously focused on problem-solving and symbolic techniques. These important landmarks spurred further research and development in the field, leading to modern systems such as decision support systems (DSS) and smart computers. The frightening image of such robotics is still science fiction; although it may become reality in the future, we can prevent it from occurring if we regulate and establish appropriate strategies for controlling and configuring technological breakthroughs [4].

Artificially intelligent robots and computers are predicted to exceed humans in all aspects of life between 2020 and 2060, as previously indicated, and prominent scientists and technologists such as Elon Musk, Bill Gates, and Steve Wozniak have issued dire warnings about this prospect [16]. Moreover, these machines have the potential to be used harmfully to serve evil ends, making them possibly far more destructive than an atom bomb [7]. Terrorists, criminals, and religious extremists have always been drawn to and craved positions of authority, and the Pentagon’s and the White House’s computer networks have already been compromised, so there is no guarantee that robots will not be misused [12]. As a result, it is suggested that intelligent machines be distributed around the world instead of building massive, unmanageable computers.

2.3.2 The impact of artificial intelligence on jobs, employment, and social life

Jobs and unemployment are a major concern for the general population, with many predicting further deindustrialization [34]. Artificially intelligent systems and robotics will unquestionably modify job roles and may demand work tasks and competencies that are more thought-provoking but less oriented to technical or professional skills, as well as problem-solving and self-organizing duties [35]. Research on this subject is extensively available (see [34, 36, 37]).

On this view, the ultimate objective of cyber-physical systems and smart manufacturing is not to replace humans with machines, but to connect and work with humans to accomplish mass customization through their combined effort. A new generation of robots will take the place of low-skilled individuals and technical specialists who perform routine activities [35]. Because machines can anticipate and correct problems even before they occur, jobs requiring expert planning and monitoring will become more in demand [34, 37], and the mental requirements placed on employees will increase [37, 38]. The required credentials and talents of employees are, nevertheless, still a source of uncertainty [10, 39]. Even managers are expected to need extra abilities to interact with machines and meet the demands and problems of the new era of technology [5]. Therefore, academics and practitioners should also address the lack of competencies and abilities needed to scale up our industries in these specific areas.

Who will need to hire drivers if self-driving cars become commonplace in everyday life? To a similar extent, robots will take over the boring activities currently performed by humans, such as newsreaders and typists, as well as file clerks and tellers in banks and other financial institutions. One robot may work for 24 hours continuously without taking a break or changing shifts; for example, it can read the news for 24 hours straight. Consequently, it will ultimately replace people in routine jobs. In the meantime, humans will continue to be needed for jobs that cannot be automated, including jobs requiring social interaction and creativity, jobs involving physical examinations in the medical field, technical jobs (e.g., pipefitters and plumbers), and jobs requiring critical thinking [36].

2.4 Responsible innovation: a pulling strategy for the business arena

The preceding explanation demonstrates that there are increasing risks and uncertainties regarding the future effect of artificially intelligent robots. Apart from this, other technological advances, such as those in biotechnology and nanotechnology (some academics and scientists believe that the Covid-19 pandemic is also the result of such technological breakthroughs), have ensnared the whole world and left it powerless [4, 40]. Similarly, the literature discusses several other enormous challenges, such as poverty, climate change, and sustainability, all of which require extensive stakeholder involvement and dialog, as well as the formulation of principles and values to better understand the associated risks, challenges, and uncertainties [41]. Regrettably, some experts feel that critical thresholds have already been crossed, putting the earth’s life-sustaining system in jeopardy. Considering these dangers, there is an immediate need for continuous action to mitigate their impact on global health, security, and development [42].

The United Nations, the European Union, international organizations, and individual governments are all tasked with finding answers to these colossal problems. Numerous projects have been launched to involve companies as active participants and to foster collaboration between companies, the public sector, and civil society actors in pursuit of sustainable growth. As a result, industries are increasingly regarded as party to such societal concerns and are expected to seek answers as socially responsible organizations [43]. As previously discussed, several initiatives in Europe and the United States have been launched, including “Technological Assessment” organizations (TA), the “United States Office of Technology Assessment” (OTA), “Technological Assessment and Ethical, Legal, and Social Aspects of Emerging Sciences” (ELSA), and the “Netherlands Organization for Technology Assessment” (NOTA). The phrase “responsible research” was first used in the sixth framework programme in 2002 to create a growing link between ethics and technology worldwide. Later, the phrase “responsible research and innovation” (RRI) was established in Europe’s 7th framework plan in 2013 to foster public trust in scientific discoveries (“Regulation (EU) No 1291/2013”, 2013). Today, RRI is often regarded as encompassing these trends, as it covers both social and governance applications [22, 23, 24].

The framework offered by the authors mainly integrates many previously published strategies that have made significant contributions in various ways, as mentioned earlier. These strategies underscore the need to stimulate scientists’ reflexivity, extend the range of strategic alternatives, “open up,” and infuse a greater capacity for reflection within research work [44, 45, 46]; they include anticipatory governance [47, 48], technology assessment in all of its forms (constructive and real-time, for example; [49, 50]), upstream engagement [46], and socio-technical integration and midstream modulation [51, 52]. These various instances demonstrate the need for forethought, involvement, and integration [47]. Even where responsible innovation borrows openly from these earlier approaches, it does so with reason. Applied approaches from the fields of strategic innovation management and innovation studies (including concepts such as the democratization of innovation [53] and open innovation [54]) contribute equally significantly, emphasizing, for example, the role that users play in innovation and the use of mechanisms such as patents for innovation governance.


3. Proposed theoretical framework

3.1 Integrated responsible innovation framework

Physical resources such as land, machinery, manufacturing buildings, and equipment were once seen as critical to a company’s success during previous industrial periods. Neoclassical economic theory, influenced by Adam Smith’s worldview, held that physical assets were the most important source of wealth creation, and the exploitation of these resources proceeded under this theory. However, a number of academics (e.g., [55, 56, 57, 58]) argue that other trends in the financial world, such as the increasing value of services, creativity and innovation, knowledge, expertise in information and communication technologies, digitalization, and the flow of intellectual property, have shifted the focus of financial growth and wealth formation.

The new digital era has fundamentally altered the nature and techniques of manufacturing. Traditional procedures and physical resources are no longer a source of competitive advantage, since they are now transparent and susceptible to imitation [59]. As a result, intangible assets and talents, such as brand recognition, innovation and creativity, organizational culture, and design, may be used to gain a competitive advantage. Leonidou et al. [60] emphasize the importance of intangible assets and competencies in the quest for social and ethical acceptance; continuous innovation, cross-functional and stakeholder integration, and environmental strategies are among these skills. Furthermore, Kamasak [61] argues that physical and intangible resources should be employed jointly, supporting the resource-based view (RBV) assertion that resources cannot create value alone: it is a firm’s capabilities that gather, integrate, and manage its many resources.

Technological breakthroughs are evaluated for their ethical acceptability, sustainability, and social acceptance under the umbrella of “responsible innovation” [41, 62]. There is also a significant connection between responsible innovation and the resource-based view (RBV), since responsible innovation includes acquiring company resources and skills [63]. Competitive advantage and improved performance may be achieved by using an organization’s unique resources to produce a value-creating strategy not employed by any competitor; such resources are valuable, rare, inimitable, and non-substitutable [64]. Research by Scholten and Van der Duin [65] shows that a competitive advantage may be achieved if stakeholders and consumers work together with enterprises to implement sustainable and ethical manufacturing systems. This demonstrates that innovative thinking done responsibly is a valuable resource for gaining a competitive edge and improving performance [63].

Responsible innovation (RI) establishes a deliberative framework of stakeholder involvement as the core science governance mechanism, with stakeholders considered jointly and highly accountable for emerging technological innovations ([66], in press). It entails putting the firm’s resources and competencies to good use so that RI becomes the firm’s distinguishing competency. By establishing such a distinguishing capability, RI may help the company obtain a competitive advantage and improve its performance [63].

However, prior research on RI is only a precursor, and extensive empirical study is lacking, so little is known about its application for economic gains and advantages [67, 68]. Because of the uncertainty around RI’s influence on achieving a competitive edge, its practical application, and its economic advantages, businesses are reluctant to adopt RI methods. In fact, failing to incorporate a holistic approach while generating responsible innovation may prove to be a fruitless endeavor, given the unique and specific qualities of responsible innovation. We now discuss in detail the suggested integrated RI framework (see Figure 1).

Figure 1.

Integrated responsible innovation framework (source: Authors).

3.2 Dimensions of responsible innovation as guiding principles

Practitioners and academics have discussed and debated numerous RI dimensions in the literature, which can be split into administrative and scholarly dimensions. As stated by the European Commission, there are six dimensions: ethics, engagement, science education, gender equality, governance, and open access; as a rule of thumb, these are administrative in nature. Stahl [69] focused on realistically implementable dimensions such as actors, norms, and activities, whereas Stilgoe et al. [70] discussed four distinct aspects: anticipation, responsiveness, reflexivity, and inclusiveness. Stilgoe et al.’s [70] four-dimensional model is the focus of our investigation.

Indeed, these aspects are guiding principles that form the basis of the RI framework and its governance mechanism. Furthermore, these principles should be applied throughout the whole innovation process, not only at the product’s launch stage. A controlled science governance structure is therefore established, which promotes the notion of making processes responsible rather than allowing them to be unrestrained and unregulated. Let us take a look at each of these dimensions one by one.

3.2.1 Anticipation

In simple terms, anticipation is about identifying and forecasting the potential hazards and harms that may be caused by a technological innovation. The tools used for anticipation may include technology assessment, foresight, vision assessment, and horizon scanning. These allow anticipators to understand future technological dynamics in a timely manner, rather than acting too late to suggest a constructive way out for society. Looking at the future well ahead of time allows resources to be allocated towards responsible and desirable future directions [70].

3.2.2 Inclusion

Inclusion refers to allowing selected public groups, both economic and non-economic societal members, to be part of stakeholder groups so that they may convey the voice of the general public, with the ultimate objective of directing science and innovation projects toward societal benefit. Scientific innovations also gain legitimacy by including public groups in their processes. In this way, public opinion, involvement, and governance mechanisms may be established to keep scientific innovation within limits for the public advantage. To achieve this, several activities and programs may be arranged, such as public conferences, dialogs, gatherings, citizens’ juries, focus groups, and deliberative polling. Accordingly, scientific advisory committees may be constituted through a multi-stakeholder partnership, and a governance mechanism may be established [70].

3.2.3 Reflexivity

Reflexivity refers to the self-evaluation, self-judgment, and accountability of individuals and institutions regarding their activities, assumptions, and commitments, so that they do not cross the limits defined in their conscious and written policies and frameworks. One should be knowledgeable enough to scrutinize harmful acts and processes through a self-governing mechanism. A moral and ethical value-based system thus supervises science and innovation research, developing an internal governing mechanism that binds scientists and organizations to observe moral, ethical, and societal responsibilities. The next level of reflexivity comes through the written code of conduct and policies of the organization or project, which acts as an external governing mechanism of reflexivity. Reflexivity therefore demands drawing a boundary of moral responsibilities that may not be crossed in the name of merely professional duties [70].

3.2.4 Responsiveness

Responsiveness emphasizes combining, inculcating, and implementing the three previously presented dimensions of anticipation, inclusion, and reflexivity throughout research and innovation activities, allowing them to influence the line of action, course, programs, and relevant policies. Moreover, it involves taking action in response to emerging knowledge and perspectives and to the values of various stakeholders and the public [70].
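For readers who think in code, the four dimensions can be pictured as review gates applied at every stage of an innovation process rather than only at launch. The following Python sketch is purely illustrative; the class and method names (`StageReview`, `sign_off`) are our own assumptions, not part of any RI standard.

```python
from dataclasses import dataclass, field

# Illustrative only: the four RI dimensions modeled as sign-off gates
# that must all be cleared at every stage of the innovation process.
DIMENSIONS = ("anticipation", "inclusion", "reflexivity", "responsiveness")

@dataclass
class StageReview:
    stage: str  # e.g. "design", "prototype", "launch"
    passed: dict = field(default_factory=lambda: {d: False for d in DIMENSIONS})

    def sign_off(self, dimension: str) -> None:
        # Record that one dimension has been addressed at this stage.
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown RI dimension: {dimension}")
        self.passed[dimension] = True

    def is_responsible(self) -> bool:
        # A stage counts as responsible only when every dimension is
        # addressed, reflecting the point that RI applies throughout
        # the process, not just at launch.
        return all(self.passed.values())

review = StageReview(stage="prototype")
for d in DIMENSIONS:
    review.sign_off(d)
print(review.is_responsible())  # True
```

The design choice here is deliberate: responsiveness is a gate like the others, but in practice it is the step that feeds the outcome of the other three back into the project's course and policies.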


4. Outcomes

4.1 Integration of four dimensions of responsible innovation

When it comes to responsible innovation, it is important to remember that the term "responsible innovation" encompasses much more than the narrowly defined concepts popularized in the past, particularly in Europe, such as technology assessment and the Ethical, Legal, and Social Aspects (ELSA) of emerging sciences. In addition to social, ethical, and environmental considerations, the integrated elements encompass governance applications [22, 23, 24]. The focus is not on outcomes but on processes. This mechanism therefore serves as a scientific governance mechanism, promoting the notion that processes should be managed properly rather than left uncontrolled and irresponsible [71].

For responsible innovation to be effective, RI concepts and dimensions must be introduced within the company. To this end, both top-down and bottom-up techniques [72] are recommended. Defining and translating RI concepts and dimensions into clear vision and mission statements for the organization as a whole yields an effective top-down strategy for driving the company's daily operations while making optimal use of its resources. Meanwhile, a bottom-up approach to creating a community-based metric has indicated that broad community engagement is the key to promoting sustainable, competitive enterprises [73].

The incorporation of the four dimensions would allow for the anticipation and forecasting of future technological advancements, as well as the assessment and development of technologically innovative products with the participation of relevant stakeholders. It would also allow for a self-destruction mechanism in products that operates automatically upon corrupt behavior, for example by an artificially intelligent robot; a self-accountability mechanism that keeps the organizational production system on the right course and in line with the stated commitments made to various stakeholders; and, finally, the determination of the future-oriented technological needs of the business and its stakeholders, combining the previously stated dimensions, policies, and programs for the sustainable and competitive growth of the organization [70, 74].

The integration of the RI dimensions would ensure that the organization responds to grand societal challenges and serves the public interest, increasing its ethical and societal acceptability and leading it towards sustainable competitive advantage and higher firm performance. Since the firm's main goal is to maximize its shareholders' profitability, the firm can fulfill its targets through a responsible innovation framework ([66], in press).

4.2 Helices of the quadruple helix framework as economic and non-economic stakeholders: responsible innovation as a deliberative stakeholder involvement approach

Stakeholder and public involvement in innovation processes is well recognized for its value and utility [41]. Constructive engagement of stakeholders with competing priorities and value systems is needed to better understand the challenges, threats, and uncertainties associated with increasingly complex technologies such as nanotechnology, biotechnology, AI, big data, and those involved in IR 4.0. Such engagement helps stakeholders learn from one another and improves collaboration among them. This knowledge enables them to reach common goals and decisions and to determine the necessary courses for future technical advancements. Because the grand challenges cut across several social sectors (civil society, the commercial sector, and government), solving them necessitates the active engagement of many stakeholders. The goal of stakeholder engagement is to better understand the various viewpoints and interests of stakeholders and to actively shape the future direction of research and development. It is also widely accepted that stakeholder engagement is a vital way of considering, assessing, and determining the aims and outcomes of innovation [75].

The concept of "collective stewardship" lies at the heart of responsible innovation [70]. Economic and non-economic stakeholders would be actively involved in order to achieve cognitive and moral legitimacy, as well as ethical and social acceptability, in accordance with the framework's guiding principles. These stakeholders have been divided into four helices as per the quadruple helix framework: 1) academia, 2) government, 3) industry, and 4) society [76]. Although the helices were derived from the quadruple helix framework, the RI view of stakeholders differs slightly from the quadruple helix perspective, so we examine the RI perspective here. In the quadruple helix, public and private entities work together to transform various inputs into beneficial outputs for themselves and others. In an RI setting, the focus is on the mechanisms of partnership: the players participating (public and private), the pooled resources, the operations, and the outcomes of the processes themselves.

Firms have to interact with a diverse set of stakeholders, including suppliers, consumers, workers, governments, universities, and non-governmental organizations. Who counts as a legitimate stakeholder, and why, is an open-ended discussion, and there are several theoretical viewpoints on stakeholder engagement, including normative, descriptive, and instrumental approaches. Responsible innovation takes a normative perspective on stakeholder engagement. On the normative view, stakeholders have an inherent value and a valid interest in the systems and good of the business, and the organization must take these concerns into account. Stakeholder engagement can be described as "practices that an organization undertakes to involve stakeholders in meaningful organizational activities." It involves the exchange of knowledge and cooperation between stakeholders, affecting knowledge flows in both directions: from stakeholders to the firm and from the firm to stakeholders. One way to facilitate knowledge exchange and two-way conversation is through dialog [41]. Stakeholder dialog offers an understanding of stakeholders' interests, strengthens shared awareness, and allows win-win scenarios to be generated. Sharing knowledge and expertise is also a means of building trust between stakeholders; since such partnerships are trust-based, trust-building practices such as coordination and collaboration are prerequisites.

According to the literature on responsible innovation (RI), it is critical to involve people from all walks of life, not only those with a vested financial interest in the outcome. For example, Brand and Blok [22] take a critical look at the tensions between openness and efficiency in manufacturing and find that participation alone is not adequate. Stakeholder engagement and management is a relatively new area of research in RI, and thus there is as yet no empirical evidence on how to do it.

4.2.1 Ground rules for responsible innovation relationships

The ground rules refer to the four fundamental rules outlined outside the stakeholder boxes inside the bigger circle in rectangular shapes (See Figure 1). These rules form the basis of the relationship between RI dimensions and stakeholders throughout the RI process. These are 1) Trust, 2) Co-responsibility, 3) Transparency, and 4) Interaction [41].

Several essential concerns that need to be addressed when stakeholders are involved in responsible innovation challenges have been identified in the literature, for example when collaboration with a rival stakeholder raises fears of leaking confidential knowledge and expertise, or when power imbalances exist [41, 77]. For these reasons, the aforementioned ground rules have been set out as rules of the game. These guidelines would force the organization to adopt a number of critical procedures to avert a potentially disastrous outcome. For example, "Trust" drives the company towards an open culture, equal consideration of the interests of stakeholders and their representatives, acceptance of disputes, alignment of partners' expectations, experience, and identity, and the presentation of accurate and reliable information. "Interaction" leads to discourse and relationship-building, formal and informal socialization methods, commitment, and the selection of aligned partners. "Transparency" involves developing an open culture, sharing important information and expertise, being trustworthy, implementing semi-formal protection techniques and intellectual property management, and taking advantage of first-mover opportunities. Finally, because all participants are democratically included throughout the whole innovation process, "Co-responsibility" leads to joint accountability for all initiatives.

To summarize, openness to facts and knowledge is crucial for analyzing the social, ethical, and environmental implications of innovation processes from the perspective of many stakeholders. Stakeholders may use this knowledge and skill to better understand the priorities and goals of innovation pathways. Through contact with many stakeholders, responsible innovators are able to respond to the needs and concerns of society as a whole. In essence, RI builds trust, and through their reciprocal responsiveness, stakeholders share responsibility for the innovation pathway.


5. Implications of responsible innovation for artificial intelligence

Due to the complexity of artificial intelligence technologies, as well as their future social and ethical consequences, a strategy is needed that is capable of learning, incorporating external voices and reflection, and bringing together diverse stakeholder groups. RI takes this approach. Big data and artificial intelligence research and development have the potential to provide tremendous social and economic benefits. However, such research and innovation will also produce a slew of unfavorable outcomes, a fact well known to researchers, funders, politicians, and the general public. What is required is a more comprehensive understanding of the strengths of evolving technological innovations, their potential consequences, how relevant stakeholders see them, and the appropriate responses. The responsible innovation framework may help resolve this issue by involving various stakeholders in the collective stewardship and sustainability of the world.

Computer and engineering societies, governmental regulators, university experts, civil societies, and ethical organizations can jointly formulate rules and regulations regarding data sharing, storage, transparency, and adherence to privacy laws, regulations, and technical standards. Governments of various countries have a vital role to play in defining such regulations. Responsible firms deliberately adopt responsible innovation frameworks to gain societal desirability and ethical acceptability. They formulate their internal control mechanisms for sharing, storing, and publicizing users' data with the consent of relevant stakeholders. They do not want scandals like the Facebook–Cambridge Analytica data scandal, after which a firm must apologize for actions taken merely to earn money, actions its users never approve of. Responsible firms build artificially intelligent machines in a controlled environment and under a controlled mechanism, with the consent of relevant stakeholders. The RI framework binds these firms to seek input from their stakeholders at each step of product development, so that stewardship and responsibility are shared collectively. Since money is no longer the only consideration, these firms do not gamble with the safety and security of their customers or with the peace of the world.

In general, the media also needs to play its role as a stakeholder representing society and the community at large, spreading awareness of the efficacy and drawbacks of such technological innovations. All stakeholders should bear in mind from the start that AI systems are meant to serve peace; however, they must take steps to recognize and resolve threats, such as damage to users' lives, bodies, or property caused by actuators or other devices, and to ensure safety and security. Such risks should be considered over the lifespan of AI systems and verified where appropriate and feasible; extensive testing should be conducted in real-world environments to ensure the systems are fit for use and meet product specifications; and all stakeholders should collaborate closely to maintain and develop the applications' efficiency, protection, usability, and security. It should be mandatory for all AI machines that, upon improper functioning, such as working against humans, the machine trigger a self-destruction mechanism.
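The contingency mechanism called for above can be pictured, in highly simplified form, as a software watchdog that blocks forbidden actions before they reach any actuator and then halts the system irreversibly. The Python sketch below is an illustrative assumption only; the class and action names (`SafetyWatchdog`, `check_action`) are hypothetical, and a real safety mechanism would involve far more than this.

```python
# Hypothetical sketch of a contingency ("self-destruct") mechanism:
# every proposed action is vetted before reaching an actuator, and a
# forbidden action triggers an irreversible halt of the whole system.
class ShutdownTriggered(Exception):
    pass

class SafetyWatchdog:
    def __init__(self, forbidden_actions):
        self.forbidden = set(forbidden_actions)
        self.halted = False

    def check_action(self, action: str) -> str:
        # Vet each proposed action; once halted, nothing passes.
        if self.halted:
            raise ShutdownTriggered("system already halted")
        if action in self.forbidden:
            self.halted = True  # irreversible stop, the self-destruct analogue
            raise ShutdownTriggered(f"forbidden action blocked: {action}")
        return action

watchdog = SafetyWatchdog(forbidden_actions={"harm_user"})
print(watchdog.check_action("move_arm"))   # move_arm
try:
    watchdog.check_action("harm_user")
except ShutdownTriggered as e:
    print(e)                               # forbidden action blocked: harm_user
print(watchdog.halted)                     # True
```

The key design choice, mirroring the chapter's argument, is that the halt is irreversible and sits outside the AI's own decision loop, so a misbehaving system cannot talk its way past its own safeguard.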


6. Conclusion and future research

The purpose of highlighting the fears and concerns associated with artificially intelligent robots is to help the world prepare, plan, allocate resources, strategize, execute, and regulate against the future negative repercussions of such a profound transformation affecting everyone's life. The authors are convinced that artificial intelligence technology has the potential to bring enormous benefits, such as ease, flexibility, and accuracy, as well as improvement and advancement in the daily lives of humans. However, the research expresses grave concern about the sustainability of the human race and views unregulated, uncontrolled, and irresponsible technological innovation as a significant danger. Thus, not only should we be mindful of the undesired results, but we should also take significant and coordinated steps to combat the threats.

The study proposes developing a dedicated and holistic strategy covering all aspects of Industrial Revolution 4.0, particularly the much-heralded technologies of big data and artificial intelligence. This should be done in consultation with various non-economic and economic stakeholders, through deliberation of all such issues and technological innovations. The developed strategy should not be oriented around industrial interests but rather around the sustainability, well-being, and welfare of the entire society. Organizations may use the RI dimensions as governance mechanisms and guiding principles, assisting them in achieving social desirability, ethical acceptability, and a sustainable competitive advantage. However, a legal framework is required to assure the strategy's implementation, and personal autonomy should take precedence in developed communities. Most importantly, businesses should be held accountable for their products. It must be assured that artificially intelligent super-machines have a contingency mechanism in place; for example, they should self-destruct if they go out of control. Thus, a number of challenges can be overcome through hard work and the implementation of technological solutions, norms and values, rules and processes, and a legal framework.

Finally, this research is theoretical in nature, suggesting a framework for responsible innovation. Future studies may undertake empirical research on the suggested framework, or another framework, to demonstrate the influence of responsible innovation on businesses developing artificially intelligent robots.

References

1. de Sousa Jabbour ABL, Jabbour CJC, Foropon C, Godinho Filho M. When titans meet: Can industry 4.0 revolutionize the environmentally-sustainable manufacturing wave? The role of critical success factors. Technological Forecasting and Social Change. 2018;132:18-25
2. Zhong RY, Xu X, Klotz E, Newman ST. Intelligent manufacturing in the context of industry 4.0: A review. Engineering. 2017;3(5):616-630
3. Bartodziej CJ. The Concept Industry 4.0: An Empirical Analysis of Technologies and Applications in Production Logistics. BestMasters. Wiesbaden: Springer Fachmedien; 2017. DOI: 10.1007/978-3-658-16502-4_1
4. Memon KR, Ooi SK. The dark side of industrial revolution 4.0: Implications and suggestions. Academy of Entrepreneurship Journal. 2021;27(S2):1-18
5. Piccarozzi M, Aquilani B, Gatti C. Industry 4.0 in management studies: A systematic literature review. Sustainability. 2018;10:3821. DOI: 10.3390/su10103821
6. Ekudden E. Five technology trends augmenting the connected society. Ericsson Technology Review. 2018. pp. 2-12
7. Winfield AFT, Jirotka M. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A. 2018;376:20180085. DOI: 10.1098/rsta.2018.0085
8. Wang L, Wang G. Big data in cyber-physical systems, digital manufacturing and industry 4.0. International Journal of Engineering and Manufacturing. 2016;4:1-8. DOI: 10.5815/ijem.2016.04.01
9. Jazdi N. Cyber physical systems in the context of industry 4.0. In: 2014 IEEE International Conference on Automation, Quality and Testing, Robotics. IEEE; 2014. pp. 1-4. DOI: 10.1109/AQTR.2014.6857843
10. Müller JM, Kiel D, Voigt KI. What drives the implementation of industry 4.0? The role of opportunities and challenges in the context of sustainability. Sustainability. 2018a;10(1):247
11. Wang P. On defining artificial intelligence. Journal of Artificial General Intelligence. 2019;10(2):1-37
12. Helbing D. Societal, economic, ethical and legal challenges of the digital revolution: From big data to deep learning, artificial intelligence, and manipulative technologies. In: Towards Digital Enlightenment. Cham: Springer; 2019. pp. 47-72
13. Monett D, Lewis CW. Getting clarity by defining artificial intelligence: A survey. In: 3rd Conference on "Philosophy and Theory of Artificial Intelligence". Cham: Springer; 2017. pp. 212-214
14. Russell SJ, Norvig P, Davis E. Artificial Intelligence: A Modern Approach. 3rd ed. Upper Saddle River, NJ: Prentice Hall; 2009
15. Perez JA, Deligianni F, Ravi D, Yang G-Z. Artificial intelligence and robotics. arXiv preprint arXiv:1803.10813. 2018
16. Helbing D, Frey BS, Gigerenzer G, Hafen E, Hagner M, Hofstetter Y, et al. Will democracy survive big data and artificial intelligence? In: Towards Digital Enlightenment. Cham: Springer; 2019. pp. 73-98
17. Müller JM, Buliga O, Voigt KI. Fortune favors the prepared: How SMEs approach business model innovations in industry 4.0. Technological Forecasting and Social Change. 2018b;132:2-17
18. Cath C, Wachter S, Mittelstadt B, Taddeo M, Floridi L. Artificial intelligence and the 'good society': The US, EU, and UK approach. Science and Engineering Ethics. 2018;24(2):505-528
19. Gordan M, Chao OZ, Sabbagh-Yazdi SR, Wee LK, Ghaedi K, Ismail Z. From cognitive bias toward advanced computational intelligence for smart infrastructure monitoring. Frontiers in Psychology. 2022;13:846610
20. Buhmann A, Fieseler C. Towards a deliberative framework for responsible innovation in artificial intelligence. Technology in Society. 2021;64:101475
21. Dreyer M, Chefneux L, Goldberg A, von Heimburg J, Patrignani N, Schofield M, et al. Responsible innovation: A complementary view from industry with proposals for bridging different perspectives. Sustainability. 2017;9:1719. DOI: 10.3390/su9101719
22. Brand T, Blok V. Responsible innovation in business: A critical reflection on deliberative engagement as a central governance mechanism. Journal of Responsible Innovation. 2019;6(1):4-24
23. Burget M, Bardone E, Pedaste M. Definitions and conceptual dimensions of responsible research and innovation: A literature review. Science and Engineering Ethics. 2017;23(1):1-19
24. Chatfield K, Borsella E, Mantovani E, Porcari A, Stahl B. An investigation into risk perception in the ICT industry as a core component of responsible research and innovation. Sustainability. 2017;9(8):1424
25. Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. Malaysia: Pearson Education Limited; 2016
26. Legg S, Hutter M. A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications. 2007;157:17
27. Gottfredson LS. Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence. 1997;24(1):13-23
28. McCarthy J. From here to human-level AI. Artificial Intelligence. 2007;171(18):1174-1182
29. Copeland BJ. Artificial intelligence. In: Encyclopaedia Britannica. 2019. Available from: https://www.britannica.com/technology/artificial-intelligence [Accessed: August 26, 2019]
30. Wooldridge M, Jennings NR. Intelligent agents: Theory and practice. The Knowledge Engineering Review. 1995;10(2):115-152
31. Turing AM. Computing machinery and intelligence. In: Parsing the Turing Test. Dordrecht: Springer; 2009. pp. 23-65
32. Fogel DB. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. New York: John Wiley and Sons; 2006
33. Sapre K. Impact of artificial intelligence on society. Mahratta. 2019. pp. 45-49
34. Dombrowski U, Wagner T. Mental strain as field of action in the 4th industrial revolution. Procedia CIRP. 2014;17:100-105
35. Horváth D, Szabó RZ. Driving forces and barriers of industry 4.0: Do multinational and small and medium-sized companies have equal opportunities? Technological Forecasting and Social Change. 2019;146:119-132
36. Frey CB, Osborne MA. The future of employment: How susceptible are jobs to computerization? Technological Forecasting and Social Change. 2017;114:254-280. DOI: 10.1016/j.techfore.2016.08.019
37. Kiel D, Müller JM, Arnold C, Voigt KI. Sustainable industrial value creation: Benefits and challenges of industry 4.0. International Journal of Innovation Management. 2017;21(08):1740015
38. Lasi H, Fettke P, Kemper HG, Feld T, Hoffmann M. Industry 4.0. Business and Information Systems Engineering. 2014;6(4):239-242
39. Kiel D. What do we know about "Industry 4.0" so far? In: Proceedings of the International Association for Management of Technology Conference. 2017;26:1-22
40. Schroeder D. RI: A drain on company resources or a competitive advantage? In: Responsible Innovation. Dordrecht: Springer; 2020. pp. 51-69
41. Blok V, Hoffmans L, Wubben EFM. Stakeholder engagement for responsible innovation in the private sector: Critical issues and management practices. Journal on Chain and Network Science. 2015;15(2):147-164
42. Scherer AG, Voegtlin C. Corporate governance for responsible innovation: Approaches to corporate governance and their implications for sustainable development. Academy of Management Perspectives. 2020;34(2):182-208
43. Lubberink R, Blok V, Van Ophem J, Omta O. Lessons for responsible innovation in the business context: A systematic literature review of responsible, social and sustainable innovation practices. Sustainability. 2017;9(5):721
44. Rip A, Misa TJ, Schot J, editors. Managing Technology in Society. London: Pinter Publishers; 1995
45. Stirling A. Opening up or closing down? Analysis, participation and power in the social appraisal of technology. In: Science and Citizens: Globalization and the Challenge of Engagement. 2005;2:218
46. Wilsdon J, Wynne B, Stilgoe J. The Public Value of Science: Or How to Ensure that Science Really Matters. London: Demos; 2005
47. Barben D, Fisher E, Selin C, Guston DH. Anticipatory governance of nanotechnology: Foresight, engagement, and integration. In: Hackett EJ, Amsterdamska O, Lynch M, Wajcman J, editors. The Handbook of Science and Technology Studies. 3rd ed. Cambridge, MA: MIT Press; 2008. pp. 979-1000
48. Karinen R, Guston DH. Toward anticipatory governance: The experience with nanotechnology. In: Governing Future Technologies. Dordrecht: Springer; 2019. pp. 217-232
49. Guston DH, Sarewitz D. Real-time technology assessment. Technology in Society. 2002;24(1-2):93-109
50. Schot J, Rip A. The past and future of constructive technology assessment. Technological Forecasting and Social Change. 1997;54(2-3):251-268
51. Fisher E, Mahajan RL, Mitcham C. Midstream modulation of technology: Governance from within. Bulletin of Science, Technology and Society. 2006;26(6):485-496
52. Schuurbiers D. What happens in the lab: Applying midstream modulation to enhance critical reflection in the laboratory. Science and Engineering Ethics. 2011;17(4):769-788
53. Von Hippel E. Democratizing innovation: The evolving phenomenon of user innovation. Journal für Betriebswirtschaft. 2005;55(1):63-78
54. Chesbrough H. Open Innovation. Boston, MA: Harvard Business School Press; 2003
55. Ambrosini V, Bowman C. What are dynamic capabilities, and are they a useful construct in strategic management? International Journal of Management Reviews. 2009;11(1):29-49
56. Hitt MA, Ireland RD, Camp SM, Sexton DL. Strategic entrepreneurship: Entrepreneurial strategies for wealth creation. Strategic Management Journal. 2001;22(6/7):479-491
57. Kor Y, Mesko A. Dynamic managerial capabilities: Configuration and orchestration of top executives' capabilities and the firm's dominant logic. Strategic Management Journal. 2013;34(2):233-244
58. Surroca J, Tribo JA, Waddock S. Corporate responsibility and financial performance: The role of intangible resources. Strategic Management Journal. 2010;31(5):463-490
59. Galbreath J, Galvin P. Firm factors, industry structure and performance variation: New empirical evidence to a classic debate. Journal of Business Research. 2008;61(2):109-117
60. Leonidou LC, Christodoulides P, Kyrgidou LP, Palihawadana D. Internal drivers and performance consequences of small firm green business strategy: The moderating role of external forces. Journal of Business Ethics. 2017;140:585-606. DOI: 10.1007/s10551-015-2670-9
61. Kamasak R. The contribution of tangible and intangible resources and capabilities to a firm's profitability and market performance. European Journal of Management and Business Economics. 2017;26(2):252-275
62. Stahl BC, Obach M, Yaghmaei E, Ikonen V, Chatfield K, Brem A. The responsible research and innovation (RRI) maturity model: Linking theory and practice. Sustainability. 2017;9:1036. DOI: 10.3390/su9061036
63. Lees N, Lees I. Competitive advantage through responsible innovation in the New Zealand sheep dairy industry. International Food and Agribusiness Management Review. 2018;21(4):505-524
64. Salam MA. Analyzing manufacturing strategies and industry 4.0 supplier performance relationships from a resource-based perspective. Benchmarking: An International Journal. 2019. Vol. ahead-of-print, No. ahead-of-print. DOI: 10.1108/BIJ-12-2018-0428
65. Scholten VE, Van der Duin PA. Responsible innovation among academic spin-offs: How responsible practices help developing absorptive capacity. Journal on Chain and Network Science. 2015;15(2):165-179
66. Memon KR, Ooi SK. Rediscovering responsible innovation: A road-map towards corporate sustainability. International Journal of Sustainable Strategic Management. 2022. Vol. ahead-of-print, No. ahead-of-print (in press)
67. Lubberink R, Blok V, van Ophem J, Omta O. Responsible innovation by social entrepreneurs: An exploratory study of values integration in innovations. Journal of Responsible Innovation. 2019;6(2):179-210
68. Ribeiro BE, Smith RDJ, Millar K. A mobilising concept? Unpacking academic representations of responsible research and innovation. Science and Engineering Ethics. 2017;23(1):81-103
69. Stahl BC. Responsible research and innovation: The role of privacy in an emerging framework. Science and Public Policy. 2013;40(6):708-716. DOI: 10.1093/scipol/sct067
70. Stilgoe J, Owen R, Macnaghten P. Developing a framework for responsible innovation. Research Policy. 2013;42:1568-1580. DOI: 10.1016/j.respol.2013.05.008
71. Gwizdała J, Śledzik K. Responsible research and innovation in the context of university technology transfer. Acta Universitatis Lodziensis. Folia Oeconomica. 2017;2(328). DOI: 10.18778/0208-6018.328.04
72. Ooi SK, Amran A, Yeap JA. Defining and measuring strategic CSR: A formative construct. Global Business and Management Research. 2017;9:250-265
73. Ooi SK, Ooi CA, Memon KR. The role of CSR oriented organizational culture in eco-innovation practices. World Review of Entrepreneurship, Management and Sustainable Development. 2020;16(5):538-556
74. Blok V, Lubberink R, van den Belt H, Ritzer S, et al. Challenging the ideal of transparency as a process and as an output variable of responsible innovation: The case of "The Circle". In: Responsible Research and Innovation. Routledge; 2019. pp. 225-244
75. Blok V. Look who's talking: Responsible innovation, the paradox of dialogue, and the voice of the other in communication and negotiation processes. Journal of Responsible Innovation. 2014;1(2):171-190
76. Carayannis EG, Campbell DF. 'Mode 3' and 'Quadruple Helix': Toward a 21st-century fractal innovation ecosystem. International Journal of Technology Management. 2009;46(3-4):201-234
77. van de Poel I, Asveld L, Flipse S, Klaassen P, Kwee Z, Maia M, et al. Learning to do responsible innovation in industry: Six lessons. Journal of Responsible Innovation. 2020;7(3):697-707
