Open access

Introductory Chapter: Artificial Intelligence in Healthcare – Where Do We Go from Here?

Written By

Stanislaw P. Stawicki, Thomas J. Papadimos, Michael Salibi and Scott Pappada

Published: 13 December 2023

DOI: 10.5772/intechopen.111823


“When you outsource the production of something, you will gradually lose the knowledge and skills to make or produce it. What then, one might ask, will happen when we ‘outsource’ intelligence to another entity?”

- Stanislaw P. Stawicki

1. Introduction

Human history is full of examples where new inventions have created significant disruption, dividing people into three broadly defined groups – early adopters and proponents, opponents, and the ambivalent [1]. Looking back at the relatively recent history of the great industrial revolution in Europe, it was not uncommon for opponents to attack and destroy new factories and machines, believing that these technological advances would eventually cost them their jobs and even entire professions [2]. As recently as the mid-1980s, a group of mathematics teachers held a protest against the use of calculators in schools [3]. Fast-forward to today: calculators are an integral part of our students’ mathematics armamentarium!

Not surprisingly, our approach to artificial intelligence (AI) seems to be following a similar path. It is probably fair to say that most people are not fully aware of current (and thus future) benefits, limitations, and threats related to AI. Within medicine in general, there is little awareness of what AI actually entails, and what it is capable of at this time. It is this current state that serves as our “starting point” in the emerging debate on AI in medicine, including its integration, projected influence, and a variety of other considerations that are not dissimilar to past technology adoption paradigms.

In a very gradual and stealthy way, artificial intelligence (AI) and machine learning (ML) are becoming part of our everyday lives. This “randomly systematic” adoption process is putting humanity face-to-face with something never previously directly known to our civilization – an intelligence that may (and likely will) exceed our own. With the advent of modern computing capabilities, AI has evolved to a point where it can be integrated into everyday applications. Not surprisingly, it has been gradually introduced into various subdomains within the healthcare industry in recent years [4]. As a result, we will likely see significant shifts in education, clinical treatments and approaches, stakeholder expectations, and responsibilities – both in type and scope – as well as the potential redefinition of jobs and other typical employee characteristics across the healthcare space [4, 5].

In this chapter, we will focus on some of the most profound challenges facing humanity as the “human-AI relationship” approaches the so-called “technological singularity” – a term borrowed from the astrophysical concept of the “black hole” that denotes a point beyond which there is “no way back” to the previous state of affairs. In the case of a black hole, the gravitational force surrounding the super-massive object becomes so powerful that not even light can escape. In the case of AI, “singularity” refers to a point where AI-based technology is sufficiently evolved to essentially take over “control operations” of human civilization [6, 7]. Alternative views describe both “plurality” and “integration” as possible scenarios, where humans and AI either co-exist synergistically (plurality) [8, 9] or even merge successfully (e.g., human-machine hybridization) [10, 11].

As we explore the realities of this new world, with omnipresent AI and the growing need for human adaptation, change, and caution, the issues at hand will likely become less and less “technological,” gravitating instead toward the ethical and spiritual domains.

1.1 Destructive potential of AI

There are many science fiction movies highlighting the potential dangers of improperly implemented AI – just a few examples include the “Terminator” series, “Star Trek: Voyager,” “The Matrix” trilogy, and “I, Robot.” The most common themes across these artistic works include machines “taking over” for humans as a form of “misguided stewardship”; the objectification of humans as “destructive and dangerous,” followed by active control efforts; and finally, the most extreme form of “AI dominance,” where highly evolved AI “machine hives” determine that humanity must be eliminated in its entirety [12, 13, 14, 15, 16].

At a less physically destructive level, but perhaps equally problematic in its extent and implications, the use of AI/ML in misdirected “societal control” efforts may represent another formidable challenge. For example, what would be required to stop a malignant governmental and/or regulatory entity that possesses powerful AI/ML tools, combined with omnipresent social media platforms, from abusing that tremendous power to misinform, manipulate, and eventually subdue entire populations [17, 18, 19, 20, 21, 22]? Such concerns have been highlighted by Elon Musk and associates in their open letter, “Pause Giant AI Experiments,” which now has nearly 6000 signatories from all walks of life [23]. The letter is blunt in stating that AI systems with human-competitive intelligence pose a risk to our civilization. It goes on to emphasize that the risks of powerful AI systems are likely unmanageable at this time, and that appropriate oversight, tracking, and regulatory frameworks must be put into place. The letter specifically addresses the inception of systems substantially more powerful than the widely known GPT-4 (generative pretrained transformer 4) [24].

A further concern involves the use of AI in the spheres of political and social influence, where there are ominous tidings regarding truth and transparency. In one instance, researchers at Stanford University examined whether AI could influence citizens’ views on political issues such as assault weapons, a carbon tax, and parental leave – and it certainly did [25]. In the case of ChatGPT in higher education, the AI-based system was able to readily pass exams at prestigious law and business schools, not to mention the United States Medical Licensing Examination [26, 27, 28, 29]. Such issues are more than concerning, and they clearly constitute potential threats to civilized society and the ascent of man.

1.2 Potential benefits of AI

As much as there are negatives to wider AI implementation, there are certainly amazing potential benefits to be derived from properly harnessed AI capabilities – from reaching previously unimaginable levels of efficiency within our established processes and workflows, to human-AI hybridization that could actively enable much longer functional (and meaningful) longevity, new disease cures, and solutions to both physical and mental disability [11, 30]. Furthermore, AI will allow the simulation of unusual or theoretical situations, such as legal cases before judges and negotiations with business competitors. In fact, there are multiple domains in which AI will benefit us (Table 1) [31]. There are also estimates that AI-driven innovation may contribute nearly $13 trillion to the world’s economy by 2030 [32].

Automation
Productivity
Solving Complex Problems
Decision-Making
Economy
Managing Repetitive Tasks
Defense
Disaster Management
Personalization
Lifestyle

Table 1.

Potential benefits of AI in our lives and work [31].

Perhaps the most significant benefits of AI will be realized in medicine and healthcare. Today’s society faces staffing shortages of healthcare professionals [33, 34, 35]. In the absence of sufficiently staffed healthcare organizations and institutions, current healthcare practitioners will require support to achieve optimal patient safety and levels of care provision. To this end, machine learning and AI-based systems offer the potential to improve the monitoring and alerting of healthcare providers, so that the patients in greatest need are appropriately resourced.

It is important to note that although the terms machine learning and AI are often used interchangeably, they are not one and the same. Machine learning involves data science and the development of models, trained on large datasets, that serve a particular function – for example, supporting diagnosis [36], performing time-series prediction of therapeutic set points, or predicting patient outcomes [37] such as readmission [38] or mortality [39]. AI, by contrast, refers to a system that can ‘think and act’ on its own with some degree of autonomy. The two are closely related: machine learning models feed into AI-based systems, which leverage their outputs to perform autonomous tasks. AI is perhaps best explained by looking at some of its early use in video games, where antagonists (characters) are programmed with AI to complete a primary task (e.g., stop the player from achieving some goal). In medicine and healthcare, the ultimate end goal is a system in which patient data can be monitored continuously over time and some aspects of treatment and care can be automated by AI. An example would be leveraging machine learning to predict glucose levels in patients with type 1 diabetes [40], and then leveraging those predictions to automatically and dynamically adjust insulin delivery to maintain tight glycemic control. AI can be incorporated into such a system to learn patterns in lifestyle, activity, and other pertinent variables, automatically adapting insulin delivery to further optimize glycemic control over time despite a patient’s chaotic and unpredictable lifestyle. This is only one example of an application illustrating the power of machine learning and AI in healthcare.
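The “ML predicts, AI acts” division of labor described above can be sketched as a deliberately simplified toy example. The trend-based predictor and threshold rules below are illustrative assumptions, not the neural-network approach of the cited work [40], and the function names, thresholds, and units are hypothetical:

```python
# Illustrative sketch only: a toy closed loop in which a "machine learning"
# layer predicts glucose and an "AI" layer autonomously adjusts insulin.
# All thresholds and step sizes are hypothetical, not clinical guidance.

def predict_glucose(history, horizon_steps=1):
    """ML layer (toy): extrapolate the most recent rate of change forward."""
    if len(history) < 2:
        return history[-1]
    trend = history[-1] - history[-2]          # mg/dL per time step
    return history[-1] + trend * horizon_steps

def adjust_basal_rate(current_rate, predicted_glucose,
                      target=110, deadband=20, step=0.1):
    """AI layer (toy): nudge basal insulin toward the glycemic target."""
    if predicted_glucose > target + deadband:
        return round(current_rate + step, 2)            # predicted high: more insulin
    if predicted_glucose < target - deadband:
        return round(max(0.0, current_rate - step), 2)  # predicted low: less insulin
    return current_rate                                 # within deadband: no change

readings = [95, 105, 118, 132]                 # mg/dL, most recent last
pred = predict_glucose(readings, horizon_steps=2)
rate = adjust_basal_rate(1.0, pred)
print(pred, rate)                              # prints: 160 1.1
```

A production system would replace the naive extrapolation with a trained model and wrap the actuation step in extensive safety limits; the point here is only the architecture, in which prediction and autonomous action are separate layers.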

2. Important considerations regarding AI in healthcare

Digital bias is an important concept that is bound to become a mainstream consideration in the still very young AI era [41]. In this context, biases already present in various types and channels of “source data” have the potential to perpetuate existing healthcare disparities, resulting in a system that is technologically more advanced but that continues to disenfranchise entire segments of the population [42]. Meanwhile, applications of AI and machine learning in healthcare are starting to come to the forefront. Primary healthcare education and training, along with continuing education, are important areas where machine learning and AI can play a role. Currently, medical education follows a one-size-fits-all curriculum, in which everyone receives the same training and education (simulation-based, didactic, and otherwise) regardless of real-world clinical experience and proficiency/competency levels. To this end, machine learning and AI in medicine can be used in different ways, including personalizing the training and education of healthcare professionals [43]. In this context, optimal training and education of healthcare professionals is a “big data” problem: by predicting performance and the acquisition, maintenance, and decay of knowledge and skills over time, it will be possible to personalize training for an individual provider [44, 45, 46, 47, 48, 49].
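The prediction-of-skill-decay idea above can be illustrated with a minimal sketch. Assuming, purely hypothetically, that proficiency decays along an exponential forgetting curve, a system could flag individual providers for refresher training once their predicted proficiency falls below a threshold; the model, decay rate, and threshold here are all illustrative assumptions rather than any published algorithm:

```python
# Hypothetical sketch: personalizing refresher training from a predicted
# skill-decay curve. The exponential model and all parameters are
# illustrative assumptions, not a validated educational model.
import math

def predicted_proficiency(initial_score, days_since_training, decay_rate=0.02):
    """Exponential forgetting curve: proficiency decays toward zero over time."""
    return initial_score * math.exp(-decay_rate * days_since_training)

def needs_refresher(initial_score, days_since_training, threshold=0.7):
    """Flag a provider for retraining once predicted proficiency drops."""
    return predicted_proficiency(initial_score, days_since_training) < threshold

# Two providers with the same assessment score but different practice recency:
print(needs_refresher(0.95, 10))   # prints: False (recently trained)
print(needs_refresher(0.95, 60))   # prints: True  (long gap since training)
```

In a real system, the decay curve would itself be learned per provider and per skill from performance data, which is precisely what makes the problem a “big data” one.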

With the advent of AI and machine learning in any field, there is always the worry that they will replace the jobs of professionals in that field. Despite the tremendous growth and advancement of AI and machine learning in healthcare, bedside care providers are not at risk of replacement any time soon. In the near term, AI and machine learning in healthcare will primarily augment the performance of healthcare providers and simplify or support their clinical decision-making processes and workflows. This will likely reduce workload by identifying patterns and trends in large electronic medical record databases and bringing to the forefront key information that assists the provider in diagnostics and in making the best treatment decisions for their patients. AI and machine learning will become extremely important in our fast-changing world and continually evolving society, where staffing shortages of medical professionals are likely to remain a significant issue, with demographic trends working “against us” well into the future. AI and machine learning-based technologies that ultimately optimize the performance and efficiency of healthcare professionals are therefore urgently needed.

3. Artificial Intelligence in academic medicine

The topic of AI in academic medicine is certainly a heated one. It is becoming evident that the introduction of AI into medical education will prompt significant rethinking, and likely rebuilding, of our medical curricula. This will help ensure that both our medical schools and the new generation of medical trainees are sufficiently prepared to optimize the positive aspects of AI while minimizing any potentially negative aspects and considerations [5]. For the current, fairly traditional medical school curricula, the introduction of ML and AI applications will be both transformational and hugely challenging. Similarly, the increasing presence of ML/AI in clinical medicine will force many changes in clinical information management, patient care workflows, the broad range of diagnostics, and many other related areas [50, 51]. The optimal end product will be the advent of true “precision medicine,” in which each patient can be treated using highly individualized and much more optimized approaches.

For all of the above to happen seamlessly, without undue disruptions, the incorporation of AI applications into medical education will require unique curricular modifications. It is likely that the current evidence-based medicine (EBM) guidelines will quickly become obsolete and instead may be replaced by dynamically updated AI-based recommendations (AIBRs). Consequently, how we train our next generation of physicians and other healthcare professionals will likely become unrecognizable in the next 10–20 years. Moreover, the issues of “black box” interpretability, data security, and decision liability are bound to present us with problems not addressed by traditional curricula [52].

It is reassuring to know that research in this area has been ongoing and that a significant amount of expertise is available and continues to grow [5, 52]. Our collective perception of AI is also likely to evolve over time. According to recent data, a large proportion of medical students perceived AI as an assistive technology that could facilitate physicians’ access to information, and patient access to healthcare, all while reducing the number and impact of medical errors [53]. In parallel, more and more medical students are expressing the need for updates in the current medical school curriculum, accommodating the need for adaptation to AI-facilitated healthcare industry transformation [54, 55]. Curricular updates should revolve around equipping future physicians with the knowledge and skills to effectively harness the power of AI-based applications, minimize potential harms related to the misuse of AI, and ensure that their professional values and rights are protected.

At the same time, implementing the right plan and appropriately re-setting professional requirements and boundaries is not an easy task. All clinicians, students, and AI professionals alike should understand the social, ethical, legal, and regulatory issues that will determine whether AI-based tools narrow or widen health disparities, affect professional independence, and potentially influence any existing healthcare gaps. A multi-pronged approach should involve the development of novel teaching models, the recruitment of qualified and experienced content specialists (to design and teach ML/AI curricula), and, subsequently, the bridging of communication challenges related to any existing and/or perceived knowledge gaps between physicians and engineers [53].

Parallel to the issue of medical education reform, another set of critical issues will arise pertaining to intellectual property, content attribution, and content originality (e.g., plagiarism) [56]. Within this context, we must remember that ML, AI, and other advanced tools like “chatbots” and “ChatGPT” are not inherently “good or bad,” and that any inappropriate uses of said technological capabilities will stem from misuse by individuals whose intentions lack ethical and/or moral grounding. The educational setting in general, and higher education in particular, is largely based on the presence of academic integrity as an essential component of the system. While AI-based technologies have the potential to greatly enhance our lives and improve our efficiency across various areas of society [6], it is not unreasonable to speculate that such highly sophisticated tools could easily “fool an expert” into giving credit for effort that should have never been attributed to a particular individual, in effect propagating intellectual fraud [57, 58].

In addition to the potential for difficult-to-detect plagiarism, AI-based technologies also have the potential to be used for other nefarious purposes, such as cheating on assignments, using ‘deep fake’ or other unethical practices to gain an unfair advantage, and even assisting unscrupulous individuals in actively lying on their resumes and job applications [59, 60]. As modern technology continues to advance relentlessly, it becomes increasingly difficult to determine whether a piece of writing is truly original or if it has been generated by a machine [61]. This raises questions about the value of originality and the importance of properly crediting sources in the digital age. It also highlights the need for individuals to be more critical of the information they consume and follow, as well as the importance of careful consideration of the sources of the information being actively shared, especially in the context of omnipresent social media [19]. Finally, we must always remember that AI-generated content will be inherently limited by the quality of data inputs utilized during the generative process.

4. Synthesis and conclusion

Similar to all other transformational human inventions, the emergence of AI/ML is a culmination of various simultaneous advances – often parallel and co-dependent – that synergistically combine to facilitate computational processes that approximate the “functional outcomes” of various human logical processes. Among the advances that were required for AI/ML to enter the mainstream were modern integrated circuits, higher computer processing speeds, greater amounts of computer memory, software engineering knowledge, and the ability to harness the power of the Internet to gather vast amounts of high-density data in a very efficient manner.

As the early, more rudimentary capabilities of AI/ML grew, so did the diversity of their applications. With further growth in hardware, software, and implementation infrastructure, increasingly complex areas (and problems) became amenable to AI’s general “scope of abilities.” This gradually expanded into highly sophisticated systems and areas, such as social sciences and healthcare. In this collection of chapters, we will discuss current trends and future developments related to artificial intelligence and machine learning across medical and surgical specialties.

References

1. Juma C. Innovation and Its Enemies: Why People Resist New Technologies. Oxford: Oxford University Press; 2016
2. Gera I, Singh S. A critique of economic literature on technology and fourth industrial revolution: Employment and the nature of jobs. The Indian Journal of Labour Economics. 2019;62(4):715-729
3. Hochman A. Math Teachers Stage a Calculated Protest. 1986 [cited April 29, 2023]. Available from: https://www.washingtonpost.com/archive/local/1986/04/04/math-teachers-stage-a-calculated-protest/c003ddaf-b86f-4f2b-92ca-08533f3a5896/
4. Çalışkan SA, Demir K, Karaca O. Artificial intelligence in medical education curriculum: An e-Delphi study for competencies. PLoS One. 2022;17(7):e0271872
5. Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: Integrative review. JMIR Medical Education. 2019;5(1):e13930
6. Upchurch M. Robots and AI at work: The prospects for singularity. New Technology, Work and Employment. 2018;33(3):205-218
7. Good IJ. Speculations concerning the first ultraintelligent machine. In: Advances in Computers. New York, USA: Elsevier; 1966. pp. 31-88
8. Plebe A, Perconti P. Plurality: The end of singularity? In: The 21st Century Singularity and Global Futures. Cham, Switzerland: Springer; 2020. pp. 163-184
9. Barr J, Cabrera LF. AI gets a brain: New technology allows software to tap real human intelligence. Queue. 2006;4(4):24-29
10. Bloom P. Heading toward integration: The rise of the human machines. In: Identity, Institutions and Governance in an AI World: Transhuman Relations. Cham, Switzerland: Palgrave Macmillan; 2020. pp. 67-92
11. White JJ. Artificial intelligence and people with disabilities: A reflection on human–AI partnerships. In: Humanity Driven AI: Productivity, Well-being, Sustainability and Partnership. Cham, Switzerland: Springer; 2022. pp. 279-310
12. Arney C. Our final invention: Artificial intelligence and the end of the human era. Mathematics and Computer Education. 2016;50(3):227
13. Francis A, Davies J, Data LC. The Measure of a Machine: The Psychology of Star Trek’s Artificial Intelligence. New York, USA: Sterling Publishing; 2017. pp. 251-265
14. Garvey C. Testing the ‘Terminator Syndrome’: Sentiment of AI News Coverage and Perceptions of AI Risk. 2018. Available at SSRN 3310907
15. Bennett E. Deus ex machina: AI apocalypticism in Terminator: The Sarah Connor Chronicles. Journal of Popular Television. 2014;2(1):3-19
16. Cheng X. AI vs Nuclear Weapons: Which Is More Dangerous? Available from: http://large.stanford.edu/courses/2018/ph241/cheng1/ [Last accessed June 6, 2023]
17. Eke D et al. Nigeria’s digital identification (ID) management program: Ethical, legal and socio-cultural concerns. Journal of Responsible Technology. 2022;11:100039
18. Plaza M et al. The use of distributed consensus algorithms to curtail the spread of medical misinformation. International Journal of Academic Medicine. 2019;5(2):93
19. Stawicki SP, Firstenberg MS, Papadimos TJ. The use of blockchain in fighting medical misinformation: A concept paper. In: Blockchain in Healthcare: From Disruption to Integration. Cham, Switzerland: Springer; 2023. pp. 225-239
20. Conti K et al. The evolving interplay between social media and international health security: A point of view. In: Contemporary Developments and Perspectives in International Health Security - Volume 1. London, UK: IntechOpen; 2020
21. Papadimos TJ, Stawicki SP. Hannah Arendt’s prognostication of political animus in America: Social platforms, asymmetric conflict, and an offset strategy. Open Journal of Philosophy. 2020;11(1):85-103
22. Farrell H, Newman A, Wallace J. Spirals of delusion: How AI distorts decision-making and makes dictators more dangerous. Foreign Affairs. 2022;101:168
23. Future of Life Institute. Pause Giant AI Experiments: An Open Letter. 2023. Available from: https://futureoflife.org/open-letter/pause-giant-ai-experiments
24. Nori H et al. Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375. 2023
25. Myers A. AI’s Powers of Political Persuasion. 2023 [cited April 17, 2023]. Available from: https://hai.stanford.edu/news/ais-powers-political-persuasion
26. Mbakwe AB et al. ChatGPT passing USMLE shines a spotlight on the flaws of medical education. San Francisco, CA, USA: Public Library of Science; 2023. p. e0000205
27. Ryznar M. Exams in the time of ChatGPT. Washington and Lee Law Review Online. 2023;80(5):305
28. Gilson A et al. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education. 2023;9(1):e45312
29. Choi JH et al. ChatGPT Goes to Law School. SSRN; 2023
30. Lillywhite A, Wolbring G. Coverage of artificial intelligence and machine learning within academic literature, Canadian newspapers, and Twitter tweets: The case of disabled people. Societies. 2020;10(1):23
31. Lateef Z. Top 10 benefits of artificial intelligence. 2023 [cited April 17, 2023]. Available from: https://www.edureka.co/blog/benefits-of-artificial-intelligence/
32. Szczepanski M. Economic impacts of artificial intelligence (AI) - European Parliament Briefing. 2019 [cited April 17, 2023]. Available from: https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf
33. Harp JJ. The shortage of healthcare workers in the United States: A call to action. In: Assessing the Need for a Comprehensive National Health System in the United States. Hershey, Pennsylvania, USA: IGI Global; 2023. pp. 123-138
34. Chervoni-Knapp T. The staffing shortage pandemic. Journal of Radiology Nursing. 2022;41(2):74-75
35. Butler CR, Webster LB, Diekema DS. Staffing crisis capacity: A different approach to healthcare resource allocation for a different type of scarce resource. Journal of Medical Ethics. 2022. DOI: 10.1136/jme-2022-108262
36. Ragab M et al. Ensemble deep-learning-enabled clinical decision support system for breast cancer diagnosis and classification on ultrasound images. Biology. 2022;11(3):439
37. Du Y et al. An explainable machine learning-based clinical decision support system for prediction of gestational diabetes mellitus. Scientific Reports. 2022;12(1):1170
38. Huang Y et al. Application of machine learning in predicting hospital readmissions: A scoping review of the literature. BMC Medical Research Methodology. 2021;21(1):1-14
39. Thorsen-Meyer H-C et al. Dynamic and explainable machine learning prediction of mortality in patients in the intensive care unit: A retrospective study of high-frequency data in electronic patient records. The Lancet Digital Health. 2020;2(4):e179-e191
40. Pappada SM et al. Neural network-based real-time prediction of glucose in patients with insulin-dependent diabetes. Diabetes Technology & Therapeutics. 2011;13(2):135-141
41. Packin NG. Disability discrimination using artificial intelligence systems and social scoring: Can we disable digital bias? Journal of International and Computer Law. 2021;8:487
42. Celi LA et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLOS Digital Health. 2022;1(3):e0000022
43. Pappada S et al. Personalizing simulation-based medical education: The case for novel learning management systems. International Journal of Healthcare Simulation. 2022;2022:1-8
44. Patel UK et al. Artificial intelligence as an emerging technology in the current care of neurological disorders. Journal of Neurology. 2021;268:1623-1642
45. Poly TN et al. Artificial intelligence in diabetic retinopathy: Bibliometric analysis. Computer Methods and Programs in Biomedicine. 2023;231:107358
46. Itchhaporia D. Artificial intelligence in cardiology. Trends in Cardiovascular Medicine. 2022;32(1):34-41
47. Makhni EC, Makhni S, Ramkumar PN. Artificial intelligence for the orthopaedic surgeon: An overview of potential benefits, limitations, and clinical applications. JAAOS - Journal of the American Academy of Orthopaedic Surgeons. 2021;29(6):235-243
48. Vearrier L et al. Artificial intelligence in emergency medicine: Benefits, risks, and recommendations. The Journal of Emergency Medicine. 2022;62(4):492-499
49. Benke K, Benke G. Artificial intelligence and big data in public health. International Journal of Environmental Research and Public Health. 2018;15(12):2796
50. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minimally Invasive Therapy & Allied Technologies. 2019;28(2):73-81
51. Ahmed Z et al. Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database. 2020;2020:1-35
52. Savage TR. Artificial intelligence in medical education. Academic Medicine. 2021;96(9):1229-1230
53. Civaner MM et al. Artificial intelligence in medical education: A cross-sectional needs assessment. BMC Medical Education. 2022;22(1):772
54. Hu R et al. Insights from teaching artificial intelligence to medical students in Canada. Communication & Medicine. 2022;2(1):63
55. Pucchio A, Eisenhauer EA, Moraes FY. Medical students need artificial intelligence and machine learning training. Nature Biotechnology. 2021;39(3):388-389
56. Francke E, Bennett A. The potential influence of artificial intelligence on plagiarism: A higher education perspective. In: European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2019). 2019;31:131-140
57. Dehouche N. Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Ethics in Science and Environmental Politics. 2021;21:17-23
58. Taswell SK et al. The hitchhiker’s guide to scholarly research integrity. Proceedings of the Association for Information Science and Technology. 2020;57(1):e223
59. Campbell C. AI by Design: A Plan for Living with Artificial Intelligence. Boca Raton, Florida, USA: CRC Press; 2022
60. Maras M-H, Alexandrou A. Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. The International Journal of Evidence & Proof. 2019;23(3):255-262
61. King MR and ChatGPT. A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering. 2023;16(1):1-2
