Open access peer-reviewed chapter

Artificial Intelligence in Healthcare: Doctor as a Stakeholder

Written By

Subba Rao Bhavaraju

Submitted: 19 February 2023 Reviewed: 31 March 2023 Published: 27 April 2023

DOI: 10.5772/intechopen.111490


Abstract

Artificial Intelligence (AI) is making significant inroads into healthcare, as in many other walks of life. Its contributions to clinical decision making for better outcomes, image interpretation, especially in radiology, pathology and oncology, data mining, generating hidden insights, and reducing human errors in healthcare delivery are noteworthy. Yet there are physicians, as well as patients and their families, who are wary of its role and its implementation in routine clinical practice. Any discussion of AI and its role in healthcare must consider issues like the hype and hope associated with any new technology, an uncertain understanding of who the stakeholders are, patients’ views and their acceptance, and the validity of the data models used for training and for decision making at the point of care. These considerations must be accompanied by thorough policy discussions on the future of AI in healthcare and on how curriculum planners in medical education should train the medical students who are the future healthcare providers. A deliberation on the issues common to Information Technology (IT), like cybersecurity, ethics and legal aspects, privacy, and transparency, is also needed.

Keywords

  • artificial intelligence
  • data at point of care
  • stakeholders
  • annotation
  • anonymisation
  • ethical and legal issues
  • privacy and security
  • trust
  • concerns
  • explainability
  • medical education
  • emotions and behaviour
  • disability
  • negative impact

1. Introduction

Artificial Intelligence (AI) is making significant inroads into healthcare, as in many other walks of life. Its contributions to clinical decision making, improved outcomes, image interpretation, especially in radiology, pathology and oncology, data mining, the generation of new insights, and the reduction of human errors creeping into healthcare delivery are noteworthy. Yet there are physicians as well as patients who are wary of its role and its implementation in routine clinical practice.

Data is key to successful AI and machine learning (ML), and more is not always better. Statistics and artificial intelligence need to analyse large data sets to discover useful information, and the data should be accurate, appropriate, and clean. The care of healthcare data at its generation, the point of care, its value in AI, and the healthcare provider’s role in it cannot be overemphasised. Is AI entirely the computer scientists’ dominion? What is the doctor’s role in it? Are doctors just beneficiaries?

Any discussion of AI and its role in healthcare brings into consideration issues like the hype and hope associated with it, who the stakeholders are, the views and acceptance of healthcare personnel and patients, the data at its generation at the point of care, the future of AI in healthcare, and how curriculum planners in medical education should train the medical students who are the future healthcare providers. Also needed is a deliberation on the issues common to Information Technology (IT), like cybersecurity, ethics and legal aspects, privacy, and transparency. This also brings into review various issues the developers of AI solutions in healthcare need to bear in mind. The gaps in current knowledge and databases also need thought. Do the AI databases and EHRs cover the global scenario adequately? Many areas of the world do not use EHRs. Are the ethnic and regional differences in health and disease well represented in the databases?


2. Methodology

This chapter is a review of the literature relevant to the medical profession and its concerns as a stakeholder in a subject that is not primarily its dominion. The search engines used are Google, Microsoft Bing, Google Scholar, and PubMed. The search terms used are: artificial intelligence, data at point of care, stakeholders, annotation, anonymisation, ethical and legal issues, privacy and security, trust, concerns, explainability, medical education, emotions and behaviour, disability, negative impact, and loss of jobs.

The search included blogs, articles, reports, and publications in peer-reviewed journals referring to AI in healthcare. The ultimate beneficiary of AI in healthcare is the patient. The concerns of patients as well as healthcare professionals with reference to trust and possible ethical and legal issues are covered. The claims of AI are not without certain negative impacts, and an attempt is made to cover the pros and the cons of AI in healthcare (Table 1). The technical work concerned with the development of AI/ML platforms and algorithms is beyond the scope of this review and is excluded. A number of tools applying AI/ML are currently in use in healthcare, and a few more are in the pipeline; the purpose of this review is not to evaluate the AI/ML tools currently in use or in the pipeline.

Pros | Cons
Precision in Diagnosis: Computer Vision | Absence of Trust: Lack of Human Touch
Elimination of Human Error | No Peer Review of Processes
Efficient Performance of Repetitive Tasks: No Fatigue | Absence of Research Protocol (Double-blind Study)
Decision Support | Scare of Job Loss
Speed of Action | Explainability
Digitalisation | Not Universally Accepted
Newer Insights | Reliability
Costs: Claim Reduction | Expensive to Implement

Table 1.

AI in healthcare—Pros and Cons.


3. The Hype and The Hope

In today’s world of data-intensive computing [1, 2, 3], we seem to live in a state of hype and hope about the role of artificial intelligence in every walk of life. The hope that AI is a panacea or cure-all coexists with mistrust and scepticism in several fields. The doctor-patient relationship is a bond that goes much beyond the factual clinical relation of diagnosis, intervention, and outcome. Patients as well as doctors are circumspect and wary of the ability of AI to substitute for that relationship [4, 5].

The idea of creating AI and letting an algorithm take over a human function is not preferred by many. Patients show significant hesitation in handing over their health issues to a machine. Can the machine match the subtleties of communication, eye contact, personal touch, and empathy that are expected from a human? Can AI manage a patient-care situation end to end? The complexities of healthcare are thought to be beyond the capability of a machine. Is AI too standardised and not flexible enough for the individual needs of a patient? Decision support systems in vogue are accepted by patients, but the decision making itself is preferred to be left to humans [6, 7].

The doctor performs a number of duties in the doctor-patient relationship. Does AI promise to replace the doctor in every aspect? As a clinician, the doctor interprets the difficulties the patient has, elicits signs suggestive of a diagnosis, and orders relevant investigations. It is possible that humans err in judgement, may not be fully equipped with all the knowledge, and are prone to their own biases. AI certainly promises to cover these human deficits. However, can AI take over the functions of a team leader, comfort the patient when needed, help the patient in risk assessment, and make the right decisions in contrasting and compromising situations? Some of the most important duties of the doctor arise in terminal illness and palliative care: empathy, psychological support, and even conveying the sad news of a near one’s death [8]. Is AI equally competent to humans in these functions?


4. The stakeholders

Is AI the dominion of computer scientists alone: the theoreticians, the analysts, the developers [9]? Who are all the stakeholders? Is the end user in healthcare, the patient or the provider, a stakeholder? The healthcare provider is an important stakeholder. The doctor, the nurse, and the other parties involved in healthcare are responsible as generators of the data at the point of care and as beneficiaries of the final product of AI. The patient too is a stakeholder: some of the data, unless provided by the patient, does not reach the database under consideration in AI. Needless to mention, the patient is also a beneficiary of the improved outcomes and error prevention. What is the stakeholders’ responsibility? They are all accountable, liable, and blameworthy [10].


5. The data at the point of care

Data is the most important resource in the AI process. Efficient and effective management of this data by AI depends on many factors. The data in healthcare consists of the details of the patients: personal demographic details, historical aspects of the illness, clinical observations, diagnostic evaluations, reports generated, treatment including medication, outcomes, and financial data like costs and billing. For effective analytics, be it descriptive, predictive, prescriptive, or cognitive, the data must be accurate and comprehensive. The healthcare providers play a major role here.

Much of the data that forms the basis of AI development is generated while the patient is with the healthcare provider, at the point of care. The importance of recording the data at the point of care cannot be overemphasised; once the opportunity of recording an event is lost, the data could be lost forever [11].

How careful are the clinicians, the laboratory staff, and the others in the healthcare team while recording the data at the stage of its occurrence? Do they record the deficiencies, errors, and personal bias in ordering tests and interpreting them? Are complications and untoward reactions reported and recorded? One should know the value of the data and the vigilance to be maintained at the point of care, where it is generated. The details, the quality, the reliability, and the totality of the data, including reports on unexpected events, complications, and interpretations, need proper documentation.

Digital case records like electronic health records (EHRs) have significantly enhanced the scope of data collection [12]. The EHR is a health record that keeps the demographic data; clinical details including symptomatic, historical, clinical, diagnostic, therapeutic, and outcome data; nurses’ notes; pharmacy data; notes from other therapists like physical therapists; and insurance and billing data. Healthcare providers and organisations collect, track, store, and transmit personal health information. With so much data accumulating, what is important and what is not from the perspective of AI is an issue. What goes into the databases matters [13], and the responsibility of the healthcare personnel is noteworthy. The comments of a former editor of the New England Journal of Medicine [14] reflect a rather unfortunate situation: he regrets that the published work does not represent the true state of affairs, with significant data remaining unpublished. The true picture of an illness, its presentation, features, outcomes, complications, and untoward reactions may go unrepresented in the database. With the policy of insisting on publication for academic recognition, doubts are cast on the validity of the published work [15, 16]. One should remember the old saying, “If an event is not documented, it did not occur.” Value-based care depends on the validity of the database and how truly it reflects the knowledge base [17]. Many countries, including Australia, Belgium, Canada, Denmark, the United Kingdom, and the United States, are making electronic health records mandatory. The goal of these health information technology initiatives is to digitally transform the collection, display, transmission, and storage of patient data, leading to a steady increase in data at the point of care [18, 19, 20].
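To make the point-of-care documentation discussed above concrete, here is a minimal Python sketch of what a structured point-of-care entry might look like. All field names are hypothetical and purely illustrative; real EHR schemas, such as the HL7 FHIR resources, are far richer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PointOfCareRecord:
    """A minimal, hypothetical structured entry recorded at the point of care."""
    patient_id: str
    recorded_at: datetime          # when the event was documented
    event_type: str                # e.g. "observation", "complication"
    description: str
    recorded_by: str               # the accountable member of the care team
    adverse_event: bool = False    # unexpected events must also be captured
    tags: list = field(default_factory=list)

# Documenting a complication at the moment it is noticed, not retrospectively
entry = PointOfCareRecord(
    patient_id="P-1024",
    recorded_at=datetime.now(timezone.utc),
    event_type="complication",
    description="Post-operative wound infection noted on day 3",
    recorded_by="Dr. A",
    adverse_event=True,
    tags=["surgery", "infection"],
)
```

The explicit `adverse_event` flag illustrates the chapter’s argument: if the schema gives complications a first-class place and the entry is made when the event occurs, the eventual AI database is less likely to under-represent untoward outcomes.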

Two other dimensions need consideration when the EHR is discussed. The EHR is not followed universally in all countries; paper case records and data entry in non-digital formats are common, and the inadequacies incidental to paper records, and their reflection in the database, need consideration. Moreover, the EHR or the paper case record covers the patient’s data only while he is in the doctor’s office or in the hospital. Modern technology offers a different dimension to the data.

With advances in mobile technology, digital patient monitoring, tele-healthcare, ambulatory care, and wearable devices, data is generated while the patient is away from the hospital, at home, at work, or elsewhere. These data provide the status of the patient during the periods between doctor’s visits, along with contextual and historical data that influence the outcomes and the insights that AI generates. The providers have an obligation to incorporate this data into the patient records, and the transfer of this self-collected data to the AI database, and its influence on the insights provided by AI in healthcare, must be ensured. Mobile technology and the wearables generating data also raise concerns of patient privacy, transparency, interoperability, and data sharing across platforms [21, 22]. The actual time when an event occurred, relative to when the data was uploaded, is also important, as the data is likely to change with time, especially in acute care situations.


6. Annotation

One of the most important steps in AI is annotation. Data annotation is the process of categorising and labelling individual elements of the data for AI applications. The data can be in image, video, audio, graphic, or text format, and the annotation converts it into a machine-readable form. Annotation was done manually to start with, but machine-generated annotations are available currently. Manual annotators are creating the data sets that help computers using Natural Language Processing (NLP) and Computer Vision to detect text and interpret images in radiology, pathology, oncology, retinopathy, biometrics, and data insights. Uniformity of description and standardisation of the data annotation detectable by NLP or computer vision form the basics of annotation. The healthcare provider has an obligation to use the right machine-readable word for the text and for the description of an image or video. Fully AI-interpreted annotation is one of the future probabilities, but even AI-interpreted annotations need manually annotated data for the algorithms to start with [23, 24, 25, 26, 27, 28].
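One common machine-readable form of text annotation for NLP is character-offset span labelling. The toy example below, with an invented radiology note and illustrative label names, sketches what such an annotation looks like in Python.

```python
# Hypothetical free-text note and span annotations for NLP training
note = "Chest X-ray shows a 2 cm opacity in the right upper lobe. No effusion."

# Each annotation marks a character range [start, end) and assigns a label
annotations = [
    {"start": 20, "end": 32, "label": "FINDING"},  # "2 cm opacity"
    {"start": 40, "end": 56, "label": "ANATOMY"},  # "right upper lobe"
]

# Attach the covered text so a human can verify the annotation is correct
for a in annotations:
    a["text"] = note[a["start"]:a["end"]]

for a in annotations:
    print(f'{a["label"]}: "{a["text"]}"')
```

Consistent label names and exact offsets are what make the annotation uniform and machine-readable, which is the standardisation the paragraph above calls for.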


7. The AI and legal issues

A detailed discussion of the various legal issues relevant to AI and healthcare is beyond the scope of the current note, and the reader should look elsewhere. A brief account of the legal issues in relation to healthcare is presented for the awareness of the doctor as a user and stakeholder, especially in cases of untoward reactions and damages occurring in the course of one’s actions using AI. As per the common law, a person of unsound mind is not responsible for his actions; only a person of sound mind is. The common law also speaks of the subjective element of criminal intent. Can the computer, which lacks the human mind and intent, be held responsible for its actions? AI is considered a technological tool with the ability to simulate the human brain and perform some of the duties that require human intelligence [29, 30, 31, 32, 33]. In case of wrong decisions, adverse reactions, and untoward outcomes resulting from the use of AI, what is the liability of the AI and the healthcare team? The machine does not have its own identity, but there are multiple persons ultimately involved in AI in healthcare: the vendor, the owner of the company, the designer, the hardware or software developer, the persons who evaluate and test the tool, the person who supplies the data or the database itself, and the doctor who uses the AI platform on a patient. Who is responsible or accountable for the damages caused in using AI? The legal issues that arise, in addition to an adverse reaction or outcome, are a foreseeable damage, a human rights violation, a violation of privacy, a criminal intent, cybercrime, and the risk of a hacker laying his hands on the data. While the machine is not responsible in itself, can the person behind the machine be held responsible? To what extent is the doctor accountable as a user? Are the people who built it and use it responsible?
Lack of accountability raises concerns about the possible safety consequences of using unverified or unvalidated AI in clinical settings. Awareness of the potential of AI, responsible use of AI and the insights it provides, awareness of potential harms, and taking informed consent from the patient while using AI-interpreted results are the responsibility of the doctor [34, 35, 36, 37, 38]. The interesting case of Google vs. Information Commission (UK) shows the issues that need consideration. Issues in handling sensitive data, like privacy and transparency, need attention, and a code of ethics has to be developed [39, 40, 41].

Explainability is an issue that needs consideration [42, 43, 44]. How do AI-driven algorithms arrive at a prediction or a conclusion in a given situation? Should the process of AI be a part of the informed consent? Is it necessary to explain the process? Ethically, the medical doctor is accountable for his actions, and the informed consent one obtains shall be truly informed. When using AI-driven clinical decision support systems, the doctor has to be aware of the reasoning behind the decisions. Four principles are considered when we talk of the explainability of AI: the algorithms are described in a language the user understands; the evidence and reasoning behind the conclusions drawn are given; the processes used are reliable; and proof is provided for the outcomes or insights. The doctor-patient relationship involves mutual trust. The doctor has to explain his actions and decisions, and they shall be transparent, patient-centred, and holistic. Are the processes and algorithms involved in reaching the conclusions of AI systems explainable to a doctor?
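A minimal sketch of what explainability can mean in practice: with a simple logistic risk model, the prediction decomposes into additive per-feature contributions that a clinician can inspect. The weights, feature names, and bias below are entirely hypothetical; real clinical models are far more complex and need dedicated explanation tools.

```python
import math

# Hypothetical logistic risk model: weights assumed to be learned elsewhere
weights = {"age_over_65": 1.2, "diabetic": 0.8, "smoker": 0.9}
bias = -2.5

def explain(features):
    """Return the predicted risk and each feature's additive contribution."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    risk = 1 / (1 + math.exp(-score))   # logistic function
    return risk, contributions

patient = {"age_over_65": 1, "diabetic": 1, "smoker": 0}
risk, why = explain(patient)

# The doctor can see exactly which factors drove the prediction
for name, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {c:+.2f}")
print(f"risk = {risk:.2f}")   # ≈ 0.38 for this hypothetical patient
```

Linear decompositions like this are the easy case; for opaque models, the same goal motivates post-hoc explanation techniques, which is precisely why explainability is a live research and policy issue.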


8. Privacy and security of data

AI in healthcare deals with significant amounts of sensitive personally identifiable information (PII), consisting of the demographic and health data of the patient. Apart from the doctors, the nurses, pharmacists, diagnostic laboratory personnel, other therapists, some statutory bodies like regulatory authorities, and the patients themselves access the digital platform in healthcare at various stages. The data generated is accessed at the point of care, during data analysis and deep mining, and when looking for insights. The data necessarily has to be transparent and portable and is stored in the cloud; this scenario is a hacker’s haven. Whose responsibility is its security? When in a specific case AI fails or is misused, the ethical principles of privacy, autonomy, and justice could be violated. Data theft and misuse are common threats in any computer program, and it is the responsibility of the user to protect the privacy and security of the owner of the data. All the stakeholders who have access to the data have to be careful with it in such a situation. The AI developer and user have to keep watch on the impact of misuse or discrimination. What processes should we implement to monitor the impact, and how do we overcome unintended clinical outcomes? What skills does a developer, or a user, have to acquire to perform these tasks? A dialogue between all the stakeholders is necessary on these issues to protect the rights of those involved against direct or indirect coercion. Should the doctor, as a user of the AI systems and as a person involved in the management of the patient, the ultimate beneficiary, be involved in the various processes of AI [45, 46, 47]?


9. Anonymisation and encryption

Most countries have regulatory procedures to protect the privacy of data. The Digital Personal Data Protection Bill 2022, the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Digital Information Security in Health Care Act (DISHA) [48, 49, 50, 51] are some of the acts concerned with data protection. Anonymisation and encryption are two methods of protecting the identity of personal data when large databases are created and stored in places accessed by AI or other computer programs. Encoding, anonymisation, pseudonymisation, generalisation, masking, data swapping, and data perturbation are some of the methods of removing or coding the words that connect the data to its owner, whereby the personal identity is protected. Encrypted data storage in the cloud and the use of the Internet of Things for remote access are in practice. Blockchain-based secure sharing of data in healthcare is another form of secure data handling [52, 53, 54, 55].
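The pseudonymisation mentioned above can be sketched with a keyed hash (HMAC): the same patient always maps to the same pseudonym, enabling longitudinal linkage, while the direct identifier is removed from the research copy. The key name and record fields below are illustrative only; in practice the key would be held by the data controller, outside the research database.

```python
import hashlib
import hmac

# Hypothetical secret key, held by the data controller only
SECRET_KEY = b"held-by-data-controller-only"

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-001234", "diagnosis": "type 2 diabetes"}

# The research copy carries the pseudonym, not the medical record number
safe_record = {
    "pseudonym": pseudonymise(record["patient_id"]),
    "diagnosis": record["diagnosis"],
}
```

Because the hash is keyed, an attacker who obtains the database alone cannot enumerate medical record numbers and match them, which is the advantage of pseudonymisation over plain hashing.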


10. The promise and challenges of AI in healthcare

AI in healthcare promises a bright future. The functions of AI can be summarised as relieving, splitting, replacing, and augmenting the roles of healthcare personnel [56]. AI helps streamline the work, front office management, the EHRs, human error prevention, and administrative work, and provides expert systems, decision-making algorithms, and new insights. The contribution of AI to diagnostic work, especially the interpretation of images in radiology, retinopathy, pathology, and oncology, is striking. Help in the analysis and mining of large cohorts is a great boon to the epidemiologist. The speed and accuracy of its data processing and predictions exceed those of humans. Stroke prediction and cardiovascular risk assessment are some of the newer algorithms available. Robotic process automation is used in healthcare for repetitive tasks like prior authorisation, updating patient records, and billing [57, 58, 59].

The challenge of acceptance by the patients remains. The value of the databases used, and their updating, is always problematic. The ownership of the data, its portability, and its sharing across all data sets need clarity. The ethical and legal issues of responsibility and accountability for the adverse outcomes of using, or rejecting, the expert advice of AI need clearer understanding. Informed consent is another problem area when AI-based expert systems are used or not used. How informed is informed consent? It is necessary to inform the patient if the clinician is basing his decision on the recommendations of AI [60].

There are some problems with AI [61]. Unlike much published research and its recommendations, AI data and inferences are not peer reviewed or evaluated in a blinded fashion. Who is responsible and accountable for the insights it provides: the developer, the tech company, the regulator, or the clinician? Can the emotional component of the doctor-patient relationship be simulated? Who among the developer, the tech company, the regulator, the doctor, and the other stakeholders is accountable for mishaps that happen when AI system recommendations are followed? While AI in healthcare has on one side systematised various tasks and made information available at the click of a button, can it confidently dispense with human supervision and assure safety and security? Not all human qualities are easy to digitise, and machines may not succeed in copying the sensitive and realistic relationship between the patient and the doctor. Quality AI depends on quality data; one is aware of the old colloquial saying, “garbage in, garbage out.”

11. Emotions and AI

Anxiety and emotions play a significant role in healthcare. Patients exhibit emotional reactions to the situation they are in. The suffering from pain is not equal in all and is significantly subjective. Reactions to hearing unpleasant news, like a diagnosis of cancer, the prognosis of a permanent disability, or even news of a death, have a variable emotional component. Doctors and other healthcare personnel, on the other hand, are expected to provide emotional support to the patient. An arm around the shoulders, empathy, communication, eye contact, and body language while sharing unpleasant news have a significant influence on the patient and the family. The patient’s expectations and the helper’s perceptions influence the emotional support, and the support often has to be customised. Where do machines stand in this context [62, 63]?

Computers need to recognise and respond to emotions and show empathy; they need to be ethical too. AI chatbots, intelligent healing platforms, therapeutic intelligence, communicative AI, and emotion AI or affective AI are some of the AI tools that attempt to simulate human emotions [64, 65, 66, 67, 68, 69, 70]. The software used is computer vision and natural language processing, with facial recognition and voice recognition. Chatbots are becoming popular, earning trust and engaging subjects in conversation, either verbal or in text format. While these were not rated as poor, opinion is generally in favour of a human over a machine providing the support. Simulating cultural differences in body language and communication poses problems of misinterpretation. Affective AI and the Human Behaviour-Change Project (HBCP) [71, 72] deal with human behaviour. Bringing about changes in human behaviour, as in mental health issues and therapy for addiction, is an area AI is stepping into; AI and the HBCP are creating an open-access online knowledge system of behaviour change interventions. The use of natural language processing and sentiment analysis, another branch of AI, has permitted the interpretation of verbal communication and helped understand human expressions. Emotion and behavioural assessment is possible through sentiment analysis [73, 74].
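The simplest form of the sentiment analysis mentioned above is lexicon-based scoring, which can be sketched in a few lines. The word lists below are toy examples; production systems rely on far richer NLP models.

```python
# Toy sentiment lexicons; real systems use large, weighted vocabularies
POSITIVE = {"better", "improving", "relieved", "hopeful", "comfortable"}
NEGATIVE = {"pain", "worse", "anxious", "scared", "tired"}

def sentiment(text: str) -> int:
    """Score text as (positive word count) minus (negative word count)."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I feel better and hopeful today"))      # positive score
print(sentiment("The pain is worse and I am scared"))    # negative score
```

Even this crude scorer illustrates both the promise and the limits: it can flag distress in patient messages at scale, but it cannot capture irony, context, or the cultural differences in expression noted above.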

12. AI and differently abled

AI empowers the differently abled. The disability could be physical, restricting access, or mental, restricting cognition. Smart devices provide support in the activities of daily living, and assistance to the disabled is showing promise. AI is stepping in to help communication and cognition. Assistive technologies are showing a lot of promise and a positive outlook for the differently abled. AI is also entering the diagnosis of cognitive disabilities like autism and similar disorders. However, there are issues that need consideration and scope for improvement [75, 76, 77]. ABIDE (Autism Brain Imaging Data Exchange) is one of the AI projects for autism; it makes the diagnostic evaluation less time-consuming, more efficient, and more accurate, and it even identifies certain phenotypes that respond better to therapeutic interventions [78].

13. Medical education and future scope

The current curriculum in most institutes offering graduate and postgraduate studies does not expose students to AI and its importance in patient care. While clinicians use AI platforms like clinical decision support systems, expert systems, outcome scores, and special AI programs developed to help clinical judgement and gain insights, what knowledge of AI is being imparted to the students, the postgraduates, and those already in practice? Medicine is a lifelong pursuit and needs continuous learning. With projections showing a great potential for AI in healthcare, is it not necessary to prepare the future healthcare force for this growth? Basic information on AI, its impact on practice, and its promise is desirable. Healthcare personnel need not know the intricacies of AI and its development, but they should at least know how AI is used and how to interpret it and explain its utility to the patients. When ethical issues like privacy and autonomy are involved, the students shall know the legal standing as well. The medical doctor is a team leader in healthcare, and he needs to know the future trends and possibilities. While in the current undergraduate program an introduction to AI is offered in the form of electives, the postgraduate needs to know more. Integrating into the curriculum short courses in data science and informatics, the importance of data entry at the point of care, and the ethical and legal implications, along with the use and interpretation of AI in healthcare, is essential. Continuing education programs, refresher courses, and workshops on AI in virtual or physical mode are to be planned for current practitioners. There are two categories of doctors: those who need basic knowledge and those who show interest and wish to involve themselves in the promotion of AI in healthcare.
The institutes shall identify the tech-savvy faculty who can take the leadership and design short courses on computers, data and its importance, and even assess the competence of data entry at the point of care [79, 80, 81, 82]. Published surveys [83, 84] indicate a mixed response, from positive acceptance to outright fear.

14. Negative impact of AI

The developers of AI strive continuously to make computers simulate the human brain and visualise a day when computers outperform humans. There are many areas where AI claims success, and recent advances have shown successful attempts to enter the emotional domain of the human brain. The elimination of human error, freedom from fatigue in repetitive tasks, decision-making algorithms, speed of action, and precision are some of the advantages. Yet there are concerns about the negative impact of AI.

The foremost concern is the lack of trust. User perceptions and the reliability of AI are two issues that influence trust. Computers act on their inputs and function as rule-based machines, and AI/ML tools are built on the database and its algorithms. With the concept of patient-centred healthcare delivery gaining importance, the explainability, validity, and reliability of AI decision support systems are of great concern. Situations that are unexpected or unusual are often met in clinical practice. Does the database reflect the real-world situation and ethnic variations? The influence of bias in sample selection, variations in disease patterns, and false negatives and false positives on clinical decisions is significant for what is essentially a retrospective database used to train the AI/ML tools. How predictable is AI when applied to a prospective situation? Does it adequately cover the drifts and data shifts possible with newer practices, populations, and lifestyles? The clinician is likely to ignore diagnostic challenges or alternatives when an easier option like a decision support system is available. Human bias in the selection of cases contributing to the database may lead to erroneous insights [85, 86, 87, 88, 89].
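A short worked example of why a model trained on a retrospective, case-enriched database can mislead prospectively: with the same sensitivity and specificity, the positive predictive value collapses when the real-world prevalence is low. All figures below are illustrative, not taken from any real AI tool.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same test, evaluated against two populations:
print(ppv(0.95, 0.95, 0.50))  # balanced retrospective cohort: PPV = 0.95
print(ppv(0.95, 0.95, 0.01))  # screening population at 1% prevalence: PPV ≈ 0.16
```

At 1% prevalence, roughly five of every six positive flags are false alarms despite 95% sensitivity and specificity, which is one concrete way that bias in sample selection and false positives translate into erroneous clinical insights.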

The fear of loss of jobs when a computer takes over human actions is a reality. Fatigue and distraction are common in humans performing repetitive tasks, and the precision computers achieve is well known; the fear that one might be replaced by a computer that outperforms a human exists in many. A Gallup poll held in the USA revealed that more jobs are lost than created. The proponents of AI argue that the creation of new jobs will compensate for the loss of human jobs to machines; the new jobs need new skills, and the skill sets required, together with shortages of healthcare personnel, will keep AI at the forefront of job creation, they claim. The AI developers have to strive hard to gain the trust of users and other stakeholders [89, 90]. Those concerned with healthcare, instead of mistrusting AI, should exercise their stakes in this technology. The medical profession shall see that the database is well represented, covering all variations, and contribute to the development of AI. Data at the point of care contributing to the database, on which the AI/ML tools are built, is the primary responsibility of the healthcare personnel.

15. Conclusions

The medical doctor is an important link in the AI ecosystem in healthcare. The profession has a significant role in the generation of the data that forms the basis for diagnosis, management, and treatment. The doctor is also the ultimate user and beneficiary of the AI platforms. Data generation at the point of care and the comprehensive database for the development of AI are heavily dependent on the healthcare profession. For us to understand the true impact of AI, the adverse and unwanted effects and complications of all interventions must be recorded at the point of care. However much intelligent machines promise to perform the duties of the human, as of today and for the foreseeable future, the personal touch and empathy provided by doctors are irreplaceable. Further studies are needed to fully evaluate the potential and limitations of AI in healthcare. The medical profession, instead of viewing AI as a competitor, should collaborate with and actively support this technology.

Artificial Intelligence is being promoted as the next major advance in healthcare delivery. AI is here to stay because of the promise it offers across multiple fields of medicine. For the world to see the true benefits of AI, new AI-based technologies must be developed and validated like any other technology in medicine. Their development must start with a thorough evaluation of appropriate use cases, an understanding of user needs, whether the user is a patient or a doctor, and a comprehensive assessment of risks and benefits. Because the outcome of any AI-driven tool depends so much on the data used to train its algorithms, adequate care must be taken in collecting, curating, and using data. Governing and regulatory bodies and standards committees must work with subject matter experts and intended users (both physicians and patient advocates) to set policies and guidance that define the boundaries of use of this technology and establish guardrails that prevent misuse or abuse of AI.

Conflicts of interest

Nil

Financial support

Nil

Originality

This is my original work and has not been published or submitted for publication to any peer-reviewed periodical.

References

  1. 1. Shamsi J, Ali M, Khoja Mohammad K, et al. Data-intensive cloud computing: Requirements, expectations, challenges, and solutions. Journal of Grid Computing. 2013;11(2):281-310. DOI: 10.1007/s10723-013-9255-6
  2. 2. Patel S. Available from: https://www.powermag.com/Hype-and-Hope-Artificial-Intelligences-Role-in-the-Power-Sector/
  3. 3. Zamorano A. Available from: https://www.Forbes.com/sites/cognitiveworld/2020/04/30/ai-hype-or-ai-hope-when-will-ai-disrupt-the-pharmaceutical-Industry/?
  4. 4. Longoni C, Morewedge CK. AI can outperform doctors. So why don't patients trust it? Harvard Business Review. 2019. Available from: https://hbr.org/2019.10/ai-can-outperform-doctors-so-why-don’t-patients-trust-it
  5. 5. Harvey H. Why AI will not replace radiologists. Available from: https://www.kdnuggets.com/2018/11/why-ai-will-not-replace-radiologists.html
  6. 6. Yokoi R, Eguchi Y, Fujita T, et al. Artificial intelligence is trusted less than a doctor in medical treatment decisions: Influence of perceived care and value similarity. International Journal of Human-Computer Interaction. 2021;37(10):981-990. DOI: 10.1080/10447318.2020.1861763
  7. 7. Tamora H, Yamashina H, Mukai M, Mori Y, Ogasawara K. Acceptance of the use of artificial intelligence in medicine among Japan's doctors and the public: A questionnaire survey. JMIR Human Factors. 2022;9(1):e24680. DOI: 10.2196/24680
  8. 8. Liu X, Keane PA, Denniston AK. Time to regenerate: The doctor in the age of artificial intelligence. Journal of the Royal Society of Medicine. 2018;111(4):113-116. DOI: 10.1177/0141076818762648
  9. 9. Price A, Harborne D, Brains D, et al. Stakeholders in Explainable AI. 2019. Available from: https://deepai.org/publication/stakeholders-in-explainable-ai
  10. 10. Responsible AI and its stakeholders. DeepAI. Available from: https://arxiv.org/pdf/2004.11434v1.pdf
  11. 11. Wanga EO. What is point of care documentation? Available from: https://experience.care/blog/what-is-point-of-care-documentation/2022
  12. 12. Bhavaraju SR. From subconscious to conscious to artificial intelligence: A focus on electronic health records. Neurology India. 2018;66:1270-1275
  13. 13. William VD. Meaningful use of patient-generated data in EHRs. Available from: http://library.ahima.org/doc?oid=106996#.W34S9S2B2qA
  14. 14. Angell M. No longer possible to believe much of clinical research published. Available from: https://ethicalnag.org/2009/11/09/nejm-editor/
  15. 15. Fung J. The corruption of evidence-based medicine—killing for profit. Available from: https://medium.com/@drjasonfung/the-corruption-of-evidence-based-medicine-killing-for-profit-41f2812b8704
  16. 16. Enago Academy: Publish or Perish: What Are Its Consequences?. Available from: https://www.enago.com/academy/publish-or-perish/consequences/
  17. 17. Cohen E. Available from: https://appian.com/blog/if-its-not-documented-did-it-happen?
  18. 18. HITECH Act Enforcement Interim Final Rule. Available from: https://www.hhs.gov/hipaa/for-professionals/special-topics/hitech-act-enforcement-interim-final-rule/index.html
  19. 19. Sittig DF, Singh H. Rights and responsibilities of users of electronic health records. Canadian Medical Association Journal. 2012;184:1479-1483
  20. 20. Electronic Health Record (EHR) Standards for India. 2016. Available from: https://mohfw.gov.in/basicpage/electronic-health-record-ehr-standards-india-2016
  21. 21. Muzny M, Henriksen A, Giordanengo A, et al. Wearable sensors with possibilities for data exchange: Analysing status and needs of different actors in mobile health monitoring systems. International Journal of Medical Informatics. 2020;133:104. DOI: 10.1016/j.ijmedinf.2019.104017
  22. 22. He J, Baxter SL, Xu J, et al. The practical implementation of artificial intelligence technologies in medicine. Nature Medicine. 2019;25:30-36
  23. 23. Willemink MJ, Koszek WA, Martin J, et al. Preparing medical imaging data for machine learning. Radiology. 2020;295. DOI: 10.1148/radiol.2020192224
  24. 24. Bartolo M, Roberts A, Welbl A, et al. Beat the AI: Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics. 2020;8:662-678. DOI: 10.1162/tacl_a_00338
  25. 25. Meskó B, Görög M. A short guide for medical professionals in the era of artificial intelligence. NPJ Digital Medicine. 2020;3:126. DOI: 10.1038/s41746-020-00333
  26. 26. Kalir RH, Garcia A. Annotation. United Kingdom: MIT Press; 2021
  27. 27. Zech J, Pain M, Titano J, et al. Natural language–based machine learning models for the annotation of clinical radiology reports. Radiology. 2018;287:570-580
  28. 28. Krenzer A, Makowski K, Hekalo A, et al. Fast machine learning annotation in the medical domain: A semi-automated video annotation tool for gastroenterologists. Biomedical Engineering Online. 2022;25:21-33. DOI: 10.1186/s12938-022-01001-x
  29. 29. Price WN II. Artificial intelligence in health care: Applications and legal issues. SciTech Lawyer. 2017;14:10. Available from: https://ssrn.com/abstract=3078704
  30. 30. Sullivan HR, Schweikart SJ. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA Journal of Ethics. 2019;21(2):E160-E166. DOI: 10.1001/amajethics.2019.160
  31. 31. Schönberger D. Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. International Journal of Law and Information Technology. 2019;27:171-203. DOI: 10.1093/ijlit/eaz004
  32. 32. Rodrigues R. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology. 2020;4:100. DOI: 10.1016/j.jrt.2020.100005. https://www.sciencedirect.com/science/article/pii/S2666659620300056
  33. 33. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare. 2020:295-336. DOI: 10.1016/B978-0-12-818438-7.00012-5. Epub 2020 Jun 26. PMCID: PMC7332220
  34. 34. Norton Rose Fulbright. Available from: https://www.insideteachlaw.com/publications/legal-risks#section1
  35. 35. Bartoletti I. AI in healthcare: Ethical and privacy challenges. In: Riaño D, Wilk S, ten Teije A, editors. Artificial Intelligence in Medicine. Cham: Springer; 2019. DOI: 10.1007/978-3-030-21642-9_2
  36. 36. Anderson M, Anderson SL. How should AI be developed, validated, and implemented in patient care? AMA Journal of Ethics. 2019;21(2):E125-E130. DOI: 10.1001/amajethics.2019.125
  37. 37. Atkinson K, Bench-Capon T, Bollegala D. Explanation in AI and law: Past, present, and future. Artificial Intelligence. 2020;289:103387. DOI: 10.1016/j.artint.2020.103387. https://www.sciencedirect.com/science/article/pii/S0004370220301375
  38. 38. Molnár-Gábor F. Artificial intelligence in Healthcare: Doctors, patients and liabilities. In: Wischmeyer T, Rademacher T, editors. Regulating Artificial Intelligence. Cham.: Springer; 2020. DOI: 10.1007/978-3-030-32361-5_15
  39. 39. Boddington P. Introduction: Artificial intelligence and ethics. In: Towards a Code of Ethics for Artificial Intelligence. Artificial Intelligence: Foundations, Theory, and Algorithms. Cham: Springer; 2017. DOI: 10.1007/978-3-319-60648-4_1
  40. 40. Miquido. 2020. Available from: https://www.miquido.com/blog/ai-legal-issues
  41. 41. Royal Free London NHS Foundation Trust. Information Commissioner's Office (ICO) investigation into our work with DeepMind. Available from: https://www.royalfree.nhs.uk/patients-visitors/how-we-use-patient-information/information-commission-office-ico-investigation-into-our-work-with-deepmind/2017
  42. 42. Amann J, Blasimme A, Vayena E, et al. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making. 2020;20:310. DOI: 10.1186/s12911-020-01332-6
  43. 43. Astromskė K, Peičius E, Astromskis P. Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI & SOCIETY. 2021;36:509-520. DOI: 10.1007/s00146-020-01008-9
  44. 44. McNamara M. 2022. Available from: https://www.netapp.com/blog/explainable-ai/
  45. 45. Keskinbora KH. Medical ethics considerations on artificial intelligence. Journal of Clinical Neuroscience. 2019:277-282. DOI: 10.1016/j.jocn.2019.03.001
  46. 46. Geis JR, Brady AP, Wu CC, et al. Ethics of artificial intelligence in radiology: Summary of the Joint European and North American Multisociety Statement. Radiology. 2019;293(2):436-440. DOI: 10.1148/radiol.2019191586
  47. 47. Martinho A, Kroesen M, Chorus C. A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence. Artificial Intelligence in Medicine. 2021;121:102190. http://creativecommons.org/licenses/by/4.0/
  48. 48. The Digital Personal Data Protection Bill. 2022. Available from: https://www.meity.gov.in/content/digital-data-protection-bill-2022
  49. 49. General Data Protection Regulation (GDPR). 2016. Available from: EUR-Lex-32016R0679-EN-EUR-Lex (europa.eu)
  50. 50. Bill Text—AB-375 Privacy: Personal information: businesses. (ca.gov). 2017. Available from: https://leginfo.legislature.ca.gov/faces/billtextclient.xhtml?bill_id20172018ab375
  51. 51. Digital Information Security in Health Care Act (DISHA). 2018. Available from: https://mohfw.gov.in/newshighlights/comments-draft-digital-information-security-health-care-actdisha
  52. 52. CFI Team. Data anonymisation. 2022. Available from: https://www.corporatefinanceinstitute.com/resources/business-intelligence/data-anonymization
  53. 53. Elliot M, O’Hara K, Raab C, et al. Functional anonymisation: Personal data and the data environment. Computer Law & Security Review. 2018;34:204-221. DOI: 10.1016/j.clsr.2018.02.001
  54. 54. Ghazal TM. RETRACTED ARTICLE: Internet of things with artificial intelligence for health care security. Arabian Journal for Science and Engineering. 2023;48:5689. DOI: 10.1007/s13369-021-06083-8
  55. 55. Xi P, Zhang X, Wang L, et al. A review of blockchain-based secure sharing of healthcare data. Applied Sciences. 2022;12(15):7912. DOI: 10.3390/app12157912
  56. 56. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: A review of the opportunities and challenges of artificial intelligence in healthcare. British Medical Bulletin. 2021;139:4-15. DOI: 10.1093/bmb/ldab016
  57. 57. Wiens J, Shenoy ES. Machine learning for healthcare: On the verge of a major shift in healthcare epidemiology. Clinical Infectious Diseases. 2018;66:149-153. DOI: 10.1093/cid/cix731
  58. 58. Keerthy AS, Manju Priya S. Artificial intelligence in healthcare databases. In: Suresh A, Paiva S, editors. Deep Learning and Edge Computing Solutions for High Performance Computing. EAI/Springer Innovations in Communication and Computing. Cham: Springer; 2021. DOI: 10.1007/978-3-030-60265-9_2
  59. 59. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthcare Journal. 2019;6(2):94-98. DOI: 10.7861/futurehosp.6-2-94
  60. 60. Cohen I. Informed Consent and Medical Artificial Intelligence: What to Tell the Patient? Georgetown Law Journal. 2020;108:1425-1469. DOI: 10.2139/ssrn.3529576
  61. 61. Academy of Medical Royal Colleges. Artificial intelligence in healthcare. 2019. Available from: https://www.aomrc.org.uk/wp-content/uploads/2019/01/artificial_intelligence_in_healthcare_0119.pdf
  62. 62. Jones SM, Burleson B. The impact of situational variables on helpers’ perceptions of comforting messages: An attributional analysis. Communication Research. 1997;24(5):530-555. DOI: 10.1177/009365097024005004
  63. 63. Smith KA, Masthoff J. Can a virtual agent provide good emotional support? DOI: 10.14236/ewic/HCI2018.13
  64. 64. Guzman AL, Lewis S. Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media & Society. 2020;22(1):70-86. DOI: 10.1177/1461444819858691
  65. 65. Wang W, Siau K. Living with Artificial Intelligence—Developing a Theory on Trust in Health Chatbots. In: Proceedings of the Sixteenth Annual Pre-ICIS Workshop on HCI Research in MIS. San Francisco, CA; 2018
  66. 66. Bilquise G, Ibrahim S, Shaalan K. Emotionally intelligent chatbots: A systematic literature review. DOI: 10.1155/2022/9601630. CORPUS ID: 252552808
  67. 67. Intelligent Healing for Mental Health. 2022. Available from: https://www.twill.health/mental-health
  68. 68. Meng J, Dai YN. Emotional support from AI Chatbots: Should a supportive partner self-disclose or not? Journal of Computer-Mediated Communication. 2021;26(4):207-222. DOI: 10.1093/jcmc/zmab005
  69. 69. Diederich S, Brendel AB, Morana S, et al. On the design of and interaction with conversational agents: An organising and assessing review of human computer interaction research. Journal of the Association for Information Systems. 2022;23(1):96-138
  70. 70. Sullivan Y, Nyawa S, Wamba SF. Combating loneliness with artificial intelligence: An AI-based emotional support model. 2023. Available from: https://hdl.handle.net/10125/103173. ISBN: 978-0-9981331-6-4
  71. 71. Picard RW. Affective Computing. MIT Media Laboratory Perceptual Computing Section Technical Report 321; 1995
  72. 72. Aonghusa PM, Michie S. Artificial intelligence and behavioural science through the looking glass. Annals of Behavioral Medicine. 2020;54:942-947. DOI: 10.1093/abm/kaaa095
  73. 73. Graham S, Depp C, Lee EE, Nebeker C, Tu X, Kim HC, et al. Artificial intelligence for mental health and mental illnesses: An overview. Current Psychiatry Report. 2019;21(11):116. DOI: 10.1007/s11920-019-1094-0
  74. 74. Shaheen MY. AI in Healthcare: Medical and socio-economic benefits and challenges. 2021. Available at SSRN: https://ssrn.com/abstract=3932277 or DOI: 10.2139/ssrn.3932277
  75. 75. Chakraborty N, Mishra Y, Bhattacharya R, et al. Artificial Intelligence: The road ahead for the accessibility of persons with Disability. Materials Today: Proceedings. 2021. DOI: 10.1016/j.matpr.2021.07.374. Available from: https://www.sciencedirect.com/science/article/pii/S2214785321052330
  76. 76. Smith P, Smith L. Artificial intelligence and disability: Too much promise, yet too little substance? AI and Ethics. 2021;1:81-86. DOI: 10.1007/s43681-020-00004-5
  77. 77. Wall DP, Dally R, Luyster R, et al. Use of artificial intelligence to shorten the behavioural diagnosis of autism. PLoS ONE. 2012;7(8):e43855. DOI: 10.1371/journal.pone.0043855
  78. 78. Chaddad A, Li J, Lu Q , et al. Can autism be diagnosed with artificial intelligence? A narrative review. Diagnostics. 2021;11:2032. DOI: 10.3390/diagnostics11112032
  79. 79. McCoy LG, Nagaraju S, Morgado F, et al. What do medical students actually need to know about artificial intelligence? NPJ Digital Medicine. 2020;3:86. DOI: 10.1038/s41746-020-0294-7
  80. 80. Park SH, Do K-H, Kim S, et al. What should medical students know about artificial intelligence in medicine? Journal of Educational Evaluation Health Professor. 2019;16:18. DOI: 10.3352/jeehp.2019.16.18
  81. 81. Masters K. Artificial intelligence in medical education. Medical Teacher. 2019;41(9):976-980. DOI: 10.1080/0142159X.2019.1595557
  82. 82. Kennedy TJT, Glenn R, et al. Point-of-care assessment of medical trainee competence for independent clinical work. Academic Medicine. 2008;83(10):S89-S92. DOI: 10.1097/ACM.0b013e318183c8b7
  83. 83. Santos DPS, Giese D, Brodehl S, et al. Medical students’ attitude towards artificial intelligence: A multicentre survey. European Radiology. 2019;29:1640-1646. DOI: 10.1007/s00330-018-5601-1
  84. 84. Gong B, Nugent JP, Guest W, et al. Influence of Artificial Intelligence on Canadian Medical Students’ Preference for Radiology Specialty: A National Survey Study. Academic Radiology. 2018;26:566-577. DOI: 10.1016/j.acra.2018.10.007
  85. 85. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: Focus on clinicians. Journal of Medical Internet Research. 2020;22(6):e15154. DOI: 10.2196/15154
  86. 86. LaRosa E, Danks D. Impacts on trust of healthcare AI. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ‘18). New York, NY, USA: Association for Computing Machinery; 2018. pp. 210-215. DOI: 10.1145/3278721.3278771
  87. 87. Kelly CJ, Karthikesalingam A, Suleyman M, et al. Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine. 2019;17:195. DOI: 10.1186/s12916-019-1426-2
  88. 88. Challen R, Denny J, Pitt M, et al. Artificial intelligence, bias, and clinical safety. BMJ Quality & Safety. 2019;28:231-237
  89. 89. Duggal N. Advantages and disadvantages of artificial intelligence. 2023. Available from: https://www.simplilearn.com/advantages-and-disadvantages-of-artificial-intelligence-article
  90. 90. Reinhart RJ. Available from: https://news.gallup.com/poll/228194/public-split-basic-income-workers-replaced-robots.aspx
