
Toward the Clinic: Understanding Patient Perspectives on AI and Data-Sharing for AI-Driven Oncology Drug Development

Written By

Roberta Dousa

Submitted: 27 September 2019 Reviewed: 11 May 2020 Published: 09 September 2020

DOI: 10.5772/intechopen.92787

From the Edited Volume

Artificial Intelligence in Oncology Drug Discovery and Development

Edited by John W. Cassidy and Belle Taylor


Abstract

The increasing application of AI-led systems for oncology drug development and patient care holds the potential to usher in pronounced impacts on patients’ well-being. Beyond technical innovations and infrastructural adjustments, research suggests that realizing this potential also hinges upon patients’ trust and understanding. With the promise of precision oncology predicated on a data-driven approach, public and private survey studies indicate patients view the lack of clarity surrounding data privacy, security, and ownership as a growing concern. Using an in-depth, semi-structured interview protocol, this qualitative study examines cancer patients’ perceptions of the burgeoning development of AI-led systems for oncology as well as their perspectives on sharing health data (including genetic data) for drug development. This article seeks to provide greater insight into the legal and ethical challenges that surround the application of these tools and to explore patient-centered approaches to building the frameworks of trust and accountability crucial to transferring these advances to the clinic.

Keywords

  • AI
  • oncology drug development
  • health data-sharing
  • cancer patients

1. Introduction

Recent decades have witnessed major advances in AI systems, which have spurred increased interest in applying AI-driven technologies to oncology drug development and cancer patient care. Beyond technical and infrastructural adjustments, improvements, and innovations, recent studies suggest that realizing the potential of AI in healthcare and applying data-driven models to oncology drug development hinges in part upon the trust and understanding of the public, and of potential patients and users especially. For oncology drug development and research, the importance of the public’s capacity for trust extends to both the use of AI and data-sharing. Public and private survey studies indicate patients view the lack of clarity surrounding data privacy, security, and ownership as a growing concern. Exemplifying this, in September 2018, a KPMG survey of over 2000 Britons found that 51% of its participants were both worried about data privacy and unwilling to share personal data with U.K. organizations for AI research and use [1]. In addition, the U.K.’s Academic Health Sciences Network, in conjunction with the Department of Health and Social Care, released a report delineating the results of a 2018 “state of the nation survey.” Similar to the findings of the KPMG survey, this report, titled “Accelerating Artificial Intelligence in Health and Care,” identified that, according to pioneers in the field, the “overall enablers” to realizing the potential of AI in health and care include an “ethical framework to build/preserve trust and transparency” as well as “clarity around ownership of data” [2]. Likewise, when asked, “which of the following areas do you think will be the greatest problem for artificial intelligence,” the KPMG survey respondents’ top answer was “data privacy and security.”

As patients increasingly view the lack of clarity surrounding data privacy, security, and ownership as a growing concern, the potential benefits of AI-led oncology drug development and oncology care systems must not be accepted as superseding their potential to enact social harm. Because public and patient approval and participation contribute to the use and development of these systems, it is imperative to study patient perceptions of AI and AI-led oncology drug development endeavors and to heed and address public concerns. Accordingly, this chapter enlists and examines ethnographic, textual, and other qualitative data that the author has assembled in pursuing a broad examination of the legal, political, and ethical imperatives surrounding the development of AI-driven systems for healthcare and for oncology, specifically. This chapter provides new evidence for understanding patient reception of the development and deployment of these systems, as well as patients’ perceptions of and willingness to participate in health-related AI development by sharing medical data, necessary to build these systems and advance their efficacy, with the public and private entities engaged in developing them. Rooted firmly in interview work produced utilizing an in-depth, semi-structured interview protocol, this chapter offers insights into the legal and ethical quandaries that surround the application of these tools in order to ultimately explore and assess patient-centered approaches to building the frameworks of trust and accountability fundamental to transferring these advances into clinical settings for the betterment of patient outcomes.

This chapter opens by offering a contextual scaffolding for understanding the terms AI and machine learning. Subsequently, the author provides an introductory overview of how AI-led systems might be applied to clinical contexts and oncology settings, respectively. This is followed by a discussion of some practical considerations and challenges to AI-enabled healthcare applications. The author then provides an overview of the study’s methods and methodology to further contextualize the remaining discussion, which relies heavily upon the author’s original qualitative research. This leads to a discussion of patient perceptions, knowledge, and concerns regarding AI-driven systems for oncology, drug development, and medicine more broadly. Immediately after, the author stages an exploration of patient perceptions and concerns regarding sharing their medical data to bolster AI and oncology drug development research. The final section of this chapter discusses further patient-centered recommendations and proposals for ensuring patient trust, participation, and safety pertinent to increasing the development and clinical use of AI systems for oncology.


2. Defining AI and machine learning

2.1 Defining AI systems and intelligence

Although conceptions of sentient, machinic animacies can be traced as far back as antiquity, the understanding of “AI” or “artificial intelligence” as a term and field of study originated in 1956 following a conference organized at Dartmouth College by the American computer scientist John McCarthy. McCarthy, who himself coined the term, is often hailed as a preeminent pioneer of AI. McCarthy defined artificial intelligence as “the science and engineering of intelligent machines” [3]. The ensuing decades saw the salad days of what the analytic philosopher John Haugeland names “Good Old Fashioned AI” (GOFAI). Within the paradigm of GOFAI, artificial intelligence essentially referred to “procedural, logic-based reasoning and the capacity to manipulate abstract symbolic representations” [4]. For instance, the commercial “expert systems” of the 1970s and 1980s are typically understood as exemplifying GOFAI. By 1969, however, data scientists had begun to seriously question the general viability of AI as well as the initial, florid promise that surrounded these systems. The deflation of these expectations, coupled with considerable decreases in grant support and research output, led to an “AI winter,” which lasted for approximately the next 20 years until a renewed interest in machine learning techniques propelled AI research forward [5].

In contrast to GOFAI, the “intelligence” at stake in contemporary AI systems is typically understood to imbricate machine learning techniques. Intelligence, in the current paradigm, is thought to derive from systems’ abilities to detect patterns across vast datasets and predict outcomes based on probability statistics. In other words, today algorithmic systems are deemed AI provided they process and analyze amounts of data beyond the scope of an individual human in order to predict and automate certain activities. Critical to understanding AI’s consequences for epistemology and social practice, anthropologist of technology M.C. Elish stresses that “the datasets and models used in these systems are not objective representations of reality,” as a system that utilizes machine learning techniques “can only be thought to ‘know’ something in the sense that it can correlate certain relevant variables accurately” [4].

With some cognizance of the shifting valences the term has accrued in the decades since the 1950s, AI might be otherwise understood as “a characteristic or set of capabilities exhibited by a computer that resembles intelligent behavior,” although, evidently, delimiting what might be understood as “intelligence” remains a crucial yet unresolved and contested dimension in defining AI [6]. Some researchers consider artificial intelligence to be contingent on behavioral demarcations, ostensible when a “computer can sense and act appropriately in a dynamic environment” [6]. Others link intelligence to symbolic processing, exhibited, for instance, when a system can recognize and respond appropriately to speech [6].

2.2 Machine learning: “imposing a shape on data”

Given the breadth of the term’s inherent contestations, evolutions, and stubborn fluidity, social researchers of technology such as Tim Hwang and M.C. Elish contend that definitions of artificial intelligence and intelligent systems might be appropriately understood as “moving targets.” Rather than possessing a static set of demarcations signaling intelligence, artificially intelligent systems are defined in relation to “existing beliefs, attitudes, and technology” [6]. They argue that the rhetorical power of “artificial intelligence” is found in its “slipperiness”: seemingly everyone has an idea of what AI is, and yet everyone’s notion is different [6]. In consequence, data scientists and engineers today tend to shy away from the term “artificial intelligence.” Indeed, the equivocality of “artificial intelligence” has siloed “AI” as a marketing term rather than a technical one [4].

Current research in artificial intelligence occurs primarily in the field of machine learning (ML). Although “machine learning” was coined in 1959, significant interest in these techniques did not follow until the 1980s, after further developments in techniques such as neural networks. Digital medicine researcher Eric Topol argues that machine learning can be understood as “computers’ ability to learn without being explicitly programmed, with more than 50 different approaches like Random Forest, Bayesian networks, Support Vector machine uses”; such systems are “computer algorithms [that] learn from examples and experiences (datasets) rather than predefined, hard rules-based methods” [5]. Computer scientist Tom Mitchell has elaborated on what “learning” refers to in the context of ML systems. Mitchell writes: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E” [7].
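To make Mitchell’s framing concrete, the following minimal Python sketch (using the scikit-learn library and entirely synthetic data; the sample sizes and the hidden labeling rule are illustrative assumptions, not drawn from this study) shows a program whose performance P on a classification task T improves as it receives more experience E:

```python
# Minimal sketch of Mitchell's task/experience/performance framing.
# All data here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Task T: classify two-feature samples into two classes.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a simple hidden rule to recover

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Experience E: progressively larger sets of labeled examples.
for n in (10, 100, len(X_train)):
    model = LogisticRegression().fit(X_train[:n], y_train[:n])
    # Performance measure P: accuracy on held-out examples.
    print(f"trained on {n:4d} examples -> "
          f"held-out accuracy {model.score(X_test, y_test):.2f}")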

Put differently, media and communications scholar Taina Bucher explains that although “algorithms are ‘trained’ on a corpus of data from which they may ‘learn’ to make certain kinds of decisions without human oversight…machines do not learn in the same sense that humans do.” Rather, Bucher argues, “the kind of learning machines do should be understood in a more functional sense” [8]. Citing legal scholar Harry Surden, Bucher explains that machine learning-driven systems are “capable of changing their behavior to enhance their performance on some task through experience” [8].

Machine learning is largely enabled by “proliferating data from which models may learn.” It follows that enormous datasets are paramount for developing effective ML systems. Machine learning techniques such as logistic regression models, k-nearest neighbors, and neural networks generally “pivot around ways of transforming, constructing, or imposing some kind of shape on the data and using that shape to discover, decide, classify, rank, cluster, recommend, label, or predict what is happening or what will happen” [9]. Bucher underscores that what determines whether to use one technique over another “depends upon the domain (i.e., loan default prediction vs. image recognition), its demonstrated accuracy in classification, and available computational resources, among other concerns” [8].
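As a toy illustration of “imposing a shape on the data,” the sketch below (with assumed synthetic two-dimensional points and an assumed choice of two clusters) uses k-means clustering, one of the family of techniques described above, to summarize unlabeled points as cluster centers; the “shape” imposed is the assumption that the data fall into k groups:

```python
# Sketch of "imposing a shape on data": k-means assumes the data fall
# into k clusters and summarizes them by k centroids.
# The data and the choice of k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two synthetic blobs of unlabeled two-dimensional points.
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(3.0, 3.0), scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)  # the imposed "shape": two cluster centers
print(kmeans.labels_[:10])      # cluster assignments used to group the data
```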

Machine learning systems are distinct from deterministic algorithms in that “given a particular input, a deterministic algorithm will always produce the same output by passing through the same sequence of steps,” while an ML algorithmic system “will learn to predict outputs based on previous examples of relationships between input data and outputs” [8]. In other words, Bucher notes that “in contrast to the strict logical rules of traditional programming, machine learning is about writing programs that learn to solve problems by examples…using data to make models that have certain features” [8]. Feature engineering involves extracting and selecting the most important aspects of the data for machine learning [8]. Signaling the constructed subjectivity of the knowledge produced by systems utilizing machine learning techniques, Bucher explains that “the understanding of data and what it represents, then, is not merely the matter of a machine that learns but also of humans who specify the states and outcomes in which they are interested in the first place” [8].
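The contrast Bucher draws can be sketched as follows; the rule, thresholds, and feature names here are hypothetical, chosen only to illustrate the difference between a hard-coded decision procedure and one learned from labeled examples:

```python
# Sketch contrasting a deterministic algorithm with a learned model.
# The rule, thresholds, and feature names are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def deterministic_flag(cell_size: float, irregularity: float) -> bool:
    """Deterministic: the same input always passes through the same
    hard-coded steps to the same output."""
    return cell_size > 5.0 and irregularity > 0.7

# Feature engineering: humans decide which aspects of the raw data the
# model will see -- here, two hand-picked features per sample.
rng = np.random.default_rng(2)
features = rng.uniform(low=[0.0, 0.0], high=[10.0, 1.0], size=(500, 2))
labels = np.array([deterministic_flag(s, i) for s, i in features], dtype=int)

# ML alternative: the decision boundary is inferred from labeled examples
# rather than written out as explicit rules.
model = LogisticRegression().fit(features, labels)
print(model.predict([[6.0, 0.8]]))  # boundary learned from data, not coded
```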


3. AI systems for oncology and oncology drug development

3.1 AI-enabled medical care

AI systems have been deployed in healthcare contexts since at least the 1970s, following the development of computer-assisted clinical decision support tools; however, the last decade in particular is thought to have been a watershed moment for the nexus of AI systems and healthcare. The advent of so-called big data analytics, coupled with crucial advances in machine learning techniques (specifically, the exponential development of new deep learning algorithms), has propelled both the development of, and a far-reaching rejuvenated interest in, applying these models for medical usage. This has compelled technologists, medical researchers, venture capitalists, and media pundits, among others, to question whether we are witnessing the dawning of a new era of medicine. In the past several years alone, leading-edge advances in machine learning have enabled AI-driven systems to accurately identify heart rhythm abnormalities and predict suicides at a better rate than mental health professionals; to successfully interpret pathology slides of potential neoplastic tissues or medical scans with the same rate of accuracy as (at times, even exceeding that of) senior pathologists and radiologists; and to accurately diagnose a multitude of eye ailments, such as diabetic retinopathy, as well as some skin cancers at a similar rate to (and in some instances, better than) medical professionals [5]. Beyond these examples, other current efforts are directed at training AI systems to identify modifications in drug treatment protocols and to predict clinical outcomes.

3.2 AI-enabled cancer care

These celebrated developments, as well as a host of others, have led researchers in oncology-related fields to ask how AI systems might be deployed to improve clinical outcomes for patients with cancer. Health researchers are emboldened by the promise that any piece of medical data able to be translated analytically, such as “patterns, predictable outcomes, or pair associations,” can be effectively evaluated by machines [10]. Currently, AI-based approaches to clinical trial design, pathology, and radiology are being studied for effectiveness with encouraging results. Other promising applications of AI are under development. For example, data and medical scientists are endeavoring to integrate and analyze individuals’ multi-omics data (such as individuals’ genomes) using AI. The ultimate goal of this cooperative research is to usher in a new standard of tailored or personalized medical care with the potential to improve clinical outcomes for patients with cancer. While some researchers and data scientists are pursuing the deployment of multi-omics data to improve early diagnosis in oncology, others are hoping AI-enabled approaches will aid in the continuing discovery of new and increasingly sensitive biomarkers for cancer care [10]. Healthcare professionals, researchers, and data scientists hope that, in the near future, complex biomarkers will constitute an improved basis for cancer prevention and diagnosis, offer patients optimal treatments based on the particular characteristics of their cancer, and aid medical professionals in determining the likelihood of recurrence [11].


4. Practical considerations and challenges to AI-enabled healthcare

4.1 Contextualizing the hype: AI limitations

Accompanying the renewed interest in applying machine learning techniques to health data has been a buzz of exaggerated claims and overdrawn expectations regarding how quickly and comprehensively AI will transform modern medicine. Digital medicine researcher Eric Topol offers a partial list of the “outlandish expectations” escorting the development of AI-enabled healthcare. Some envision that soon these systems will “outperform doctors at all tasks; diagnose the undiagnosable; treat the untreatable; see the unseeable on scans and slides; predict the unpredictable; classify the unclassifiable; eliminate workflow inefficiencies; eliminate hospital admissions and readmissions; eliminate the surfeit of unnecessary jobs; result in 100% medical adherence; produce zero patient harm; and cure cancer” [5]. Instead, Topol and other medical researchers assume a more modest view: AI-driven systems will not serve as a panacea for all the aforementioned predicaments in modern healthcare but will instead gradually become an increasingly important tool in addressing these and other issues. Moreover, medical experts and technologists alike contend that the encouraging results AI-driven systems have garnered in fields like pathology and radiology, for example, should be taken neither as a justification for the outsourcing of pathologists and radiologists, nor as pointing to the burgeoning obsolescence of medical specialists as a whole [10]. Rather, they stress that these initial successes should be understood as an “indication that their workload could be optimized and, importantly, the waiting time for patients to receive a diagnosis can be reduced” [10]. In this perspective, over time, the widespread adoption of AI systems in healthcare will result in a crucial leveling of the “medical knowledge landscape” [5]. As a consequence, some medical researchers believe that advances in AI and the eventual adoption of these systems within the realm of healthcare will bring unprecedented advantages to modern medical specialists by “restoring the gift of time” to health professionals, allowing them to devote more clinical attention, emotional support, and guidance to patients [5].

4.2 Tempering visions of imminent medical revolutions

While in the past decade the development of AI systems for use in the medical field has certainly progressed and led to feats that have garnered significant attention, these successes remain arguably limited and the progression of these systems decidedly gradual. Taking the field of narrow AI diagnostics as an example, recent systems have accurately diagnosed skin lesions and pathology slides in the realm of oncology. In cardiology, AI diagnostic systems have accurately interpreted echocardiographic images and electrocardiograms in diagnosing heart abnormalities [5]. Other AI diagnostic systems have successfully analyzed audio waveforms to assist in diagnosing asthma, pneumonia, tuberculosis, and other lung ailments [5]. All of these successes, however, constitute narrow AI tools that, in reasonable estimations, would serve to aid rather than replace medical professionals. By contrast, one broad AI diagnostic system sits in the recent memory of some oncologists as a stunning failure that highlights the limitations of AI-enabled healthcare at present. From its early inception, IBM’s AI-driven Watson supercomputer was hailed by the company as harnessing the power to revolutionize cancer care. Beginning in 2013, IBM initiated partnerships with leading medical institutions renowned for their research in oncology, such as the MD Anderson Cancer Center at the University of Texas, the Memorial Sloan-Kettering Cancer Center in New York, and the University of North Carolina’s Lineberger Comprehensive Cancer Center. IBM bought a multitude of competitor companies and spent millions in order to train Watson on crucial medical data, including biomedical literature, patient histories and data, billing records, and medical histories. Although Watson had some success at the University of North Carolina in identifying relevant clinical trials for patients and suggesting potential treatments based on its ability to ingest peer-reviewed biomedical literature, Watson was deemed a failure and scrapped by MD Anderson in early 2016 following missed deadlines, a series of fruitless pilot projects, and continuous changes to the types of cancer that would harness Watson’s focus. Watson’s problems at MD Anderson involved a limited ability to understand and suggest actionable insights from the medical data it ingested, which was made worse by fragmentary clinical data and a lack of evidentiary support in the studies it analyzed. The project cost MD Anderson over 62 million dollars before its collapse, and investing in Watson proved a remarkable blunder for the cancer research center [12]. A former manager at IBM offers a further reason why the project failed so miserably in its lofty efforts to transform oncology. In his estimation, IBM “turned the marketing engine loose without controlling how to build and construct a product” [5]. Topol summarizes that while “there is certainly potential for computing to make a major difference [in medicine and oncology more broadly]… so far there has been minimal delivery on the promise.” Topol contends that the difficulty of assembling and aggregating data has been underestimated, not just by Watson’s developers, but by a myriad of tech companies venturing into healthcare [5].
The hype surrounding AI-enabled healthcare tools and, indeed, the fortunes at stake lead technology producers, marketers, commentators, investors, patients, and medical specialists to overestimate the speed of development and delivery of AI systems, and can result in ungrounded and uncritical conceptions both of these technologies’ potential to make significant, comprehensive impacts on medical care and of the liabilities they can incur.

4.3 Defining standards and ensuring quality access to care in a context marked by enduring health inequities

Beyond a modest view of the rates of widespread AI development and deployment, the potential instantiations of AI-enabled healthcare also bring other critical considerations and challenges to the fore. One of the current challenges hampering AI-enabled approaches for routine use in clinical settings is the lack of coherent standards for these tools. The disparate development of tools utilizing machine learning techniques has produced a paradigm in which the same clinical question is addressed by separate systems developed in independent institutions. Validated on particular and distinct datasets or samples, these systems may produce different outputs, which can ultimately result in differing clinical recommendations and patient outcomes [10]. For example, pathologists can disagree over whether a biopsy sample taken from breast tissue is cancerous, which some studies suggest has contributed to an over-diagnosis of breast cancer. The subtle abnormalities exhibited by small, early-stage cancers are particularly difficult to diagnose. This issue extends beyond breast cancer to diagnosing melanomas, thyroid cancer, and prostate cancer. Existing clinical disagreement over what constitutes cancer may lead to cancer screening AI tools that mimic a tendency for over-diagnosis [13].

When applying an AI-driven tool in a clinical scenario, clinicians and other health professionals across institutions and national borders must have definitive assurances of scalable clinical standardization to deliver appropriate quality of care. Consequently, this requires international collaboration that must necessarily involve technology producers, clinical specialists, and regulatory bodies. Moreover, ensuring all patients have access to state-of-the-art, AI-driven healthcare remains a significant challenge. As with other new technologies, experts predict that AI-enabled medical tools will initially be extremely costly for health institutions and will gradually decrease in expense over time. Given the potential of, for example, more timely diagnosis and improved disease monitoring made possible by AI tools, patients being treated at medical centers able to afford AI resources are likely to experience better health outcomes than those at institutions without the financial means to invest in these expensive resources. In addition to possessing considerable economic resources, medical centers may also need to train health professionals in the workings and use of these tools, which presents another potential hurdle to the widespread deployment of these systems.

Furthermore, the U.S.-based research of both Robert A. Winn, professor of medicine and clinical surgery at the University of Illinois, and anthropologist Kadija Ferryman of the Data and Society Research Institute enjoins them to contextualize AI-driven success stories in medicine—especially in the realm of cancer care—against the backdrop of enduring health disparities in the United States. Although health expenditures in the U.S. are colossal, with healthcare constituting more than 18% of the United States’ gross domestic product (climbing more than 10 percentage points since 1975), increased healthcare spending has not corresponded to improved healthcare outcomes across population groups in the U.S. [5]. Ferryman and Winn stress the sobering fact that people of color continue to have disproportionately higher incidence and mortality rates for multiple cancers (among them: kidney, breast, cervical, and prostate cancer) as they pose the following question: “As big data comes to cancer care, how can we ensure that it is addressing issues of equity, and that these new technologies will not further entrench disparities in cancer?” [14]. Winn and Ferryman join other medical researchers in arguing that a shift toward increased usage of medical AI tools necessitates not only population-representative data accessibility coupled with regulatory paradigms to ensure standardization and quality, but also the prioritization of healthcare equity, ethical health mandates, and inclusivity [15].

For example, Winn and Ferryman bring attention to how such a shift would impact the clinical responsibilities of health professionals. They reason that, due to the nature of clinical care, clinicians must be able to assess, understand, and explain machine learning-driven systems to patients. Consequently, this necessitates a certain level of transparency in how these systems are trained, developed, and produce outputs; these systems cannot be fully “black-boxed.” With the capacity of these technologies to refigure clinicians’ responsibilities, Winn and Ferryman echo a chorus of legal scholars who forewarn that a more robust integration of AI tools in clinical settings may incur both a transformation of the patient-doctor relationship and a reconceptualization of the regulations surrounding malpractice. Winn and Ferryman further contend that a shift in clinicians’ liabilities and obligations to demystify AI systems for patients may incur higher stakes for patients with “limited access to high quality clinical care, limited health literacy, earned mistrust of medical providers, and those individuals who may be exposed to interpersonal and institutional racism and discrimination in their healthcare encounters” [14]. They argue that it is critical that the potential ramifications for vulnerable patients of integrating AI technology in the clinic be not only acknowledged but also consistently and intentionally managed. Together, the aforementioned challenges constitute only a small sampling of the issues that must be addressed before a successful, widespread adoption of AI-driven medical tools can be undertaken.


5. Methods and methodology

This chapter is informed by and enlists textual, ethnographic, and other qualitative data that the author has collected in undertaking a broad examination of the legal, political, and ethical imperatives surrounding the development of AI-driven systems for healthcare and for oncology, specifically. This study attends to patient reception of the development and deployment of these systems as well as patients’ perceptions and willingness to participate in health-related AI development by sharing medical data necessary to train and improve these systems with public and private entities engaged in developing them.

This analysis draws upon qualitative research methods, including textual and content analysis of academic literature reviews, general audience media, and industry-oriented publications. This study was further augmented by an in-depth, semi-structured interview protocol. In addition to attending cancer patient conferences, talks, and support groups, the author conducted interviews with 40 relevant stakeholders. The approximately 15 hours of observation and 40 interviews were undertaken during the first 11 months of 2019. Interlocutors included cancer patients and their caregivers, cancer patient advocates, directors and specialists at cancer care nonprofits, technologists employed at firms developing AI tools for oncology, and clinicians. The interview corpus of this study includes 9 U.S. citizens, 29 U.K. citizens, and 5 citizens from the European Union. Interviews were primarily conducted in cities located in Northern California in the U.S. as well as in London and Cambridgeshire in the U.K.

Among the interview corpus, 28 individuals are cancer patients who were actively undergoing treatment or who were in remission at the time of the interview. Seven patients were in remission when the interviews were conducted, and the remaining 21 patients were receiving treatment for their cancers. Twenty-four of the patients were born between 1939 and 1960. The four remaining patients were born after 1983; the youngest patient interviewed was born in 1990. The majority of the patients interviewed are retired from the workforce, having had previous careers as secretaries, telecom and systems engineers, insurance salesmen, military logisticians, child-care providers, librarians, photographers, teachers, and small-business owners. At the time of the interviews, patients in the interview corpus who were in the workforce were employed as data scientists, teachers, lab technicians, and engineers. Those with employment generally worked part-time as a result of their continuing treatments. Other patients were stay-at-home parents, and one patient is a doctoral student. When asked about socioeconomic status, most patients considered themselves to be middle-class. The patients in this interview corpus are of white, European descent, although the interview corpus as a whole and the ethnographic work that supplements this study involved patients, advocates, and healthcare professionals from other ethnic and racial backgrounds. The patients interviewed possessed a multitude of different cancer diagnoses. Three of the patients interviewed had received a diagnosis of colorectal cancer; six had received a diagnosis of breast cancer; two had received a diagnosis of cervical cancer; six had received a diagnosis of bladder cancer; five had received a diagnosis of prostate cancer; seven had received lymphoma diagnoses; four had received a diagnosis of myeloma; and one patient had received a diagnosis of skin cancer (several patients had developed multiple cancers).

Three individuals within this interview corpus are clinicians. Two of these clinicians are senior oncologists with extensive experience working at illustrious cancer research hospitals in the United Kingdom, Ireland, Italy, and the United States. The final clinician is completing their initial rotation-years as a pediatrician at a premier research hospital on the west coast of the United States; this clinician previously earned a doctorate in medical anthropology, with dissertation research on patient data-sharing and patient reception of self-tracking approaches to medical care. Seven interviews were conducted with data scientists, bioinformaticians, and start-up founders who work or previously worked at a U.K.- and U.S.-based AI oncology-related start-up.

The remaining five interlocutors comprising this interview corpus are trained cancer patient advocates. One of these patient advocates is a U.K. citizen and the remaining four are U.S. citizens. All are based in California, although three of them occasionally serve as advocates and patient ethicists for projects at renowned cancer research institutes in other U.S. states. One of these advocates is a licensed nurse practitioner with experience in global health consultancy; this advocate currently serves as the program director of a nonprofit cancer care center and clinic located in a Northern Californian metropolis. Another cancer patient advocate in this corpus has nearly three decades of experience and has earned several awards and accolades for her advocacy work. Previously, this advocate was employed as a patient advocate and advisor at an eminent national cancer support organization and is presently employed as a senior patient navigator with a focus on multicultural patient support at a California nonprofit that primarily caters to local, low-income women of color who have received cancer diagnoses, although the organization remains open to patients of all backgrounds. In her role, this advocate guides patients through treatment, clinical trial options, and hospital visits; assists patients with insurance forms and other medical paperwork; and provides patients with counseling and much-needed psycho-social support. This advocate regularly serves on cancer patient advocate conference committees, counsels researchers seeking to work with cancer patients, and acts as a grant reviewer for emerging research ventures. The three remaining advocates have diverse employment histories in the fields of marketing, graphic design, emergency medicine, and nonprofit leadership. For nearly 15 years, these trained cancer survivor advocates, who have expertise in research and patient communication, have worked with national and local advocacy organizations, serving on survivorship and research committees for various academic, nonprofit, and governmental organizations. They frequently serve as research partners and advocates on scientific review committees and act as grant reviewers for emerging university-led research projects in the state of California. They also serve on clinical trials advisory committees as advocate observers, patient advisors, and stakeholder reviewers in partnership with state and national research bodies as well as national cancer research organizations, including the National Institutes of Health, the American Cancer Society, the Department of Defense, and the Susan G. Komen Foundation. These advocates routinely volunteer at local nonprofits as helpline attendants and peer mentors to cancer patients currently undergoing treatment.

Collating and analyzing this qualitative data, this chapter offers insight into the perceptions and concerns that some cancer patients (particularly those identifying as middle-class and of European descent), patient advocates, and clinicians residing in the Home Counties of the U.K. and Northern Californian metropolises possess regarding both the deployment of AI systems for oncology and the sharing of health data with public and private entities for the purpose of developing these systems. Complementing the textual and ethnographic data, the interview work conducted enumerates popular dispositions toward biomedical technological development for oncology within the socially stratified societies of the U.S. and the U.K., and refracts the particular exigencies of pursuing cancer treatments within the two nations’ contrasting healthcare systems.

Researchers studying the social and technical valences of AI continue to insist upon the foundational legitimacy and, indeed, the value of studying popular conceptions of machine learning-driven systems. Public perceptions contribute to the fashioning of the material and discursive realities these systems act upon and within, and furthermore constitute collective contestations of the political realities, ethical liabilities, and financial viabilities immanent to the social production of these technological systems. Correspondingly, Mateescu and Elish contend: “When it comes to understanding the impact of AI, the social perceptions of a technology’s capabilities are equally important to technical definitions. Elsewhere we have observed that non-expert understandings of AI are often shaped by marketing rhetoric, which sometimes suggests capabilities that are not yet technically possible. For many developers of AI systems, this potential fuzziness is ‘not a bug but a feature,’ so to speak. The public perception of AI is often leveraged to drum up excitement or stand in for a range of automated technologies that haven’t yet become fully actualized. The fluctuating understandings of AI will not be universally resolved, and so it is necessary to account for the consequences of AI as defined through both technical definitions and social representations” [16]. Public trust, knowledge, and perception of health data-sharing and AI development and deployment will undoubtedly influence how governments, health organizations, and corporate entities continue to debate, contest, insist on, and invest in AI viability for medical usage. For this reason, public perceptions may also impact the material development of these systems (e.g., via a collective willingness or unwillingness to use these systems for medical treatment or to engage in health data-sharing for AI development); the regulatory mandates and other frameworks of standardization, equity, and accountability pursued in their wake; the funding and long-term economic feasibility of these systems; and perhaps even the meaning of what medical care can or should constitute.


6. Understanding patient perceptions of AI-driven systems for healthcare

This chapter presents an analytic overview of the extent of knowledge a sample of U.S. and U.K. patients possesses regarding AI systems for oncology and oncology-related drug development, as well as healthcare more broadly. Similar to other recent studies, the qualitative research from which this discussion derives indicates that general public audiences (inclusive of cancer patients) continue to possess varied notions not only of what constitutes AI, but also of what capabilities these AI systems hold and the extent of proficiency with which they presently perform them. With varied (although certainly increasing) levels of sophistication, cancer patients (as evidenced in the interview sample) are questioning what potential ramifications patients should be aware of, and potentially concerned about, regarding the usage of AI systems for healthcare and oncology. They question what emotive and affective positions they should take with regard to AI. Certainly, patients possess divergent understandings of both when and how this technology may impact or augment the standard of care within oncology that directs recommended treatment paths and contributes to patient outcomes. Nevertheless, many are attentive to the limitations of their current knowledge regarding these systems. In consequence, patients are questioning how they can stay informed, what constitute trustworthy sources from which they can glean accurate and legible information, and what specific types of inquiries they should be attending to.

In order for healthcare technologies to be effectively responsive to patients’ needs, it is evident that institutions, persons, and entities involved in developing instruments that can affect cancer patients’ quality of care must not only assess patients’ present knowledge and perceptions of emerging technology, but also heed and address their resultant questions and concerns. With a preponderant focus on analyzing the interview data the author has collected, this chapter assesses the express knowledge, perceptions, and suggestions a sample of U.S. and U.K. patients possess regarding AI systems for oncology and oncology drug development. Specifically, this chapter enumerates four primary analytical axes in attending to this dataset: cancer patients’ perceptions of AI systems for oncology; their willingness to contribute to the development of the efficacy of these systems and to AI-driven drug development via medical data-sharing; the concerns they bear regarding both the deployment and integration of these systems as well as health data-sharing for the aforementioned purposes; and, finally, the recommendations patients and relevant experts are proposing for building accountability measures to ensure safe usage and improve patient trust.

6.1 Patients’ expressed levels of knowledge

In characterizing the knowledge the patient interlocutors comprising the interview corpus possessed at the time the interviews were conducted, it is principally important to register that the vast majority of these cancer patients had no formal or professional training with regard to these systems. While four of the patients offered examples demonstrating how they currently utilize or previously utilized machine learning systems in their employment, the remainder (86% of the patient interlocutors) had no professional experience with these systems and learned about AI primarily through general audience media. All things considered, the interview data are largely representative, then, not just of modes of public perception but of lay opinion. All of the patients, with the exception of one, registered having heard the term AI and exhibited a capacity to grasp its most basic principles. These interlocutors, moreover, often went on to describe the applicability of AI tools or machine learning-driven systems within healthcare contexts. Furthermore, when offering examples, patient interlocutors chiefly cited examples from both oncology and general practitioner diagnostics. Those who demonstrated a familiarity with the application of machine learning systems to oncology most frequently cited its current applicability to pathology, medical robotics, and multi-omics data-handling. Five patient interlocutors within the interview corpus related having previously prepared reports or presentations in which they offered an introductory overview of AI and its applicability to oncology for cancer patients and advocates, or otherwise for general public audiences.

Unsurprisingly, the interviews exhibited a wide range of patient articulations of the foundational aspects of AI systems. For example, when asked what they knew about AI, one patient insisted that they knew “very little”: “I would assume it has something to do with algorithms. In [our support] group, we’ve talked about how there might be some algorithms that can be used for diagnostic tools for GP’s. To me, I don’t know if this is right, but AI has to do with data-handling. There’s so much data out in this world and we have to think about how are we going to make it useful.” After explaining that they first learned about the principles of AI from early science fiction novels (such as Isaac Asimov’s I, Robot), one patient defined AI in the following terms: “Well, I would say it is, basically, a computer that is capable of interpreting input, and making deductions from input that it is given. Obviously, the way it responds to that is being presumably programmed by a human being. But, I believe that computers, or at least AI, are capable of taking it beyond that, they’re capable of learning from the basic information they’ve been given and building on that.” Another patient interlocutor explained: “Well, to my mind, AI is programming a computer of some sort to take various inputs and to learn from them, basically. So if you got say, a visual system—cancers on an x-ray for example, you would have a system that you could teach. Say, put through a number of positives or a number of negatives of say a thousand scans and maybe a hundred of those are positive and you teach it to compare it to the negative ones and identify which ones are positive. Then you can leave it on its own to work by itself from that point on. Y’know once you are satisfied that its strike-rate is sufficient. You can leave it to its own devices. That’s how I kind of look at medical AI anyway. I also think AI is very much a black box, just from what I’ve seen on the telly. You set it going but you don’t necessarily understand how it’s doing it. [laughing]… Whether that’s true or not, I don’t know. But that’s my perception of it from the popular media I guess…I have no idea just how much AI is actually out there and performing at the moment, if you see what I mean. How far it’s come; how much use there is for it at present; whether it still remains a largely experimental field.” These three explanations offer a triangulation of the amount of knowledge and the levels of coherency the majority of the cancer patients interviewed expressed to the author. Many could give a relatively clear articulation of the elementary facets of machine learning or of how AI algorithms function. Typically, this sample of patients demonstrated an understanding that these systems function to process vast amounts of data, that with appropriate engineering and sufficient training datasets these algorithms can be “trained” to identify relevant variables as outputs, and that AI can be applied to medical data and has potential use for oncology.

Excluding the four patient interlocutors who have professional training in and experience with AI systems, patients related that they had arrived at their current level of knowledge through general audience media. In particular, all of the remaining patient interlocutors cited two primary sources from which they derived knowledge regarding these systems. First, all related that they had learned about AI from journalistic sources and accounts they encountered via print media, such as local, national, international, or specialized newspapers (e.g., a business newspaper or magazine), and digital news platforms. Secondly, all related that they had gained an initial introduction to or a partial familiarity with the general principles of AI via speculative accounts found in genre fiction sources such as science fiction texts, films, and television series. Some indicated that accounts concerning AI in speculative fiction or journalistic sources sparked a personal interest in these systems and their development; these patient interlocutors explained that they further bolstered their knowledge through nonfiction texts about AI development and applicability. Otherwise, patients related that they had further learned about AI via friends, spouses, or relatives who have professional involvement with AI. Some reported having been informed by existing government reports (e.g., the U.K.’s 2018 House of Lords Report on Artificial Intelligence) that they were initially made aware of through journalistic sources. A small number of patient interlocutors indicated that they also learned about AI via their involvement in patient support groups or patient advocacy work (including oncology-related conferences and involvement with medical research auditing).

6.2 Patients’ general perceptions of AI

Overwhelmingly, the cancer patients interviewed for this study held positive perceptions and opinions of the development of AI systems for oncology. With the exception of one patient interlocutor who admitted no knowledge and no opinions of these tools, the patient interlocutors comprising the interview corpus voiced hope for the relevancy and potential of AI development and application for medicine. Repeatedly, these patients insisted that as a “useful tool,” “able to catch things humans can’t,” AI systems would be a “step forward” inasmuch as they “will make things better” by “improv[ing] speed and quality of data analysis.” As one patient put it: “we’ve been waiting for a faster identification of things and this can only help.” Others noted with pronounced optimism that these tools may “reduce workloads” for medical professionals such as doctors and nurses. One patient mused that perhaps such systems could combat clinical biases and bigotry through objective and accurate data-handling, a view that has been critiqued by social researchers of technology as misguided. “Generally,” another patient concluded, “I think tech advances are a good thing.” Another interlocutor echoed this statement, adding: “It sounds great and I think it will give people confidence and perhaps a better chance at survival.”

Patients who possessed professional training or work experience in developing and deploying AI systems expressed similar hope and positivity about AI-enabled healthcare. One patient who works with AI tools as a lab technician within the context of drug development remarked that AI tools are “something in development that can be really useful, especially for handling patient data and especially genomics data… It’s really good for things that have a clear ‘yes’ or ‘no’ and beyond that there are always new improvements, new features to improve the algorithms with… If [AI tools for oncology] allow for the use of certain data like mutations and other genomics data, they could provide more confidence in the use of AI for cancer treatment predictions.” Two other patients, with tangential familiarity with AI systems given their respective professions as a statistician and a systems engineer, asserted that AI presented “a lot to be gained” and especially holds promise for the improvement of diagnostics. The systems engineer asserted his belief that, used in the arena of oncology, AI tools may “bring reliable indications for decisions that don’t get made or get lost in communication.”

Notably, many of the patient interlocutors characterized their perceptions of AI tools as thoroughly secondary to and overshadowed by the currency and pervasiveness of popular teleological narratives of technology that cast technological development as both heroic and as “inevitable progress.” Concerning AI-enabled healthcare, patients frequently conceded: “It’s the way of the future.” In turn, some expressed that their conceptions of the inevitability of technological progression (in this instance, made manifest by AI tools for healthcare) encouraged feelings that “[the prospect of AI-enabled oncology] is exciting, but a little scary.” In other words, among declarations of hope regarding the potential of AI, many patients voiced tepid fears in relation to offering their assessments of AI tools, given their (and potentially others’) beliefs in the potential marginality of their own social locations—as, for instance, elders and, more broadly, as cancer patients. What would often begin as self-aware statements relating limited abilities to stay current with the seeming swiftness of many technological shifts and innovations would, in many interviews, lead to remarks through which patients would minimize their relevancy and position to offer opinions, thoughts, or concerns about AI. “Are we doomed?” one patient asked, “I don’t know. All I know is that [AI development] is unstoppable and frankly…you can’t put yesterday’s values on tomorrow.” Another insisted: “Everything is moving forward and does move forward. Why should this be any different? It’s how we live, and maybe we just need to get on and accept it.” Others voiced that regardless of the advancements in AI, they feel they are “too old” to “keep up” and described feeling as if they are suspended in a paradigm of being left behind with regard to their technological knowledge and savviness, having accepted this predicament as “their lot”: “Things move quickly and I’ve switched off.” In addition to age, patients pointed to their diagnosis and the rigorousness of their therapies as preventing them from seeing the future of AI development for oncology as pertinent to them. Patients interviewed in the middle of treatment cycles voiced a similar sentiment of being too sick to “keep up” or of not feeling capable of appropriately assessing how it would affect the future of oncology, let alone themselves and others. In fact, some patients asserted confidently: “[These tools] won’t affect me.”

Patients who were familiar with AI due to the nature of their professional employment admitted that while they firmly supported the technology’s use and development with great hope, in their view, these systems generally remain “underdeveloped and under-utilized.” “Changes are happening,” these patients declared, “but slowly.” Likewise, one patient advocate related: “I’ve been hearing a lot about [AI tools for oncology] at conferences and it sounds wonderful but I haven’t seen it materialize yet in hospitals and clinics.”

In summary, some patients (often those with lay knowledge of AI systems) consider AI systems for oncology and medicine to be developing at a rapid rate and intertwine this conception of rapid technological development with a notion of “natural” and “inevitable” technological progression, against which they unfavorably measure their age and health status. In this view, their age and health status become barriers that immobilize their capacity to stay informed about and interested in technological development. This logic perhaps serves as a basis for insecurities about whether they have an appropriate ability to speak lucidly or incisively about AI tools for oncology, which, at times, results in a firm belief that they should not concern themselves with forming critical views and voicing judgments about the subject.

Beyond highlighting contrasting conceptions of the pace of AI development, patients framed their enthusiasm regarding AI systems for oncology with statements conceding a general awareness that technological transition may produce vulnerabilities and risks for patients and medical staff. Despite widely expressed optimism, a majority of patients voiced that shifting to a greater use of AI tools for oncology and medicine may subject patients to additional risk for medical errors or mishaps. “There’s always room for errors and mistakes,” as one patient mused. “Errors,” another patient conceded, “are inevitable and it takes time to perfect technology. That’s progression. We learn by mistakes, sadly.” Further epitomizing this appraisal, a patient familiar with machine learning techniques explained: “If used for the benefit of mankind [sic], I am absolutely onboard for this tech. Bring it on. But forcing learning when the data isn’t there, isn’t the right thing to do.” In other words, despite an embrace of narratives of technological progression, patients voiced a desire for cautious progression of AI tools and emphasized the potential human costs of technological innovation and initial deployment.

Moreover, many patients indicated that they believe such tools may, in the future, produce some level of job insecurity for certain doctors and medical staff (e.g., radiologists, pathologists). Still, those who voiced this issue noted that they prioritized manifesting better health outcomes for patients over maintaining employment for medical professionals able to produce less satisfactory health outcomes. Others related that they believe that these tools will not encroach on the necessity of the roles of medical professionals or threaten their employment prospects but will instead produce “a major sea-change for the medical industry,” the consequence of which being that doctors and other medical staff will “need to be retrained or receive additional training.”

Finally, a small minority of patients experience the prospect of AI-enabled healthcare as shrouded in confusion and potential conspiracy. “I have concerns about it,” one patient admitted, “but only in a SciFi-horror film kind of way which is based on ignorance and a certain amount of misinformation.” Other patients related more earnest concerns about AI tools for healthcare regarding potential issues of developers’ nefarious intent, consolidated power, and misguided objectives. One patient confessed these fears in the following manner: “In my way of understanding, ultimately, AI will be writing the software itself. And that’s where it goes out of control because from what I’ve seen, personally, and to the present day, software engineers have a lot of power, a lot of power! And the people who write the software…they could conceal things, you get an unscrupulous one. Ninety-nine percent, I’m sure, are perfectly legitimate, but it only needs one or two unscrupulous ones who can put bugs in software. And it worries me that, as I say, ultimately, that software won’t be written by humans—the software itself will be interpreted and written by AI and I’m sure that’s ultimately where we’re going.” Other patients voiced wariness that far too much control over the development and deployment of AI systems rests “in the hands of too few.” They stressed the need to democratize relations of power, given how private entities and corporate structures consolidate decision-making over how and which issues are tackled with AI tools and, consequently, how these tools are designed and implemented across sectors within and outside of medicine (e.g., the workings of financial services companies and investment banks, or the political encroachment and monopolistic tendencies of tech giants such as Amazon, Google, and Facebook). Some patients’ portents remained vaguely sketched: “Like all tech, evil men get behind it and we see the bad side of everything…Insert Blade Runner quote here.”

In parallel, other patients declared that, although they felt generally optimistic about the prospects of AI developments for healthcare, they desired these technologies “to improve quality of life, not extend it.” One patient admitted, “I don’t want AI to cure cancer in order for people to live forever.” Indeed, as one patient advocate insisted, “The promise of Big Data is confusing for patients.” Patient interlocutors with technical expertise and familiarity with AI systems expressed bafflement over other patients’ and public figures’ confusions regarding these systems. One of these patients voiced his frustrations regarding the philosophical or imaginative fears some lay members of the public hold: “I don’t understand why people think it’s some Doctor Who-Take-Over-the World syndrome!…most people in the last 30-40 years, would have used computing techniques of some sort to break down their spreadsheet or whatever. Conceptually, I don’t see a great difference between AI and that….I can’t wrap my head around why people think it’s some sort of SciFi, Doctor Who thing or, why they think it’s something that’s been invented last week by Amazon. It’s been around thirty years or so and the math has been around for one hundred years! And secondly, they’ve been doing it all their lives!” While this patient thought that drawing conceptual comparisons to simpler computing operations might help others without expertise to understand what he saw as the banalities of AI, another suggested that confusions and conspiracy theories could be attributed to choices of terminology. He explained: “I feel, sort of working in that area, that we should stop talking about artificial intelligence and talk more about machine learning or statistical learning. Talk about something different from artificial intelligence because when people think about that they think of Arnold Schwarzenegger and The Terminator. In fact, statistical techniques, which are not strictly artificial intelligence, have been around for 30 years. There’s all sorts of techniques that we rely on that have been around for decades.” Together, their comments demonstrate the diverse range of general apperceptions of what these technologies might accomplish, how who develops and deploys these technologies may impact healthcare systems (including patients and medical professionals), and how popular depictions of and professional experience with AI tools contribute to contrasting notions and appraisals of their influence, application, and current state of development.


7. Patient concerns regarding the development, integration, and deployment of AI tools in healthcare contexts

Despite varying levels of expertise and general knowledge, the patient interlocutors interviewed for this study expressed overlapping concerns regarding the development, integration, and deployment of AI tools for oncology and for use within healthcare contexts more broadly. Patients regularly articulated three core areas of concern: regulatory oversight, development and training matters, and issues of standardization and integration. Together, these common concerns demonstrate how patients are absorbing existing reports of the unintended effects and social risks that AI systems have produced across different sectors. Moreover, they exhibit how patients are envisioning and responding to the potential for AI technologies to produce instances of medical error and harm.

7.1 Patient concerns regarding the need for regulatory oversight

The fulcrum of patient interlocutors’ anxieties concerns the need for regulatory oversight of AI tools. This issue arose as a constant chorus across nearly every interview. One form this concern took was a desire for a “human buffer,” a technical or medical expert positioned between these systems and patients. Patients stressed that, beyond issues of efficacy, they were concerned that health providers might attempt to remove “the human element of care” from medical contexts entirely. “Regarding automation and AI techniques,” one patient explained, “I think it is comforting to have a human around you. Or, to have a human be the bridge between robotics and the person, the impersonal screen and the person… I think personally it’s still nice to get some human element of care.” To ensure this experience of care is retained, patients enumerated preferences for trained medical experts to explain how these systems work, to remain available and present, and to oversee the results that these systems produce in real time. Furthermore, patients fear the possibility that these systems will be given the power of executive decision-making. They instead stressed the need to limit these systems to auxiliary tools that enable medical professionals and patients to make better-informed medical decisions. One patient elaborated: “As for [AI systems] making decisions, I don’t think it’s the way to go. I think it should be the way it’s done now, they give you all the options and the patient can make the decisions. Not the machine or anyone else.”

Even more frequently, patient interlocutors articulated the need for regulatory agencies and bodies to effectuate heightened oversight, greater legal accountability, and guaranteed quality control of these systems as a mandatory precondition for preventing medical error. Patients asserted that these systems cannot responsibly be introduced into clinical settings without appropriate regulatory safeguards. One patient interlocutor articulated the issue as such: “I don’t think it can just be done and introduced and used. I think safeguards have got to be put in place and monitored. But who does that? I don’t know.” Other patients voiced misgivings concerning the current lack of regulations, given the existing confusion over when robust regulatory schemes will be introduced and how they will operate. In particular, patients are concerned with how regulatory schemes will be organized to provide the flexibility, international collaboration, and enforcement capacity necessary to assure both the optimization of these systems and patient safety.

7.2 Patient concerns regarding the facets of AI development

In addition to concerns about the establishment of robust regulatory networks, patient interlocutors were also perturbed by several unresolved facets of current AI development. Foremost, given AI systems’ reliance on training datasets to improve the accuracy and efficiency of their outputs, patients stressed that developers of AI medical tools face crucial mandates regarding the assembly of training datasets. Patient interlocutors stressed that securing regulatory approval and patient trust is fundamentally contingent upon developers’ abilities to guarantee the accuracy, completeness, and representativeness of their datasets. As one patient warned, “forcing learning when the data isn’t there, isn’t the right thing to do.” In questioning the potential for this technology to address health disparities or further entrench them, some patients raised concerns about how researchers and developers are grappling with the limitations of existing health data, which are often representative of only a small portion of the world’s population. Patients fear that if AI systems are trained on inadequate or unrepresentative data, these systems could reify medical insights (as well as produce medicines and outcomes) with limited efficacy. Emphasizing the need to compile population-representative datasets, one patient disclosed: “My other concern with machine learning or AI, it’s sort of like that old saying for computers: ‘Garbage in, Garbage out’: to make sure you are getting the best training sets from African Americans and Asian Americans and Native Americans and not just Americans but all races [across different populations]… because that’s sort of the big picture. That is the problem with clinical trials in the U.S.—you get a bunch of white people! So racial diversity [is needed] and are you getting enough participants across all age groups?” To assemble representative, comprehensive, and accurate datasets, patients further asserted that health researchers and AI medical tool developers should engage in more collaborative research rather than “working in silos,” and should aim to include multiple kinds of health data, including multi-omics data and even non-biological or environmental data drawn from multiple sources and parameters. By the same token, patients were adamant that AI systems for medical use must be able to be updated to integrate new forms of health data. For example, one patient mused that if an AI system functioned to predict health outcomes for patients with a certain cancer on a specific treatment protocol, it may run into issues as new therapies are discovered and become standard. Given this scenario, he explained, “the relevance of the model becomes less significant. So there are issues around that. The earlier you are in the interference of data—that is, the ability to learn outcomes against base data is hugely, hugely relevant.” Correspondingly, patients were concerned about the ability of AI models to respond to additional information, changes, and updates within health contexts. How, they asked, will that be ensured? And how will the regulatory process account for systems that must be retrained often?
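The dataset audit patients are effectively calling for here can be made concrete. Below is a minimal, hypothetical sketch (the records, ancestry categories, reference proportions, and tolerance are all invented for illustration, not drawn from any system discussed in this chapter) of how a development team might check a training set’s demographic composition against the population a model is meant to serve before training begins:

```python
from collections import Counter

# Hypothetical, de-identified training records; a real dataset would hold
# thousands of patient profiles with many more fields.
training_records = [
    {"ancestry": "European", "age_group": "65+", "outcome": 1},
    {"ancestry": "European", "age_group": "40-64", "outcome": 0},
    {"ancestry": "African", "age_group": "40-64", "outcome": 1},
]

# Illustrative reference proportions for the target patient population.
reference_proportions = {"European": 0.60, "African": 0.14, "Asian": 0.06, "Other": 0.20}

def audit_representation(records, reference, tolerance=0.05):
    """Flag groups that are under-represented in the training data relative
    to the target population -- the 'Garbage in, Garbage out' check."""
    counts = Counter(r["ancestry"] for r in records)
    total = len(records)
    return [
        (group, counts.get(group, 0) / total, expected)
        for group, expected in reference.items()
        if counts.get(group, 0) / total < expected - tolerance
    ]

for group, observed, expected in audit_representation(training_records, reference_proportions):
    print(f"Under-represented: {group} ({observed:.0%} observed vs. {expected:.0%} expected)")
```

The same logic extends to the retraining worry patients raise: a deployed model would need this audit, and its training, rerun whenever new therapies or data sources change what the underlying data should contain.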

Patients also questioned the efficacy of AI development given the relative homogeneity of developers. Some patient interlocutors questioned whether emerging AI-driven health technologies might only be fully responsive to, and efficacious for, the demographic groups resembling their developers. These patients worry that, as the tech industry is dominated by affluent to middle-class, cis, white, male developers, the questions, issues, and systems developers are currently pursuing might bear the (un)conscious markings of developers’ particular systemic privileges, interests, politics, desires, and bodies [17]. These patients reason that the needs, worldviews, and commitments of those developing technological instruments will inevitably influence how these instruments take shape in the world. Compounded by the troubling homogeneity of the tech industry, these patients foresee that these technologies have the potential to embody prejudices, unconscious blindspots, or inherent bias that could result in “unintended harm” that disproportionately affects the most vulnerable in society. Patients stressed the need for “diverse developing teams” who will hold a “diversity of viewpoints” and attend to their technologies’ capacity to reinforce structural inequities and unjust psychological biases and so produce harm for patients. Their comments attend to the extent to which developers are concerned with constructing tools that work against apathy, greed, and inequity. As one patient contended: “Your tech needs to include and account for everyone or you will create more barriers to quality care. It’s about making sure you don’t leave certain patients in the ‘Dark Age’ and giving all patients the right treatments for the strongest chance at survival.”

7.3 Patient concerns regarding health system integration and access

Furthermore, patients remain troubled by concerns regarding standardization, health system integration, and unequal access to leading-edge medical care. Some patient interlocutors voiced doubts regarding the ability of their current health system to integrate and implement these tools in a successful, rapid, and straightforward manner. Owing to predicaments stemming from its bureaucratic structure and to mishaps in its handling of health data, some U.K. patients held misgivings about the National Health Service’s capacity to manage a transition to widespread, systematic use of cutting-edge, AI-driven medical tools. Comparatively, U.S. patients frequently voiced integration concerns relating to how the largely for-profit and privatized U.S. healthcare system produces unequal access to standard of care and even basic health services. As the healthcare landscape in the U.S. remains stunningly rife with inequities, patients fear a potential worsening of the existing unequal implementation of and access to AI-driven systems for medical use. Accordingly, U.S. patients asked: which hospitals and medical centers have the resources to launch and integrate this technology for patients’ benefit? Which patients will be denied access because of factors such as geographic location, healthcare provider, hospital availability, and insurance issues? How will this further entrench existing healthcare inequities? “While some patients might have access [to cutting-edge AI tools for oncology] through tertiary centers and university research hospitals, what’s happening at local clinics and hospitals?” one U.S. patient asked. “How will standardization play out?” she continued, “We have to make sure that people—that everybody—has access to it and that’s not the case here.” Another U.S. cancer patient advocate further elaborated: “Everyone is thinking it is promising and that it will come our way. My concern is that it is broadly accepted to be covered by public systems. We see a lot of disparities in terms of what public insurance like Medicare and Medicaid will cover versus what private insurance will cover. So my fear is that we are going to have two tiers.” Patient interlocutors comprising this interview corpus recognize that financial resources and incentives, as well as individual health systems’ bureaucratic and political structures, will contribute to the ease or difficulty of systematically integrating AI tools for medicine. In turn, they reason that this may produce unequal access to the most efficacious care and thus contribute to the further entrenchment of existing healthcare inequities.


8. Understanding patient perceptions regarding data-sharing for AI and drug development research

Access to health datasets is a crucial factor in enabling oncology-specific drug development and AI systems research. This section examines patient responses and concerns regarding sharing their health data for these purposes. Together, the comments of this sample of cancer patient interlocutors compose an opening through which to understand some patients’ perceptions, misconceptions, and misgivings regarding sharing their health data. Moreover, their responses exhibit variance both in cancer patients’ existing knowledge and in the extent to which they express a desire to be involved in the advancement of proposed AI systems for oncology and oncology drug development vis-a-vis data-sharing.

8.1 Data-sharing and research participation: concerns and caveats

Virtually all of the cancer patients interviewed for this study expressed both general enthusiasm and an overall willingness to be involved in oncology drug development and oncology AI tool advancement in some capacity. Furthermore, nearly 22% of the patients comprising the interview corpus indicated that they trust the regulatory schemes and ethical parameters that currently guide public and private entities involved in research enough to be willing to share their medical data for research purposes without any additional conditions or specific requests beyond these existing mandates. The potential for various issues pertaining to data security, storage, targeted surveillance, as well as risks of data re-identification and discrimination, did not inhibit these patients’ desire to contribute to oncology drug and AI development research. In their view, these potential complications did not present an undue risk to them given the existing frameworks of ethical and legal protections regarding research.

Nevertheless, the remaining portion of patient interlocutors held concerns and caveats potent enough to potentially prevent them from agreeing to participate in research. These patients presented a series of considerations that they specifically want corporate researchers to address in order to feel comfortable enough to contribute health data to a private entity’s (e.g., a pharmaceutical or biotech company’s) research efforts, regardless of that entity’s affiliations with medical research institutes or university research centers. Notably, however, when presented with a hypothetical scenario in which a university or medical research institute was conducting research without corporate collaboration (exclusive of funding), nearly all patients were willing to offer their medical data without any major caveats, although a small number insisted that corporate sponsorship would affect their willingness to share their data even in this scenario.

8.2 Concerns regarding data security and patient privacy

Most commonly, patient interlocutors declared that their primary concern with respect to sharing medical data for research purposes pertains to issues of data security and privacy. Despite current legal and ethical standards mandating the anonymization of medical data for research, patients voiced that keeping their data anonymized and their privacy secure remains their top priority and issue of concern. Still, several of these patients admitted that if they were assured that their data would be kept anonymized and would be securely stored with respect to current industry and legal standards, they would be willing to participate in research. While a small number of patients expressed doubts as to whether their healthcare provider (i.e., the National Health Service) can effectively keep patients’ health data secure from hackers and data leaks, the majority of patients comprising this interview sample conceded that they had little to no knowledge of how their health data might be stored, kept secure, or circulated beyond their medical provider’s institution.

8.3 Lack of knowledge about legal mandates and fears of insurance complications

Many patients also disclosed that they were unsure of the dictates that ethical review boards and legal frameworks impose on researchers working with health data. While all of the U.S. patients interviewed were at least aware of the federal legislation known as the Health Insurance Portability and Accountability Act, or HIPAA (if not also other federal statutes such as GINA or relevant state laws), U.K. patients, with the exception of those whose professions involve health data-handling, disclosed that they were typically unaware of U.K. statutes regarding health data protections in any notable detail. Regardless of whether this unfamiliarity stems from a lack of interest, from trust in the National Health Service to fully comply with legal ordinances, or from some other reason, U.K. and U.S. patients alike indicated a concern that current legal frameworks are likely too lax in protecting patients’ ability to access healthcare via private insurance. This was the second most frequently cited concern related to health data-sharing and research participation across the interview corpus. “I am deeply concerned this data will make their way to insurance companies and affect premiums,” one patient asserted. In the event of a data breach, or of data mining that gave health insurance companies access to individuals’ health data after it was shared for research purposes, patients questioned whether current law is robust enough to prohibit insurers from obtaining their medical data in order to deny them coverage or to limit their access through higher premiums.

8.4 Understanding other demands for securing consent

Beyond issues of data security, patients consistently related several other factors that would influence their decision to share medical data with a corporate entity for AI and drug development research related to oncology. Most frequently, patients expressed that they would be willing to share their data with companies for these purposes provided the research was explained to them in full and they agreed with the ethical imperatives of the study and of the corporation more broadly. In this vein, patients were consistent in insisting that they wanted to know: (1) the research objectives of a potential study, and (2) whether the study posed any risks or potential for harm. To a slightly lesser degree, patients asserted that they would also want to be informed about where their data would be stored once shared and who would handle it, who would own the data once it is shared for research, whether their data would be kept for future use or circulated for use in other studies, how the study was to be funded and executed, and how the corporate entity balances its profit motives with its ethical mandates. Moreover, if provided with all this information, some patients explained that they would then only be willing to share their data if the company designed its research with the imperative to benefit as many cancer patients as possible. For instance, two patients related that if a pharmaceutical company aimed to conduct research for a drug that would have only a minimal effect on patients’ well-being and outcomes, such as only being able to “prolong life for two months,” driven by “the need for profit,” then they would not be interested in sharing their health data. “Big Pharma,” another patient emphasized, “is difficult to trust.” Others noted that they would want to gather more information about whether the hypothetical company may be engaged in depriving some patients of necessary treatments. One patient explained that if a company had a history of using patients’ data to help create drugs and then charging exorbitant prices that placed the drug out of reach for a majority of patients, they would not be willing to share their data to aid that company’s research. Still, two patients conceded that the future prospect of generics in this scenario would satisfy them enough to want to share their data. In addition, some patient interlocutors asserted their desire to be updated about the status of the research and its potential outcomes. Likewise, patients wanted to be assured that if researchers handling their data were to find something medically concerning or relevant to their future health status (e.g., a genetic predisposition for a disease), they would be notified by the research body, although some admitted that they were unsure how this would be accomplished given the de-identification of the data.

8.5 Concerns regarding corporate ethics and the potential for targeted advertising

Moreover, patients insisted that additional regulatory safeguards are needed both in the U.S. and in the U.K. to protect patients participating in research not just from healthcare coverage issues, but also from the potential for corporate surveillance and targeted advertising based on their medical data. Specifically, patients indicated that they believe sharing their health data with entities beyond their healthcare provider and health insurer could expose them to further and more intrusive corporate surveillance and targeted marketing. Although the inherent profit motives of private corporations admittedly troubled some patients, these factors alone did not compel any patient interlocutors to declare that they would refuse to share their medical data. Rather, patients related that they would take a “holistic” view of the company, its research aims, the procedures and mechanisms of the study, and why and how a company might ask participants to transfer ownership of their data and further circulate it beyond the individual study. “Before I share my data,” one patient concluded, “I would really need to interrogate the company and its aims.”

Critically, patients widely differed in how pertinent they considered data security and related issues to their decision to participate in research for oncology drug development and AI. One U.S.-based patient advocate who primarily works with low-income cancer patients offered an explanation for this variance. She contended that patients’ awareness of, and inclination to voice, such concerns regarding data security and privacy are contingent upon their health status, resources, and level of education. She explained, “I don’t know how much patients know about the extent to which their health data is being shared. I don’t think I do either but to the extent that I do know…gosh, I think ‘Wow, I didn’t know that!’ So I don’t think most people know…Sometimes with advocates may be higher resourced or have come through [their treatments] and are now stable because in the thick of it I don’t hear patients being worried about [issues related to medical data-sharing] during the thick of treatment. Also, we have many clients who are less savvy about the system, and that is, lower resourced here, generally. So I haven’t heard a word about it. They are concerned with their personal privacy when it comes to their social security number, their immigration status, et cetera but as to whether they are concerned with their local CVS selling their data out? I don’t think they are concerned with that. I think that concern is a higher Maslow level than for instance, ‘I’m in treatment and I gotta feed my family.’” In this advocate’s view, concern about the aforementioned issues of data ownership, security, data brokerage, threats to insurance coverage, and targeted corporate advertising presupposes a health status, insurance status, and educational background that allow patients to consider such issues sufficiently critical; indeed, the data collected by the author did not seem to dispute this view.

8.6 Genetic data-sharing: fears of discrimination and lack of knowledge and value

A smaller number of patients related fears that, in the event of a health data breach, or of medical data circulating after a private entity transfers patient data to health data brokers following research participation, some individuals might be subjected to discrimination or stigma based on their healthcare status, data, or medical history. Patients were particularly concerned with discrimination and corporate surveillance with respect to genetic data. Although patients often insisted that they believed genomics “provides an additional path for predicting the cause of cancer,” that it will potentially “improve personalized treatment,” and that, “generally speaking, they see few negatives to [genomics] research,” many related that they remained apprehensive about the social effects the study of such data might entail and communicated fears related to the stigma of medical genomics. For instance, some held fears of how genetic studies might embolden some researchers to take up “social genomics studies” reminiscent of twentieth-century eugenics and pseudoscientific approaches to genetics. In further explaining these qualms, patients cited the potential for discrimination related to genetic predispositions, STI or HIV/AIDS status, or mental health histories. Because legal protections against such ramifications are, by their estimations, weak or fail to account for contemporary uses, a large portion of patients indicated that they may demand additional assurances from private entities engaged in health research that, were they to participate by sharing their medical data, their data would be secured to the highest possible standard and sufficiently de-identified. As one patient explained, “As long as it can be de-identified with confidence, then data leaks may be less harmful.” Some patients raised further concerns about the current lack of education regarding genomics among cancer patients, corporate actors, and oncologists. One patient contended: “I think the field is still early. I am concerned about commercial tests that may not be looking at the same genes and may give different results. Genetic counselors are a must!” Patients voicing these concerns were adamant that genetic data and genomics research must be coupled with educational initiatives and expert roles to explain results, consent procedures, and possible harms.
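For readers unfamiliar with what “de-identified” means in practice, the sketch below is a deliberately simplified illustration (the field names are hypothetical, and a compliant pipeline, e.g., under HIPAA’s Safe Harbor provisions, must remove many more identifiers than this): direct identifiers are stripped and replaced with a salted one-way pseudonym, so a patient’s records remain linkable within a study without naming the patient.

```python
import hashlib
import secrets

# Hypothetical patient record; field names are invented for illustration.
record = {
    "name": "Jane Doe",
    "nhs_number": "943 476 5919",
    "postcode": "CB2 1TN",
    "diagnosis": "breast carcinoma",
    "genomic_variant": "BRCA1 c.68_69delAG",
}

DIRECT_IDENTIFIERS = {"name", "nhs_number", "postcode"}
SALT = secrets.token_hex(16)  # held separately from the research dataset

def pseudonymize(rec):
    """Drop direct identifiers and attach a salted one-way hash so records
    from the same patient remain linkable within the study."""
    token = hashlib.sha256((SALT + rec["nhs_number"]).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudonym"] = token
    return cleaned

print(pseudonymize(record))
```

Even then, quasi-identifiers such as rare diagnoses or uncommon genomic variants can permit re-identification when combined with outside data, which is precisely the residual risk patients name when they ask for data to be “de-identified with confidence.”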

8.7 Issues of financial compensation, benefit-sharing, and medical inclusion

Finally, although most of the patients comprising the interview corpus were willing to share their health data for research without the prospect of financial compensation or benefit-sharing, approximately 15% of the interviewees (both U.S. and U.K. patients) stressed that these issues would greatly influence their decision to share their health data with a private entity. This issue was particularly important for patients currently undergoing treatment. One patient, a mother in her early forties undergoing treatment for a rare cancer type, explained her interest in financial compensation and other benefit-sharing, and how these would affect her decision to participate in private research: “Compensation is nice but I suppose if they can’t compensate and then can’t use the data, I would rather them be using the data if it’s going to be for the greater good and improving medicines and technology. I suppose it would be nice to know what they are working towards. So, in turn, they share: ‘This is what we are trying to achieve.’ But I sort of assume that if they use your data and have anonymized it by the time they do the study there’s really no way of them being able to come back and say, ‘This is what we’ve done with your data.’ In a case like with [the pharmaceutical company who makes a drug I need access to in order to attain a higher chance at survival], if they used your data to create a drug and then sell it for a sky-high price that you can’t afford, I think that’s wrong in a way. Why should they sell it at this sky-high price if they’ve used this data which has come to them as a free resource? Why is that fair? I suppose I’ve not really thought it through to that extent when I’ve given permission [before however] because I just think if this is research, it may possibly help me or help someone else in the future. [So] I probably would still want them to have the data. But maybe there should be other controls to stop them charging the Earth! Just making these drugs and saying, ‘these drugs are really amazing but you can’t have them because they are ridiculously expensive…We’ve made these drugs from your data which we have gathered and now we will sell them for ‘x’ amount.’ So yeah I suppose we would want some financial gain if you are going to be passing your information off to these companies…I would want the NHS [National Health Service] or say, Cancer Research UK, or someone like that just to use my data but maybe it is a bit different if you are talking about a big pharmaceutical company that’s making billions of dollars or whatever…They can afford to compensate you…and yeah there’s generics and I get that. I know that they will come out with generics for [the drug I need] eventually, but that’s too late for me. I need it now.” This patient interlocutor’s explanation of her conditions for medical data-sharing offers a sense of how patients currently undergoing treatment conceptualize the issue of sharing their data and how they wish to benefit in the event of doing so.

In contrast to this patient’s understanding of the value of her medical data and her sense of how it might be valuable to other actors, many patients expressed puzzlement and apprehension regarding how their data might hold current or future value. For example, one patient related: “My major concern is that there’s not enough knowledge to really benefit [for and as a patient currently undergoing treatment] from shar[ing] this health data [including genetic data, with researchers]. I really wish it could accelerate and that we could use AI to guide the treatments but there’s not enough treatments out there to make a massive difference. I hope that it will progress soon…but today I don’t really know what you could do with this data that would impact your life in any way.” In addition to doubting that participating in research would significantly affect the health outcomes of current patients, other patients were confused as to how, in the event of a data breach, their health data might be of value to others, including hackers, government agencies, or corporate entities. “Why would someone want to hack into a researcher’s storage system and take my data?” and “why would someone want to re-identify my data?” some patients questioned. One patient insisted that this would have no bearing on a decision to participate in research: “But why someone want to do it? I don’t really see any reason. So no, it [is not and] wouldn’t be a worry for me.”

In addition to issues of financial compensation, some patients noted that issues of consent regarding medical data-sharing were of critical importance in influencing how likely they are to share their data with researchers for AI advancement and drug development. As several patients insisted that the medical inclusion of diverse populations in research must be a priority, some asserted that they would be unwilling to share their health data with researchers who did not make the interrelated issues of patient trust, efficacy, and medical inclusion central to their research. To this end, these patients wanted researchers to prioritize building relationships to recruit diverse populations for their studies, to offer educational initiatives that equip potential participants with sufficient knowledge of what impacts and effects their participation might entail, and to commit to sharing resources and benefits with “lower-resourced” populations. Only a demonstration of such commitments could impel these patients to share their data for research.


9. Patient-centered approaches to building frameworks of trust and accountability

This section examines patient-centered recommendations and proposals for ensuring patient trust, participation, and safety pertinent to increasing the development and clinical use of AI systems for oncology. Building on cancer patients’ concerns, this section highlights three major arenas for cultivating frameworks of trust and accountability crucial to advancing these systems and ushering them into clinical settings. Drawing from the qualitative data produced by this study in addition to the insights of other researchers, these three imperative arenas in need of reinforcement include: Building Knowledge and Redressing Consent and Resource Sharing; Addressing Health Inequities for AI Accountability; and, Promoting and Establishing Additional Safeguards. Strengthening patient support, understanding, and participation in AI-related oncology drug development requires robust, varied responses to these three interrelated arenas of concern from a multitude of relevant actors. This section provides an overview that attempts to synthesize the attitudes, positions, and actions stakeholders can undertake to broadly ensure accountability, equity, and patient trust and participation with regard to these systems.

9.1 Navigating patient participation and trust: building knowledge, redressing consent, and sharing resources

Educational initiatives remain a critical aspect of earning trust and maintaining accountability within AI-related oncology research endeavors. Establishing truly informed consent requires equipping cancer patients, cancer patient advocates, and oncology care providers with the necessary knowledge to stay informed and alert about how these systems operate, how they are designed and trained, and what ramifications might ensue as a result of their implementation. Cancer patient advocates are particularly vocal in stressing the importance of giving patients all the information required to understand what potential limitations or risks such systems may incur. They further assert the need for a collaborative approach both to building patient knowledge and to assessing how potential harms and complications are to be addressed. They believe that collaboratively produced and executed educational initiatives will foster support among the general patient populace for public and private investments in AI development as well as for the infrastructural adjustments that its use may necessitate. Advocates and oncologists alike contend that patients often remain unaware of the options for medical coverage and care available to them, particularly with respect to clinical trials and other forms of research involvement. This lack of education not only constitutes one barrier to participating in oncology-related research and drug development studies, but may also preclude patients from receiving the highest quality of care at their disposal. Additional knowledge regarding research endeavors and their potential benefits may encourage patients, many of whom profess to be open to engaging in research, to participate in AI-driven oncology drug development studies.

Indeed, many cancer patients, including the interlocutors who informed this study, actively assert their desire to learn more, from trustworthy sources, about the AI-driven systems that have the potential to considerably impact their treatment. Patient advocates reason that, given the aforementioned demands on patients as well as the nature of clinical care, more advocates, researchers, and clinicians must be trained in how these systems operate and in how they might affect patients, in order to equip them with the expertise needed to help patients navigate and assess the potential ramifications these technologies may have on their treatment. Undeniably, more initiatives need to be established to educate patients in how machine learning-driven systems operate, what their levels of efficacy are, and what greater social effects they might precipitate. Such educational initiatives would serve as a crucial first step in assisting current, former, and future patients in understanding what crucial arenas can be acted upon to ensure that patients receive the quality of care they deserve. These arenas might include, for example, participating in relevant research or reinforcing support for policies that attempt to carve out how issues of liability will unfold in the face of medical error due to AI system usage. As stated earlier, such educational endeavors may hold higher stakes and greater challenges for patients with “limited access to high quality clinical care, limited health literacy, earned mistrust of medical providers, and those individuals who may be exposed to interpersonal and institutional racism and other discrimination in their healthcare encounters” [14].

Nevertheless, it remains important to consider how matters of securing consent and research participation extend far beyond merely bolstering educational initiatives for patients. For instance, issues pertaining to refusal of consent and slim participation are too often framed as the consequence of ingrained cultural beliefs rather than as rational stances toward the injustices of biomedical research begotten from the nexus of material inequities and historical oppressions. Against the myopia of cultural determinism, researchers of technology and medicine contend that patients’ (un)willingness to participate in research must be appropriately contextualized as complex responses to biomedicine in socially stratified societies. Ruha Benjamin frames such arguments in the following terms: “If we understand trust and distrust not simply as individual or cultural predispositions that are ‘held’ by some and not by others, but rather as outgrowths of social relationships that are produced through the allocation of material resources and symbolic power, then we see that techniques for cultivating relationships hinge on redistributing and refashioning those, respectively” [18].

Exemplifying the limitations engendered by material inequities, clinical trials frequently fail to recruit people of color and other marginalized people. This fact holds further significance as the U.S. Census Bureau projects that the white population in the U.S. will fall below 50% by 2045. In conducting interview work, it was typical to hear patient advocates and medical professionals bemoan how clinical trials and other research endeavors struggled to recruit “diverse” patient groups for their studies. Beyond educational matters, advocates, clinicians, and cancer nonprofit directors frequently framed the issue of participation as one dominated by cultural inclinations (some groups are like ‘x’—‘x,’ in this case, being a list of static traits or stereotypes of racial groups) rather than as dispositions toward structural inequities. Through cogent research examining how clinicians’ ideas about their patients’ ‘cultures’ contribute to health disparities, anthropologist Khiara Bridges contends that “cultural stereotypes and beliefs in the way people from certain cultures ‘just are’ can be dangerous—and just as racist—as racism” [19]. Demonstrably, cultural determinism can result in deleterious health outcomes.

To combat this, Benjamin argues that it is necessary for medical researchers and health professionals to turn “away from a fixation with distrust and towards the problem of institutional trustworthiness” [18]. This logical turn refuses to heap blame, stigma, or tidy labels of ignorance upon marginalized populations whom medical researchers find difficult to recruit for studies. Instead, it asks researchers to assume a self-reflexive approach to their work and recruitment efforts and compels them to question how their institution, research body, and associates can be accountable to the marginalized populations, possessing an earned distrust of medical intrusion, whom researchers aim to include in medical studies. In advancing the logical turn from a narrow fixation on issues of patient distrust to the broader problem of institutional trustworthiness, health practitioners, tech developers, and medical researchers may begin to fruitfully rectify inequalities rather than reproduce stale, culturally deterministic, and circumlocutory narratives of why “subordinate groups remain elusive to researchers” [19].

Ethicists and researchers similarly stress the need to rethink current regulations for securing consent for biomedical research. They advocate for a shift from paradigms of one-time consent to frameworks of accountability that attend to participants’ evolving concerns and adhere to ongoing commitments to the responsible use of participant samples. They argue that as political surroundings, public opinion, the type of information collected, and the application of these data necessarily shift, researchers must build responsive systems of consent. Consent practices, they argue, must integrate not only ongoing assessments of the risks and implications of research but also frequent monitoring of patient attitudes, beliefs, and perspectives.

Ethicists assert that more needs to be done to guarantee reciprocity or ensure that participants, not just researchers and their affiliated institutions and funding bodies, are also benefiting from the research. This begins with a willingness to address historical injustices that have contributed to the mistrust that certain groups continue to hold with respect to biomedical research. For some, distributing broad benefits in genetics and genomics research involves making research and research instruments publicly available so that they are not tethered to the limited access that often characterizes commercial arrangements. Ethicists also explain that research organizations can engage in capacity-building in which more richly resourced research organizations collaborate and share resources with “lower resourced” organizations and community participants.

As ethicists continue to advocate for benefit-sharing in research through endeavors like capacity-building and commitments to open source and public domain initiatives, they also advocate for redressing the politics of recruitment itself. As anthropologist Cori Hayden argues, “scientific knowledge does not simply represent (in the sense of depict) ‘nature,’ but it also represents (in the political sense) the ‘social interests’ of the people and institutions that have become wrapped up in its production” [21]. Following Hayden’s affirmation of the “coproduction” of all scientific endeavors, Benjamin advocates for attending to “informed refusal” as “a necessary corollary to informed consent—one that extends the bioethical parameters of the latter into a broader social field concerned not only with what is right, but also with the political and social rights of those who engage technoscience as research subjects and tissue donors” [18]. Benjamin explains that “the notion of informed consent—although developed to protect the rights and autonomy of individuals to accept or refuse participation in research—implicitly links the transmission of information to the granting of permission”; in consequence, “the request to consent can be interpreted as guidance to consent” [18]. Juxtaposing “informed” and “refusal” thereby acts as a signal of necessary humility that recalls individuals’ right to refuse participation and recognizes a paradigm in which refusal derives from an educated stance.

It is not enough to recognize that educational initiatives have the capacity to bolster research endeavors. Rather, scholars of science, technology, and medicine stress that “what matters is not only who is in the room and the intentions of those gathered, but also the structures of participation, modes of inclusion, and assumptions about what forms of knowledge and expression are valid and relevant” [18]. One U.S.-based patient advocate incisively summarized these issues surrounding recruitment, knowledge-building, and participation:

“A researcher wants their research to be successful so they write their hypothesis and their aims to prove it. If a researcher has a skewed view about a group, I have seen that they write their study skewed that way. When researchers are doing something where they want to get groups in, I think they have to be honest first. A lot of the times the researchers don’t look like the community. So you can’t walk into the community and not be willing to hear their feelings. It’s important for communities of color to be in research. Part of that problem of not knowing how things affect African Americans, Asians, Latinos, and Native Americans is because they are not involved. But they also don’t have a reason to trust. So like I said to someone who was trying to conduct a research project, she said, ‘Well I don’t look like them’ and I said, ‘Then you say that.’ You don’t walk in there and pretend that the people looking at you do not see that you are a white woman. You admit it. ‘I don’t look like you. I know that. And here’s where my heart lies. I want to hear what you are thinking’. Because at least then you look as though you are there for the right reason and you are not looking to skate around the elephant in the room. Because it is about building relationships. You want someone to participate in your study. You know that people of color need to participate and particularly now that they are talking about precision medicine and personalized care. If people of color don’t participate in that then what will they know about us? They won’t know anything. We will be in the dark age because we are not participating. Although someone came and talked to us and said ‘Getting people to participate in clinical trials even in the white community is low. It’s lower with people of color.’ but there is something that you already know: tell the truth. [laughs] Say, ‘I want to do this research.’ But I feel like with researchers if it’s with people of color that you don’t know and that you have your implicit biased conceptions that were passed down or told to you, you don’t want to work with those groups. Like ‘Oh I don’t want to work with that group because they are this.’ When actually, you don’t know that. When actually you could make a difference and be noticed where others weren’t by stepping out and taking that risk because we already know as medicine is moving in this direction of personalized care, that other populations need to be considered. But you gotta be honest and you gotta figure out how to get them involved and getting them involved is sitting down and talking with them. Not saying ‘hey I want to do this research I am going to come into your community and I am going to use you and then I am going to disappear.’ But making a commitment to come back to the community and share what you learned. When I worked for American Cancer Society and was in San Francisco… I remember Black people [from the Bay View/Hunter’s Point neighborhood] talking about how many researchers showed up and came in, did a research study, got their data and took off and never came back. Well that group never wanted to see another researcher, ‘all they wanted to do was use us.’ You have to change it. And that, to me, means that you are willing to sit there and hear the difficult stuff…if there isn’t a hospital, if they have no way of getting the standard things needed, then how do you partner with other people?…So, researchers,…find out what is out there and available. Because there has to be a way to work around [institutional limitations like funding caps]. Saying ‘ok you are only going to fund this but I found these other community organizations and clinics, how can we work with them to try to bring the community you are working with back a solution?’ Instead of stopping and saying this is too hard and this is why I don’t work with this community. You problem solve.”

In addition to building patient knowledge concerning medical technological advancements, research endeavors, and the ramifications of technological interventions, science and technology studies scholars, biotechnology researchers, and patient advocates maintain that health inequities must be robustly addressed. With regard to making health technologies inclusive rather than exclusionary, patient advocates advise developers and medical researchers to seek out and collaborate with communities of color and other socially marginalized groups. They encourage conducting research and creating tech that focuses on and addresses the needs of vulnerable groups. A crucial aspect of such a venture, they assert, involves building relationships and collaborative problem-solving with these interlocutors to ensure that the needs of these groups (such as basic access to standard treatment options) as well as the analyst’s research goals are met. Patient advocates stress that those willing to be pioneering in this regard will be hailed as vanguards and, more importantly, are more likely to be recognized by myriad patient groups as worthy of trust.

9.2 Addressing and preventing the entrenchment of existing health inequities via AI tools

Amid the excitement for the potential medical insights machine learning and other AI systems might enable stands an increasingly emphatic chorus of experts urging both the developers of these systems and health specialists to ensure that these systems work to mitigate rather than entrench existing healthcare inequities.

Technology experts and critical algorithm studies scholars urge us to recognize that these AI models—which increasingly manage and organize our lives—are far from neutral or objective tools. Rather, as mathematician Cathy O’Neil asserts, we must soberly weigh how these instruments are demonstrably encoded with human prejudice, misunderstanding, and bias [22]. One reason for this lies in the fact that these systems and the insights they generate are fundamentally reliant on training datasets composed of existing reference data. Conveying the fallibility of the data-driven paradigm within a different sector, in 2018 it was reported that Amazon had discontinued its AI hiring and recruitment system because it discriminated against women applicants. Amazon’s recruiting tool relied on resumes submitted to the company over the previous 10 years—the majority of which came from men. Accordingly, these reference data led the algorithm to prefer male applicants and to screen out women applicants via subtle cues in their resumes, such as experience in a women’s organization or education at a women’s college [23].
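The mechanism behind this failure is simple enough to show in a few lines. The toy model below (entirely hypothetical data; an illustration of the general principle, not Amazon’s actual system) scores resume tokens by how often they co-occurred with past “hire” decisions; because the historical sample skews male, a token like "women's" is penalized even though it carries no information about ability:

```python
from collections import Counter

# Hypothetical historical hiring data: most past hires were men, so the
# token "women's" (e.g., "women's chess club") appears mainly on rejections.
historical_resumes = (
    [({"engineering"}, "hire")] * 90
    + [({"engineering", "women's"}, "hire")] * 2
    + [({"engineering", "women's"}, "reject")] * 8
)

def train_token_scores(data):
    """Score each token by the fraction of 'hire' outcomes it co-occurs with,
    a naive frequency model of the kind that absorbs historical bias."""
    hires, totals = Counter(), Counter()
    for tokens, label in data:
        for token in tokens:
            totals[token] += 1
            if label == "hire":
                hires[token] += 1
    return {token: hires[token] / totals[token] for token in totals}

scores = train_token_scores(historical_resumes)
print(scores["engineering"])  # 0.92 -- a neutral token scores highly
print(scores["women's"])      # 0.20 -- the gendered token itself is penalized
```

Nothing in this sketch intends to discriminate; the skew in the historical record alone is enough to produce a discriminatory score, which is the general hazard for any medical model trained on unrepresentative health data.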

In another example beyond medicine, in 2016 investigative journalists uncovered how predictive criminal risk assessment algorithms—software used by U.S. courts to predict how likely a person is to commit a crime in the future and to relay a sentencing recommendation to a presiding judge—are prejudiced against people of color, consistently recommending harsher sentences for Black and Latinx people [24]. Scholars, among them Ruha Benjamin and Safiya Noble, and investigative journalists such as Julia Angwin continue to scrutinize the ramifications of integrating AI systems across a multitude of disparate realms, among them housing, finance, news media, welfare eligibility, social media platforms, popular search engines, and healthcare. Their research confirms that AI systems possess the capacity to exacerbate existing social inequities.

As the preponderance of data-driven solutions becomes the norm for healthcare specifically, experts demand that we address how these tools can compound existing disparities in healthcare outcomes. One step toward this remediation, researchers assert, involves educating healthcare providers and developers to ensure they sufficiently comprehend how systemic inequities affect individual health. A robust understanding of the causes and consequences of health inequities, and of the modes in which they exist, not only affords medical specialists and health tech developers a sense of which research and technological solutions need to be prioritized to address injustices, but can also coincide with a self-reflexive method of medical engagement. In other words, knowing how, why, and which health inequities exist allows one to approach health interventions with a heightened awareness of the imbrications and the potentially far-reaching implications of one’s actions and mediations. It provides a crucial frame of reference for questioning how one’s instruments and actions might catalyze social harm. Many argue that this knowledge is a necessary, fundamental first step toward remediating health injustice.

Dr. Tina K. Sacks, a medical sociologist who investigates how race and gender impact health outcomes and a proponent of this kind of knowledge building, advocates for a structural approach to understanding health inequities. Sacks asserts: “Although the dominant paradigm in the United States emphasizes individual choice and responsibility, the empirical evidence indicates that our neighborhoods, schools, jobs, and other factors of day-to-day life shape individual and population health” [25]. Similarly, medical historian John Hoberman analyzes how the historical legacy of racialized thinking is reflected in the contemporary U.S. medical establishment by focusing on how physician racism contributes to health disparities. Hoberman’s research suggests that medical providers rely on false beliefs rooted in racial essentialism—such as the pernicious myth of so-called Black “hardiness”—to determine diagnosis and treatment for Black patients [25]. In addition to racial and gendered oppression, in the past several decades, researchers have demonstrated that health and well-being strongly correlate with socioeconomic status. Sacks summarizes: “One of the most important systemic inequalities is unequal access to income and wealth, which may lead to poor health behaviors, chronic conditions, and disease” [25].

The Institute of Medicine’s² seminal study of the causes and ramifications of pervasive healthcare disparities in the US, together with the volume of research it prompted, found physician bias, whether conscious or unconscious, to be a crucial factor in the production of disproportionate healthcare outcomes. Subsequent empirical studies suggest that people of color and ethnic minorities, women, and other people who occupy vulnerable social positions are most susceptible to the noxious consequences of bias and stereotyping. Sacks further flags that “numerous studies have documented that healthcare providers are unconsciously or unintentionally biased against members of marginalized groups, which ultimately leads to difference in treatment across multiple domains (i.e., speciality care, pain management, mental health services, etc.)” [25].

Myriad experts assert that it is imperative that we be cognizant and considerate of how social inequities are embedded in the health data upon which AI systems are built. Due to design and optimization constraints, training datasets primarily utilize the health data profiles of those who can afford and have access to long-term, continuous healthcare, as opposed to those who have limited access to care, discontinuous care, or fragmented records. Moreover, data gathered via clinical trials have long been known to be unrepresentative of the US population. Clinical trials routinely fail to recruit people of color and other marginalized people. Recently, investigative journalists at ProPublica reported that Black Americans, Native Americans, and other Americans of color are steeply under-represented in clinical trials for cancer drugs—even when the type of cancer disproportionately affects them [26]. This has translated to cancer treatments that are least effective for the populations most afflicted by the disease. Critically, people of color continue to have disproportionately higher incidence and mortality rates for kidney, breast, prostate, and other cancers [14]. Likewise, AI tools designed to detect skin cancer have proven less adept at diagnosing skin cancer in Black and brown patients than in white patients [5]. While people with fair skin have the highest incidence rates for skin cancer—the most prevalent human malignancy—the mortality rate for people with darker skin, such as African Americans, is considerably higher. Eric Topol contends that this is especially noteworthy for genomic studies driven by machine learning techniques: “First, people of European Ancestry compose most or all of the subjects in large cohort studies, which means that, second, they are of limited value to most people, as so much of genomics of disease and health is ancestry specific” [5]. Prioritizing health equity would not only result in more robust scientific and medical knowledge but would also constitute a step toward engendering quality healthcare for all.

Increasingly, health researchers such as Sacks and Jonathan Metzl propose efforts toward remediating health inequities that center on structural competency. They advocate for well-researched efforts at the institutional level that aim to address the enduring effects of historical oppression. For example, Sacks explains that structural competency involves moving beyond obfuscating framings of racism as a troubled American past or simply an individual failing of “bad” or “uneducated” people. Instead, structural competency demands that we analyze how racism constitutes a structural phenomenon embedded and reproduced in US institutions such as medical schools and healthcare settings [25].

Technology developers and data scientists, moreover, must also be involved in building structural competency across the institutions they navigate in order to produce more robust, just, and effective technological instruments. Data scientist Ben Green affirms that “by developing tools that inform, influence, impact important social or political decisions—who receives a job offer, what news people see, where police patrol—data scientists play an increasingly important role in constructing society” [20]. In consequence, Green argues that it is imperative that data scientists move away from conceptions of technological instruments as simple tools that can “be designed to have good or bad outcomes” and instead recognize how the technologies they are developing “play a vital role in producing the social and political conditions of the human experience” [20]. By this logic, Green asserts that data scientists must also come to recognize themselves as political actors engaged in the “process of negotiating competing perspectives, goals, and values” rather than as neutral researchers merely coding away in their offices [20]. The decisions data scientists make and the responsibilities they hold “cannot be reduced to a narrow professional ethics that lacks normative weight and supposes that, with some reflection, data scientists will make the ‘right’ decisions that lead to ‘good technology’” [20]. As “technology embeds politics and shapes social outcomes,” a position of neutrality remains an “unachievable goal,” Green contends, because, first, “it is impossible to engage in science and politics without being influenced by one’s background, values, and interests” and, second, “striving to be neutral is not itself a politically neutral position—it is a fundamentally conservative one,” as such a stance functions to maintain a radically inequitable status quo [20].

Correspondingly, Green debunks the logic of the common tech refrain: “we shouldn’t let the perfect be the enemy of the good” [20]. Green highlights that data science lacks any theories or coherent discourse “regarding what ‘perfect’ and ‘good’ actually entail” and, furthermore, “fails to articulate how data science should navigate the relationship” between the two notions; instead, such a claim “takes for granted that technology-centric, incremental reform is an appropriate strategy for social progress” [20]. Green then points to the example of criminal risk assessment algorithms; “even if they can be designed not to have racial bias,” he argues, their deployment can “perpetuate injustice by hindering more systemic reforms of the criminal justice system” [20]. While recognizing that data science is capable of improving society, in Green’s assessment, a structurally competent approach demands that algorithmic and data science solutions be “evaluated against alternative reforms as just one of many options rather than evaluated merely against the status quo as the only possible reform” [20]. As Green writes, “There should not be a starting presumption that machine learning (or any other type of reform) provides an appropriate solution for every problem…data science reforms tend to (implicitly if not explicitly) assert that the precise means by which decisions are made is the only variable worth altering. There may be situations in which this assumption is correct, but it should not be made or accepted lightly, without interrogation and deliberation” [20].

Furthermore, patients and patient advocates recommend cultivating patient and health practitioner education in step with developments in technology and healthcare, since a significant part of getting patients the right treatment involves informing them of their treatment options and of any potential consequences and side effects. This mandates that medical care providers be sufficiently educated to guide patients and that educational materials be deliberately designed to be accessible and easily comprehensible (e.g., offering treatment pamphlets in several languages rather than solely in the dominant language). For patient advocates, these three recommendations are critically imbricated in one another. One patient advocate succinctly questioned: “How am I supposed to educate a patient about a new treatment or drug [if] they won’t have access to it?” Experts across the realms of healthcare and technology declare that prioritizing health equity necessitates that we create systems of accountability; educate ourselves on the causes and implications of health inequity; and set our aim ultimately at structural interventions.

9.3 Promoting and establishing additional safeguards

As previously discussed, patients, advocates, and other health professionals are deeply concerned that current legal parameters and regulatory schemes are not robust enough to protect them from the ill effects of potential misuse, including health data breaches and medical data-mining. Alongside patients, legal scholars, biomedical researchers, computer scientists, and genetic privacy experts are sounding the call for a legal overhaul of the statutes governing protections for shared medical data and for genetic information in particular.

Taking the example of genomics and genetics research in a U.S. context, legal experts reason that because genetic information is no longer adequately safeguarded by the protections of HIPAA and GINA, Congress and other legislative bodies may need to pass a broadly applicable, special-purpose genetic privacy law. These researchers also deem it necessary for US policymakers to address the issue of de-identified genetic data. Although legislatures could regulate DNA as personal identifying information in an attempt to redress the legal loopholes of genetic genealogy, LawSeq affiliates caution that such a law would not prevent individuals from adding their personal genomes to online databases for ancestry purposes. As a result, Joh and other legal scholars assert that state legislatures and attorneys general can and must act to set up guidelines concerning genetic surveillance and policing by law enforcement agencies, while Congress and the Federal Trade Commission could address the privacy and security issues of consumer genetic data [27]. Although legal experts do not necessarily advocate for stricter controls on genetic data within biomedical contexts, they do stress the need to regulate the practices of commercial genetic testing companies and data-mining firms. Fortunately, many consumer testing companies are invested in preserving the trust of their customers: a few have formed an inter-market privacy coalition, re-committed to strengthening their consent clauses, and released public statements declaring they are opposed to willingly cooperating with law enforcement [28]. Given that it is virtually impossible to ensure anonymity for genetic information, researchers in medicine, law, and computer science also recommend establishing restrictions on how genetic data are stored and repurposed. Some, like Yaniv Erlich, endorse the idea of attaching cryptographic signatures to genetic profiles and using blockchain technology to curb potential abuses. Others advocate methods of obfuscation such as “differential privacy,” in which noise is introduced to portions of the genetic profile to prevent re-identification and repurposing of the data as well as to control access [29]. Nevertheless, experts across the fields of law, biomedical science, healthcare, and computer science are largely unanimous in asserting the urgent need for stronger legislative protections.
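
To make the idea of differential privacy concrete, the sketch below applies the Laplace mechanism—one standard way of realizing a differential privacy guarantee—to an aggregate query over a hypothetical genetic cohort. The function name, cohort, and parameter values are illustrative assumptions, not a description of any deployed genomic system:

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# All names and numbers here are hypothetical; real genomic deployments
# are considerably more involved.
import numpy as np

rng = np.random.default_rng()

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for the guarantee.
    """
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: how many participants in a cohort carry a given variant?
carriers = 137  # true, sensitive count
print(dp_count(carriers, epsilon=0.5))  # noisier release, stronger privacy
print(dp_count(carriers, epsilon=5.0))  # closer to truth, weaker privacy
```

The design trade-off is visible in the parameter: a smaller epsilon injects more noise and yields stronger privacy at the cost of accuracy, which is one reason experts treat such techniques as complements to, rather than substitutes for, legislative protection.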

In addition to supporting more comprehensive regulatory and legal schemes for protecting patients’ data, patients also want to know how algorithmic systems for medical use will be audited for safety. They are further concerned with how regulatory agencies will monitor AI systems deployed in oncology contexts, given that these systems require regular updates. Will each update be monitored for safe use? How will these bodies guarantee standardization measures for these updates? Who will be responsible for potential instances of malfunction or medical error pertaining to these systems? Patients stressed that legislators, technologists, legal experts, and bioethicists must all be involved in producing answers to these queries and in establishing the auditing agencies necessary to assure enforcement and cooperation.

Still, patients offered yet another crucial safeguard that can be implemented across most university-affiliated research institutes and research-driven corporate enterprises with relative ease: the involvement of patient advocates in overseeing studies. One patient advocate explained: “If I can throw in my two cents, I would encourage companies to involve patients and advocates sooner rather than later. And to set up a patient advisory board sooner rather than later even if they are still in development. Because they are going to give straight up advice and they are going to have knowledge and perspectives that researchers haven’t thought of. There’s no question they will. Researchers don’t know what they don’t know when it comes to working with patients. But if you bring them in sooner rather than later, they can learn as they go along.” As this patient advocate contends, patients, and especially trained advocates, can offer incisive critiques and help researchers reduce the potential harms, vexing pragmatic issues, and major complications patients might encounter as a result of a study or product. Patient advocates can provide invaluable intellectual, sociological, and psychological insight into which issues are most pertinent and compelling to patients and into how researchers and research institutions can best address their needs and concerns.

10. Conclusion

Researchers assert that AI systems can be understood as constitutive of collective contestations of the political realities, ethical liabilities, and financial viabilities immanent to their social production. Following this logic, studying the patient perceptions of AI and AI-led oncology drug development, listening to patient perspectives, and heeding their concerns constitutes a cooperative entry point to preventing harm, avoiding unnecessary risks, and building networks of public consent and approval.

This chapter examined: patient perceptions of AI-enabled healthcare and patients’ present inclination to trust these tools to improve health outcomes; the extent to which they express a desire to be involved in the development of proposed AI systems vis-a-vis data-sharing, based on their existing knowledge; the concerns and questions they bear regarding the integration and deployment of these technologies; the recommendations and suggestions they propose for ensuring patient trust; and, finally, the patient-centered approaches to building frameworks of trust and accountability that other researchers of medicine and algorithmic deployment are advancing. While this study found that cancer patients hold an openness to participating in research and a general optimism for experimental endeavors related to improving patient outcomes, including AI-led systems research and use, it also discovered that patients maintain a vast array of concerns that must be addressed to protect them from a series of potential risks and existing avenues for medical harm and neglect. Specifically, this study discerned that cancer patients are troubled by: a lack of clarity and protections surrounding medical data usage; the potential for emerging technologies to exacerbate existing healthcare inequities; and anemic approaches to resource-sharing, consent procedures, and educational initiatives meant to bolster research participation and patient trust.

Still, this qualitative study has limitations in its scope and aims, its discoveries and discussion. Further research, including quantitative research, may aid in parsing out the complexities of cancer patients’ varied responses to relevant oncology-specific technological developments. In particular, this study could be bolstered by additional comparative, cross-cultural research on the distinctions between U.S. and U.K. patients and on how their contrasting medical care systems may shape their healthcare experiences and their positions toward burgeoning medical technologies.

Patient approval and participation are imperative not only to developing and improving AI systems, given the need for vast amounts of patients’ medical data, but also to ensuring the use and future widespread adoption of these tools, which possess the potential to improve patient outcomes. It is crucial to attend to patients’ concerns, establish stronger frameworks for ensuring patient trust, and implement accountability infrastructures.

Thanks

I am truly grateful to the patients, their relatives, clinicians, nurses, and nonprofit directors and employees who granted me interviews. Thank you for your presence, trust, time and for sharing your experiences, perceptions, and concerns with formidable heaps of honesty and vulnerability.

I also extend my deepest thanks to Geoffroy Dubourg Felonneau for his support, to Belle Taylor for her patience and editing efforts, and to the CCG team for a fruitful year and welcoming environment.

References

  1. KPMG. London, U.K.: KPMG International; 2018
  2. Day L, Joshi I, Woods T, Reem M. Accelerating Artificial Intelligence in Health and Care. London, U.K.: The AHSN Network and the U.K. Department of Health and Social Care; 2018
  3. Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. Journal of the Royal Society of Medicine. 2019;112(1):22-28
  4. Elish MC. The Stakes of Uncertainty: Developing and Integrating Machine Learning in Clinical Care. SSRN. Data & Society Research Institute; 2019. Available from: https://ssrn.com/abstract=3324571
  5. Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books; 2019
  6. Elish MC, Hwang T. New York: Data & Society; 2017
  7. Mitchell TM. Machine Learning. New York: McGraw-Hill; 1997
  8. Bucher T. If…Then: Algorithmic Power and Politics. New York, NY: Oxford University Press; 2018
  9. Mackenzie A. The production of prediction: What does machine learning want? European Journal of Cultural Studies. 2015;18(4-5):429-445
  10. What to expect from AI in oncology. Nature Reviews Clinical Oncology. 2019;16(11):655
  11. Begg K, Tavassoli M. Biomarkers towards personalised therapy in cancer. Drug Target Review. 2017. Available from: https://www.drugtargetreview.com/article/23631/biomarkers-personalised-therapy-cancer/
  12. Ross C, Swetlitz I. IBM pitched Watson as a revolution in cancer care. It’s nowhere close. STAT. 2017. Available from: https://www.statnews.com/2017/09/05/watson-ibm-cancer/
  13. Adamson AS, Welch HG. Op-Ed: Using artificial intelligence to diagnose cancer could mean unnecessary treatments. Los Angeles Times. 2020. Available from: https://www-latimes-com.cdn.ampproject.org/c/s/www.latimes.com/opinion/story/2020-01-12/using-artificial-intelligence-to-diagnose-cancer-could-mean-unnecessary-treatments?_amp=true
  14. Artificial intelligence can entrench disparities—Here’s what we must do. The Cancer Letter. 2018. Available from: https://cancerletter.com/articles/20181116_1/
  15. Matheny M, Israni S, Ahmed M. The Journal of the American Medical Association. Washington, D.C.: JAMA; 2019
  16. Elish MC, Mateescu A. New York: Data & Society; 2017
  17. Myers B. Women and minorities in tech, by the numbers. Wired. Condé Nast; 2018. Available from: https://www.wired.com/story/computer-science-graduates-diversity/
  18. Benjamin R. Informed refusal. Science, Technology, & Human Values. 2016;41(6):967-990
  19. Benjamin R. Cultura obscura: Race, power, and “culture talk” in the health sciences. American Journal of Law & Medicine. 2017;43(2-3):225-238
  20. Green B. Data Science as Political Action: Grounding Data Science in a Politics of Justice. Cambridge, MA; 2019
  21. Hayden CP. When Nature Goes Public: The Making and Unmaking of Bioprospecting in Mexico. Princeton, NJ: Princeton University Press; 2003
  22. O’Neil C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Great Britain: Penguin Books; 2017
  23. Dastin J. Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women. Thomson Reuters; 2018. Available from: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
  24. Angwin J, Larson J, Kirchner L, Mattu S. Machine Bias. ProPublica; 2016. Available from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  25. Sacks TK. Invisible Visits: Black Middle-Class Women in the American Healthcare System. New York: Oxford University Press; 2019
  26. Chen C, Wong R. Black Patients Miss Out On Promising Cancer Drugs. ProPublica; 2019. Available from: https://www.propublica.org/article/black-patients-miss-out-on-promising-cancer-drugs
  27. Joh E. Want to See My Genes? Get a Warrant. The New York Times; 2019. Available from: https://www.nytimes.com/2019/06/11/opinion/police-dna-warrant.html?action=click&module=privacyfooterrecircmodule&pgtype=Article
  28. Gangitano A. DNA testing companies launch new privacy coalition. The Hill. 2019. Available from: https://thehill.com/regulation/lobbying/450124-dna-testing-companies-launch-new-privacy-coalition
  29. Erlich Y, Narayanan A. Routes for breaching and protecting genetic privacy. Nature Reviews Genetics. 2014;15(6):409-421

Notes

  1. Politics, invoked here, does not solely refer to the mechanisms of electoral issues concerning political candidates or parties. Rather, it extends to the “collective social activity”—“public and private, formal and informal, in all human groups, institutions and societies”—which affects who gets what, when, and how [20].
  2. Now known as the National Academy of Medicine.
