2020 INS Annual Meeting
Virtual Conference
October 22-23

Accepted Abstracts



Prize Donors

Stanley Center for Psychiatric Research at Broad Institute
The Kavli Foundation
Center for Neuroscience & Society, University of Pennsylvania



Rethinking Personal Identity: From the Cultural Roots of Neuroenhancement to Peter Kramer's Listening to Prozac

Sara Margarida de Matos Roma Fernandes
Centre of Philosophy University of Lisbon

New developments in neuroscience, especially since the Decade of the Brain, have made neuroenhancement an important domain of philosophical reflection. Neuroenhancement is usually understood as any intervention on the brain that aims to improve its capabilities beyond clinical therapy and good health, such as better intellectual performance, abler minds, and even happier persons. It is the use of certain resources (natural, social, or technological) not for a clear therapeutic purpose, such as treating a disease of the human brain or rehabilitating a dysfunctional brain, but to improve the capacities of healthy individuals, such as mood, cognitive, and moral functions.

Despite the recent scientific and biotechnological advances in human enhancement, I believe that the desire for human improvement is not new: it characterizes human nature, and its Western cultural roots can be traced back at least to Greek civilization. In this article, I discuss selected works of fiction (Greek myths, literature, and films), spanning different centuries, that show how this wish for human improvement is a constant over time, as well as the important anthropological dilemmas that it necessarily raises. Science and technology seek to realize the desire for improvement that is very present in the Western cultural imagination; as Oscar Wilde famously wrote, "Life imitates Art far more than Art imitates Life."

In this paper, I first argue that science imitates certain forms of art (narrative or fictional), since art has the capacity to create and innovate over time and to open new horizons for research and technology. In some ways, science seeks to realize the worlds of artistic creation. Secondly, I emphasize how the impact of science on contemporary Western societies, and its emerging authority, explains a new attempt to ground personal identity in two rapidly evolving domains, genetics and neuroscience. I focus mainly on neuroscience, highlighting its tendency to form a neurobiology of personal identity and to give that philosophical concept a neural grounding by reducing it to brain functions. Thirdly, I criticize this neuroreductionist attempt and offer philosophical arguments for acknowledging several dimensions of personal identity that are not subsumed under a neurobiological approach.


  • DeGrazia, David (2000). Prozac, Enhancement and Self-Creation. The Hastings Center Report, 30(2), 34-40.
  • Glannon, Walter (2011). Brain, Body and Mind: Neuroethics with a Human Face. Oxford: Oxford University Press.
  • Kramer, Peter D. (1996). Listening to Prozac. London: Penguin Books.
  • Noe, Alva (2009). Out of Our Heads. New York: Hill and Wang.
  • Ricoeur, Paul (1990). Soi-même comme un autre. Paris: Éditions du Seuil.

Disclosures: None


The Dangers of Moral Neuroenhancement

Sara Margarida de Matos Roma Fernandes
Centre of Philosophy University of Lisbon

In recent decades, there have been unprecedented advances in science, technology, and economics. We live longer and healthier lives, and we have at our disposal the whole of human knowledge and many resources that make life easier. However, humanity's moral progress does not always seem so impressive; in fact, some world events look like a genuine moral regression. Some authors maintain that humanity currently faces problems so serious that they already put its survival in question and call for moral neuroenhancement, including through the use of biotechnologies. Examples of these huge problems are environmental degradation, bioterrorism, poverty, and social injustice, associated with increasing sexism, racism, and xenophobia. They note that the traditional means for solving these problems, such as family education, socialization throughout life, and the role of different social institutions, have not been sufficient in democratic societies. In this sense, they call for neurotechnological research at the moral level, to be applied to humanity as soon as it is available, arguing that, from a philosophical point of view, there are not only reasons favoring its implementation but that it should become mandatory. It is therefore important to think about the legitimacy of moral enhancement, given the intense neuroethical debate that has taken place on this issue, especially since the publication of Persson and Savulescu's Unfit for the Future.

The idea of using biomedical technology for human improvement is nothing new. Human beings have always tried, and continue to try, to design technologies able to enhance important capacities, such as cognition and mood. Regarding moral neuroenhancement, however, the paper will first try to answer some philosophical questions that immediately arise: what do neuroethicists understand by morality, moral agent, and moral enhancement? Secondly, the paper will try to show its scientific and technological impossibility, since it is based on the misleading neuroreductionist belief that "we are our brain". Following Glannon and Harris, I will also raise three objections to moral neuroenhancement: 1) it offers a very simplified view of the concept of morals; 2) if obligatory, it eliminates the very notion of a moral agent, i.e., one who is autonomous and responsible; 3) it threatens democratic systems. I will argue that, although imperfect, the cultural means of moral modeling typical of democratic societies, e.g. education, socialization, codes of conduct, regulation, and national and international agreements, remain the most plausible and consistent means of moral enhancement.


  • Gyngell, C. and Easteal, S. (2015). Cognitive Diversity and Moral Enhancement. Cambridge Quarterly of Healthcare Ethics, 24(1), 66-74.
  • Harris, J. (2011). Moral Enhancement and Freedom. Bioethics, 25(2), 102-111.
  • Persson, I. and Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
  • Rakic, V. (2014). Voluntary moral enhancement and the survival-at-any-cost bias. Journal of Medical Ethics. http://jme.bmj.com/
  • Schlag, S. (2016). The Tragedy of Biomedical Moral Enhancement. Neuroethics, 12, 5-17. doi:10.1007/s12152-016-9284-5

Disclosures: None


Aaron Hernandez and Chronic Traumatic Encephalopathy: A Case Study on Damaged Brains & Legal Culpability

Sunidhi Ramesh
Sidney Kimmel Medical College

Background: Chronic traumatic encephalopathy (CTE) is a neurodegenerative disorder caused by repeated head injuries over time. Characterized by a collection of neuropsychiatric symptoms (e.g., cognitive impairment, personality changes, depression, and suicidality), CTE histopathologically resembles Alzheimer's disease and can only be definitively diagnosed post-mortem on autopsy. There is no cure.

Introduction: Aaron Josef Hernandez began playing American football at eight years old, rising to fame over the years to eventually join the New England Patriots and become "the NFL's best tight end." On June 8, 2013, Hernandez was charged with first-degree murder; a year later, he was indicted on murder charges for a double homicide in Boston. In 2017, after being convicted of the first charge, Hernandez was found hanging by his bedsheets in his jail cell. An autopsy later found that he suffered from the most severe case of CTE ever recorded in a person his age. He was 27 years old.

The post-mortem findings in Hernandez's brain tell an eerie story that brings to light a bigger question: would we still have considered Hernandez guilty knowing the extent of his brain damage?

Discussion: Aaron Hernandez was unaware that he had CTE during his lifetime. His lawyers and doctors did not take steps to determine whether his brain function was normal. Notably, while neuroimaging findings do not correlate fully with cognitive function, a basic CT/MRI of Hernandez's brain may have raised alarms about his mental status and prompted further psychological testing.

Perhaps, then, this serves as a case supporting the use of neuroimaging in courts. Using the Hernandez story as a framework, this poster will outline what role neuroimaging can play in civil and criminal courts and propose prospective guidelines for its use.

Conclusions: Within the legal sphere, a defendant can show that, owing to his disorder, his behavior was not voluntary, that he lacked the mental state required to commit the crime, or that he is legally insane. This defense, with the help of neuroimaging, could have changed the outcome of the Hernandez trial. However, the causal link between a neurologic diagnosis and an individual's subsequent mental state and/or conduct remains unclear. The legal system must closely follow developments in neuroimaging research to establish careful guidelines for the permissibility of neuroimaging in courts; such potential guidelines (understanding the limitations of neuroimaging, privacy protections, reliability and accuracy, etc.) will be explored in this poster presentation.


  • Aharoni, Eyal, et al. "Can neurological evidence help courts assess criminal responsibility? Lessons from law and neuroscience." Annals of the New York Academy of Sciences 1124.1 (2008): 145-160.
  • Wasserman, David, and Josephine Johnston. "Seeing responsibility: can neuroimaging teach us anything about moral and legal responsibility?" Hastings Center Report 44.s2 (2014): S37-S49.
  • McKee, Ann C., et al. "Chronic traumatic encephalopathy in athletes: progressive tauopathy after repetitive head injury." Journal of Neuropathology & Experimental Neurology 68.7 (2009): 709-735.
  • Hollin, Gregory. "Making a murderer: media renderings of brain injury and Aaron Hernandez as a medical and sporting subject." Social Science & Medicine 244 (2020): 112598.
  • Price, Greg. "NFL Player Charged With Murder: Who Is Suspect Aaron Hernandez? Former New England Patriot Appears In Court, Pleads Not Guilty." IBTimes.com (2013).
  • Redding, Richard E. "The brain-disordered defendant: Neuroscience and legal insanity in the twenty-first century." Am. UL Rev. 56 (2006): 51.
  • New, John G. "If you could read my mind: implications of neurological evidence for twenty-first century criminal jurisprudence." The Journal of Legal Medicine 29.2 (2008): 179-198.

Disclosures: None


Media Representation of Social and Ethical Issues in Frontotemporal Dementia

Anirudh Nair (1), Veljko Dubljevic (1)
1. North Carolina State University

Dementia, and especially frontotemporal dementia (FTD), erodes many of the rights that are enjoyed by most individuals in society. The increasing deficits in cognition and decision-making lead to a legitimate reduction in autonomy [1]. The widespread fear of dementia, which may be fueled by the media, leads to social exclusion and stigmatization of those living with dementia and their families, contributing to significant mental health burdens [2]. This project aims to construct a clear understanding of the English-language print media representation of: a) FTD in general, b) the effects it has on persons affected by the disease and their caregivers, and c) criminal or socially inappropriate behavior associated with the disease. Using specific formal search terms, we retrieved a sample of 137 articles from the NexisUni database to identify the media representations of FTD and explore the social discourses of labelling persons as 'demented.' Our initial results suggest a relatively mature and well-rounded depiction of the disease in the public media. Of the articles in our sample, 122 approached discussions regarding dementia with a balanced tone. However, 58 articles portrayed persons living with dementia with negative moralizing metaphors, and 30 articles mentioned stigmatized behavior in relation to the disease. Our project also revealed a few concerning trends in the media that point to an uninformed portrayal of the condition, underscoring the need for further discussion and coverage. In total, we identified 29 reports of stigma associated with the disease, along with 61 reports of distress experienced by persons with FTD and their caregivers as a result of the condition. Similarly, although we found 32 reports of crimes associated with the disease, there were only 9 instances of persons with FTD being acquitted of their crimes on the grounds of being declared incapable of intent.
Stigma towards FTD is a growing problem that can only be solved with a proper assessment of the conditions of healthcare delivery and social inclusion that enable persons with dementia to pursue dignified lives [3]. Although resources can limit the care that societies can reasonably provide to such persons, they do not limit the available knowledge and respect that can be offered to people with FTD [4]. Ultimately, an accurate and dignified representation of FTD in the media is essential for informing the public about prominent symptoms of the disease and preventing punitive approaches [5] towards individuals living with FTD.


  • [1] Rascovsky, K., J.R. Hodges, D. Knopman, et al. 2011. Sensitivity of revised diagnostic criteria for the behavioural variant of frontotemporal dementia. Brain, 134(9), 2456-2477.
  • [2] Batsch, N.L., Mittelman, M.S., 2012. World Alzheimer Report 2012: Overcoming the stigma of dementia. Alzheimer’s Disease International, London.
  • [3] Dubljević, V. 2019. Disease and Wellness Across the Lifespan: A Global Perspective on Mental Health Burden of Dementia. In Global Mental Health and Neuroethics, edited by D. Stein and I. Singh, 225-235. Amsterdam: Elsevier.
  • [4] Dubljević, V. and Bottenberg, F. (Eds.): Living with Dementia: Neuroethical Issues and International Perspectives, Cham: Springer, In Press.
  • [5] Dubljević, V. 2020. The Principle of Autonomy and Behavioral Variant Frontotemporal Dementia, Journal of Bioethical Inquiry, https://doi.org/10.1007/s11673-020-09972-z.

Disclosures: None


Public Portrayal of Ethical and Social Issues in Brain Organoid Research

Abigail Presley (1), Leigh Ann Samsa (1), Veljko Dubljevic (1)
1. North Carolina State University

Background and Problem: Grown from induced pluripotent stem cells, human brain organoids are a valuable research tool for studying brain development, physiology, and pathology. This biotechnology offers an ethical, consistent, and personalized alternative to animal models and opens up research avenues previously inaccessible because they would require human experimentation. Yet, a host of potential ethical concerns are inherent in their creation. A growing group of bioethicists acknowledges the moral imperative to develop brain organoid technologies while calling for caution in this research (Hyun 2020). Although a relatively new technology (Lancaster 2013), brain organoids and their uses are already being discussed in the media literature. Media literature informs the public and policymakers but has the potential for utopian or dystopian distortions (Dubljević 2019). Thus, it is important to understand how this technology is portrayed to the public.

Methods: To investigate how brain organoids are displayed to the public, we conducted a systematic review of media literature indexed in the Nexis Uni database from 2013-2019. News and media source articles passing exclusion criteria (n = 92) were scored to evaluate relevant themes. Themes were validated with a pilot sample before being applied to the dataset. Thematic analysis assessed article tone, reported potential for the technology, and the scientific, social, and ethical contexts surrounding brain organoids research.

Findings: Brain organoid publications became more frequent from 2013 to 2019 (n = 7 in 2013 and n = 44 in 2019). We observed increases in positively and negatively toned articles, suggesting growing polarization. While many sources discuss realistic applications of brain organoids, some suggest treatments and cures beyond the scope of the current technology (n = 19). This could potentially lead to overhyping the technology and disillusioning patients and families by offering what turns out to be false hope. In the ethical narrative, we observe a preoccupation with issues such as the development of artificial consciousness and "humanization" of organoid-animal chimeras. Conversely, issues of regulation, ownership, and accuracy of the organoid models are rarely discussed.

Significance: Given the power that media literature has to inform or misinform the public, it is important that media literature provides an accurate and balanced reflection of the therapeutic potential and associated ethical issues regarding brain organoid research. Our study suggests that more attention should be given to accuracy of reporting in the media literature.


  • Dubljević, V. (2019). Neuroethics, Justice and Autonomy: Public Reason in the Cognitive Enhancement Debate, Heidelberg: Springer.
  • Hyun, I., Scharf-Deering, J.C., and Lunshof, J.E. (2020). Ethical issues related to brain organoid research. Brain Res. doi: 10.1016/j.brainres.2020.146653.
  • Lancaster, M. A., Renner, M., Martin, C., Wenzel, D., Bicknell, L. S., Hurles, M. E., . . . Knoblich, J. A. (2013). Cerebral organoids model human brain development and microcephaly. Nature, 501(7467), 373-379. doi:10.1038/nature12517

Disclosures: None


Incoming Ethical Issues for Deep Brain Stimulation: When Long-Term Treatment Leads to a 'New Form of the Disease'

Frederic Gilbert (1), Mathilde Lancelot (2)
1. School of Humanities, University of Tasmania; 2. SPHERE, Paris Diderot University/University of Paris

In recent years, Deep Brain Stimulation (DBS) has gained considerable attention in ethics due to intense debate about the putative effects of the technology on personality, identity, agency, autonomy, authenticity, and self. However, there is almost no ethical discussion of emerging clinical studies demonstrating evidence that DBS may increase the life expectancy of patients with Parkinson's Disease (PD) (Gilbert & Lancelot 2020). Recent longitudinal follow-up studies have reported a significant increase in postoperative survival rates of patients with PD implanted with DBS as compared with those not implanted (Lau et al 2019; Ngoga et al 2014; Merola et al 2014). Although DBS tends to increase life expectancy for most patients with PD, this medical benefit does not come without attendant negative consequences. For example, emerging forms of iatrogenic harm are sometimes induced, harms which were not initially expected when clinicians proposed neurosurgery and patients or their guardians consented to the treatment. We report and discuss the clinical case of a patient who was implanted with DBS more than 20 years ago (at the time of writing) and is now experiencing unexpected stages of PD. This case illustrates how extending the life span without improving quality of life may introduce a burden of harms for patients and families. It also shows why we should prepare for the growing number of implanted PD patients who gain longevity but reach severe stages of disease that diminish quality of life. This newly observed effect of DBS treatment requires us to explore ethical questions related to iatrogenic harms, informed consent, end of life, and caregivers' burden.


  • Gilbert, F. and Lancelot, M. 2020. Incoming ethical issues for Deep Brain Stimulation: When long-term treatment leads to a "new form of the disease". Journal of Medical Ethics. doi:10.1136/medethics-2019-106052
  • Ngoga D, Mitchell R, Kausar J, et al. 2014. Deep brain stimulation improves survival in severe Parkinson's disease. J Neurol Neurosurg Psychiatry; 85(1):17–22.doi:10.1136/jnnp-2012-304715
  • Merola A, Rizzi L, Zibetti M, et al. 2014. Medical therapy and subthalamic deep brain stimulation in advanced Parkinson's disease: a different long-term outcome? J Neurol Neurosurg Psychiatry 2014;85(5):552–9.doi:10.1136/jnnp-2013-305271
  • Lau B, Meier N, Serra G, et al. 2019. Axial symptoms predict mortality in Parkinson’s disease patients with subthalamic stimulation. Neurology. 10.1212/WNL.0000000000007562

Disclosures: None


Knowledge Representation for Neuroethics

Paola Di Maio
Center for Technology Ethics

Neuroscience research is producing considerable amounts of data and insight, which need to be interpreted and understood so that they can inform scientific inquiry in diverse domains, from psychology to cognition, and can also inspire systems development, such as machine learning and AI architectures.

In this context, Neuroethics can not only guide the ethical aspects of neuroscience research and experimentation, but it can also be applied to develop and evaluate the performance of information technology systems inspired by the human brain.

Considering the interdisciplinarity and complexity of both neuroscience and informatics, neuroethics is not only concerned with developing adequate understanding and good practices to ensure the considerate use of advanced technologies for investigating the human brain and mind; it can also support the considerate application of knowledge derived from human brain research to the development and use of biologically inspired hardware and software.

The proposed talk investigates the relationship between neuroethics and bias (in cognition, in research, and in algorithms) from a Knowledge Representation (KR) point of view. Knowledge representation consists of a set of methods and techniques well understood in AI, which can be applied in neuroethics.

Bias can be considered a consequence of the failure to adopt appropriate knowledge modelling and representation.

The talk includes an overview of the topic of Knowledge Representation for Neuroethics, with emphasis on neurosymbolic integration, and provides a perspective from socio-technical complexity. It illustrates details of current research with preliminary results.


  • Levy, N. (2008). Introducing neuroethics.
  • Bird, Stephanie J. "Potential for Bias in the Context of Neuroethics." Science and Engineering Ethics 18.3 (2012): 593-600.
  • Hitzler, P., Bader, S., and d'Avila Garcez, A. S. Ontology learning as a use case for artificial intelligence. In A. S. d'Avila Garcez, J. Elman and P. Hitzler, editors, Proceedings of the IJCAI-05 Workshop on Neural-Symbolic Learning and Reasoning, NeSy'05, Edinburgh, UK, August 2005.
  • Baeza-Yates, Ricardo. 2018. Bias on the Web. Commun. ACM 61, 6 (May 2018), 54-61. https://doi.org/10.1145/3209
  • Neural-Symbolic Learning and Reasoning: Contributions and Challenges. In Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches: Papers from the 2015 AAAI Spring Symposium.

Disclosures: None


Safety, Security, and Responsibility: Managing Incidental Findings, Adverse Events, and Privacy Concerns in the Use of Commercial Neurodevices

Nicole Minielly (1), Viorica Hrincu (1), Judy Illes (1)
1. Neuroethics Canada

Brain sensing technologies that have historically only been accessible to researchers and physicians have now been adapted for use by the general public. This industry is predicted to be worth $12.22 billion by 2021 (Markets and Markets, 2017). Alongside claims of benefit (Coates McCall et al., 2019) reside ethical concerns surrounding privacy, incidental health findings, and adverse events (Ienca et al., 2018; Minielly et al., 2020a; Wexler, 2019). In this study, we conducted semi-structured interviews with 12 company representatives (chief executive officers or their designates): 8 from neurowearable companies that produce recording devices (e.g., electroencephalography [EEG]); 3 from companies that produce stimulating devices (e.g., transcranial direct current stimulation [tDCS]); and one from a company whose device does both (e.g., EEG+tDCS), to characterize policies surrounding these concerns. Participants reported few occurrences of incidental findings or adverse events, and formal policies were generally lacking (Minielly et al., 2020a). Through a structured secondary analysis of the interview data, we identified four privacy concerns, pertaining to data collection and management, user autonomy and information transparency, the exceptionalism of brain data, and regulatory systems, that were reported to be of far greater concern to participants than unexpected findings (Minielly et al., 2020b). We offer four practical recommendations to guide consumers, the neurotechnology industry, and ethicists alike:

1. Minimize data collection and increase data security.

2. Standardize and make explicit incidental health finding and adverse event management procedures.

3. Encourage regulatory standards about neurotechnologies for health and wellness.

4. Support better harmonized international privacy standards.

Industry endeavours to democratize access to brain health through these devices, as one interview participant highlighted. This perceived benefit, which drives this lucrative industry, must be balanced with improved regulatory and industry standards to promote ethical innovation.


  • Coates McCall, I., Lau, C., Minielly, N., & Illes, J. (2019). Owning Ethical Innovation: Claims about Commercial Wearable Brain Technologies. In Neuron. https://doi.org/10.1016/j.neuron.2019.03.026
  • Ienca, M., Haselager, P., & Emanuel, E. J. (2018). Brain leaks and consumer neurotechnology. In Nature Biotechnology. https://doi.org/10.1038/nbt.4240
  • Minielly, N., Hrincu, V., & Illes, J. (2020a). A View on Incidental Findings and Adverse Events Associated with Neurowearables in the Consumer Marketplace. In I. Bard & E. Hildt (Eds.), DNB 3: Ethical Dimensions of Commercial and DIY Neurotechnologies (1st ed.). Elsevier, Academic Press.
  • Minielly, N., Hrincu, V., & Illes, J. (2020b). Privacy Challenges to the Democratization of Brain Data. IScience. https://doi.org/10.1016/j.isci.2020.101134
  • Wexler, A. (2019). Separating neuroethics from neurohype. In Nature Biotechnology. https://doi.org/10.1038/s41587-019-0230-z

Disclosures: None


Indignation for Moral Violation Suppresses the Tongue Motor Cortex: Preliminary TMS Evidence

Carmelo M. Vicario (1), Robert D. Rafal (2), Giuseppe Di Pellegrino (3), Chiara Lucifora (1), Mohammed Ali Salehinejad (4), Michael A. Nitsche (4), Alessio Avenanti (3)
1. University of Messina, 2. University of Delaware, 3. University of Bologna, 4. Leibniz Research Center for Working Environment and Human Factors

The neosentimentalist perspective (Haidt, 2001) proposes a causal role of emotions in moral judgment and suggests the existence of a genuine, biologically based link between disgust and morality. Yet, it remains unclear whether metaphorical expressions linking disgust and morality are genuinely shared at the cognitive/neural level (Royzman and Kurzban, 2011). Using transcranial magnetic stimulation (TMS), we provide new insights into this debate by measuring motor-evoked potentials (MEPs) from the tongue. MEPs were induced by TMS of the tongue representation of the primary motor cortex (TNG-M1) while healthy participants were presented with vignettes of moral transgressions. In this way, we tested whether moral indignation, felt while evaluating moral vignettes, affected the excitability of the TNG-M1. Since morality is an individual experience potentially influenced by several variables, participants provided subjective moral disapproval ratings for each vignette. In accordance with the results of a recent study by our group (Vicario et al., 2017) documenting reduced TNG-M1 excitability during exposure to pictures of disgusting foods or facial expressions of distaste, we found that vignettes of highly disapproved moral violations reduced TNG-M1 excitability. Moreover, TNG-M1 excitability and moral indignation were linearly correlated: the higher the moral indignation, the lower the TNG-M1 excitability. Corresponding changes in MEPs were not observed in a non-oral control muscle (the first dorsal interosseous, FDI), suggesting a selective decrease of TNG-M1 excitability. Our findings support a shared neural representation of disgust and morality at the central nervous system level and provide somatotopic evidence supporting the hypothesis that morality might have an oral origin (Chapman et al., 2009).


  • Chapman, H. A., Kim, D. A., Susskind, J. M., & Anderson, A. K. (2009). In bad taste: Evidence for the oral origins of moral disgust. Science, 323(5918), 1222-1226.
  • Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological review, 108(4), 814.
  • Royzman, E., & Kurzban, R. (2011). Minding the metaphor: The elusive character of moral disgust. Emotion Review, 3(3), 269-271.
  • Vicario, C. M., Rafal, R. D., Borgomaneri, S., Paracampo, R., Kritikos, A., & Avenanti, A. (2017). Pictures of disgusting foods and disgusted facial expressions suppress the tongue motor cortex. Social cognitive and affective neuroscience, 12(2), 352-362.

Disclosures: None


How Can Neuroscience and Neurotechnology Help Answer Societal Questions in the Face of Global Health Crises?

Laure Tabouy (1), (2), (3)
1. University of Paris, 2. Human Technologie Foundation, 3. MaatPharma Biotechnologie

How can neuroscience help everyone live with and care for their brain? Understanding and studying human behaviour allows us to measure the impact of confinement on each of us and to identify those who need help. The challenges are to understand the psychological repercussions of confinement and to see how neuroscience and neurotechnology can be valuable tools for dealing with the health crisis. Neuroscience is changing our traditional philosophical and ethical views by providing information about the biological basis of our moral behaviour. The exceptional status of the brain is at the heart of neuroethical thinking and discourse. Neuroethics is an interdisciplinary field at the intersection of the humanities and the neurosciences. Its aim is to help understand how neurosciences and neurotechnologies can affect society, human life, relationships, daily life, and the labour market, and how they can help provide solutions and answers to citizens' questions in the context of a health crisis. In the face of dramatic emergency circumstances, such as the world has been experiencing since the beginning of 2020, it is essential to refocus on the human being, to take care of every person and of all humankind. At the heart of this crisis, the stakes are as much social as they are medical. The coronavirus feeds on physical fragility and increases social, psychological, and spiritual fragility and isolation. It first reaches the most precarious and creates new precariousness linked to the economic crisis. At a time when technologies and artificial intelligence shape our modern world and take up more and more space to the detriment of the human being, ethics and deontology represent the best way to put the human being back at the center. Reflection on bioethical and neuroethical issues will make it possible to take an adjusted turn, centred on benevolence and the common good, and to reflect on respect for values and human intelligence.
But this is a double-edged sword: collecting and analyzing brain data in real time seems increasingly simple and within everyone's reach, without the hindsight needed to grasp the real meaning of these data, and it could one day allow people's thoughts to be read, controlled, and manipulated. It is becoming clear that the boundaries between medical and non-medical uses of neuroscience and neurotechnology are becoming very porous, inviting us to reflect on neuroethical issues in order to put safeguards on these uses.


  • Tabouy, L. Neuroethical Issues in Times of Health Crisis: What Roles for Neurosciences, AI, and Neurotechnologies? Preprints 2020, 2020060192 (doi: 10.20944/preprints202006.0192.v1).
  • Tabouy, L., Thomasset, L., and Gueydier, P. How the Covid-19 Epidemic Reactivates the Issues of Nudging, with Its Socio-Political and Legal Limits. 2020. In submission.
  • Tabouy, L. How Neurosciences and Neurotechnologies Can Help Us Overcome the Health Crisis. The Conversation, 2020. https://theconversation.com/comment-les-neurosciences-et-les-neurotechnologies-peuvent-nous-aider-a-surmonter-la-crise-sanitaire-137922

Disclosures: None


The Ethical Examination of Deep Brain Stimulation's 'Reversibility' and 'Last Resort' Statuses

Ian Stevens (1), Frederic Gilbert (1)
1. Philosophy & Gender Studies, University of Tasmania

Deep brain stimulation (DBS) interventions are being investigated for the management of treatment-resistant psychiatric illnesses. These interventions require the invasive intracranial implantation of probes that deliver high-frequency electrical neurostimulation aimed at providing therapeutic symptom relief in severe refractory disorders including Major Depressive Disorder, Obsessive-Compulsive Disorder, and Anorexia Nervosa. DBS devices are commonly described in the neuroethical, psychiatric, and neurosurgical literature as a reversible treatment utilized as a last-resort intervention for refractory psychiatric patients. However, both the ‘reversible' and ‘last resort' statuses of these interventions are rarely expanded upon. Importantly, the use of DBS to target neurological pathologies (e.g., Parkinson's Disease and Dystonia) has demonstrated the safety and efficacy of DBS interventions and thus encouraged their continued use in clinical settings. As DBS treatments for these neurological diseases continue to improve and are indicated for earlier use in a disease's timeline, there are ethical concerns that earlier implementation may also come to be prescribed for psychiatric illnesses. Such concerns include patient vulnerability, limitations in the knowledge of psychiatric disorder neurophysiology, and possible restrictions of patient autonomy. Therefore, this research examined and advanced clarifications of both the ‘reversible' and ‘last resort' statuses of DBS to better clarify its use in psychiatric interventions. To do this, evaluative differences between classic open-loop and novel closed-loop DBS treatments were used to demonstrate how patient autonomy is a paramount guiding principle when implementing these devices earlier in a disease's timeline.
In presenting the clarifications of ‘reversible' and ‘last resort' statuses, the ethical tensions of early DBS interventions, following the improvement in their efficacy, can be better addressed to assist in providing psychiatric patients with more quality of life years in line with their values.


  • 1. Neumaier F, Paterno M, Alpdogan S, Tevoufouet EE, Schneider T, Hescheler J, et al. Surgical Approaches in Psychiatry: A Survey of the World Literature on Psychosurgery. World Neurosurg. 2017 Jan 1;97:603-634.e8.
  • 2. Pugh J. No going back? Reversibility and why it matters for deep brain stimulation. J Med Ethics. 2019 Apr 1;45(4):225–30.
  • 3. Mundale J. Reversibility and Deep Brain Stimulation. J Cogn Neuroethics. 2016;3(4):97–111.

Disclosures: None


Stigma, Discrimination, and Social Marginality: Uncovering Perceptions towards Neurodevelopmental Disability

Sunidhi Ramesh (1), Lohit Velagapudi (1)
1. Sidney Kimmel Medical College

Background: Neurodevelopmental disabilities arise from chronic disorders involving deficient growth of the brain and central nervous system. These disorders manifest as impairments of cognition, communication, language, and activities of daily living. Despite the forward evolution of societal and cultural perceptions of disability in the United States, "stigmatizing qualities... [still often] attract the attention of nondisabled persons, often triggering prejudicial thoughts and attitudes and discriminatory behaviors." For persons with a neurodevelopmental disability, these experiences can threaten their personhood, justice, and overall day-to-day existence.

While current research acknowledges that this phenomenon of stigmatization exists, it fails to characterize whether the agents who perpetuate stigma have experiences that predispose them to doing so. In other words, are individuals who experience discrimination and stigma in their day-to-day lives more or less likely to stigmatize and discriminate against another (or, in this study, a disabled) individual? This study aims to evaluate perceptions of a neurologically disabled individual in media, with prior experience of discrimination as the independent variable, using an audience reception methodology.

Methods: A total of 50 volunteers were asked to watch a segment from Season 4, Episode 10 of American Horror Story. The segment portrays the experiences of a young woman, Pepper, who has microcephaly (a rare neurological condition in which the brain does not develop properly). Participants were then asked to answer a 20-item questionnaire. Statistical analyses included the chi-squared test, Fisher's exact test, the Kruskal-Wallis test, and logistic regression.

Results: Respondents who scored higher on the Williams et al. (2016) scale of "everyday discrimination" were more likely to feel sympathetic towards Pepper (p=0.007). They were also more likely to perceive her character as "respectful" and "helpful" (p=0.032, p=0.049), and her overall situation as "worrisome" and "sad" (p=0.027, p=0.012). Furthermore, individuals who scored higher on the discrimination scale were less likely to believe that it would be difficult to trust someone like Pepper (p=0.001). These respondents were also less likely to feel unsure about what to say or do around someone like Pepper (p=0.002).

Conclusion: Individuals who experience discrimination and stigma in their day-to-day lives are less likely to apply stigma to a person with a neurodevelopmental disability. They are also more likely to perceive such persons positively and with sympathy. This suggests that a person's tendency to perpetuate neither stigma nor negative perceptions of neurodevelopmental disability may be correlated with their prior life experiences and the worldviews those experiences shape.


  • 1. Bernat, J. L. (2008). Ethical issues in neurology. Lippincott Williams & Wilkins.
  • 2. Corrigan, P. W. (2014). The stigma of disease and disability: Understanding causes and overcoming injustices. Washington, DC: American Psychological Association.
  • 3. Mullen, B., Brown, R., & Smith, C. (1992). Ingroup bias as a function of salience, relevance, and status: An integration. European Journal of Social Psychology, 22(2), 103-122.
  • 4. Brewer, M. B. (2007). The social psychology of intergroup relations: Social categorization, ingroup bias, and outgroup prejudice.
  • 5. Thompson, M. P. (1993). Reception theory and the interpretation of historical meaning. History and Theory, 248-272.
  • 6. Williams, D. R. (2016). Measuring discrimination resource.

Disclosures: None


Moral Intuitions Regarding NIBS Interventions in Criminal Offenders

Corey Hill Allen (1), Eddy Nahmias (1), Eyal Aharoni (1)
1. Georgia State University

The use of non-invasive brain stimulation (NIBS) techniques to short-circuit the causes of antisocial behavior in criminal offenders is an emergent field of research, and it raises complicated neuroethical implications [1, 2, 3, 4]. However, people's moral intuitions regarding the uses of such technologies are unclear. For instance, could a NIBS intervention ever be seen as an adequate replacement for more traditional and retributive forms of punishment? If so, what goals must the intervention accomplish? Is rehabilitation of antisocial behavior sufficient? If not, do people's intuitions demand that the offender understand that what they did was wrong, or that they experience remorse? And to what extent do people require evidence that the offender suffered as part of their punishment?

In an online experimental vignette survey (N = 444), we examined whether mock juror punishment decisions were influenced by descriptions of a NIBS intervention that independently varied components of offender understanding (an intervention that did vs. did not cause the offender to understand the wrongfulness of their actions) and suffering (a painful vs. non-painful intervention). Across cases, recidivism was controlled for by presenting the NIBS as preventing the offender from committing any further crimes.

We hypothesized that (1) NIBS interventions that cause the offender to understand the wrongfulness of their crime would increase satisfaction with the prospect of offender release and decrease recommended prison sentences, and that (2) painful NIBS interventions would have similar effects. Furthermore, we hypothesized that (3) NIBS inducing both understanding and pain would lead to the highest satisfaction and the lowest prison sentencing recommendations.

When NIBS made the offender understand why the crime they committed was wrong, participants reduced their recommended prison sentences (M = -1.66 years) and were more satisfied (M = 3.87 satisfaction) with the prospect of offender release compared to NIBS that did not induce changes in understanding (M = -1.16 years and M = 3.44 satisfaction), p < .05. Contrary to our hypotheses, whether the NIBS induced pain in the offender had no effect on prison sentencing recommendations, p = .34, or satisfaction, p = .68, and there were no interaction effects, p = .68 and p = .78, respectively.

We discuss how these results inform our understanding of punitive goals in light of consequentialist, deontological, and communicative theories of punishment, and provide brief evaluation of the public's moral intuitions regarding NIBS as a rehabilitative replacement for more traditional forms of punishment.


  • [1] Choy, O., Raine, A., & Hamilton, R. H. (2018). Stimulation of the prefrontal cortex reduces intentions to commit aggression: a randomized, double-blind, placebo-controlled, stratified, parallel-group trial. Journal of neuroscience, 38(29), 6505-6512.
  • [2] Holmen, S. J. (2020). Respect, Punishment and Mandatory Neurointerventions. Neuroethics.
  • [3] Molero-Chamizo, A., Riquel, R. M., Moriana, J. A., Nitsche, M. A., & Rivera-Urbina, G. N. (2019). Bilateral prefrontal cortex anodal tDCS effects on self-reported aggressiveness in imprisoned violent offenders. Neuroscience, 397, 31-40.
  • [4] Romero-Martínez, Á., Bressanutti, S., & Moya-Albiol, L. (2020). A systematic review of the effectiveness of non-invasive brain stimulation techniques to reduce violence proneness by interfering in anger and irritability. Journal of clinical medicine, 9(3), 882.

Disclosures: None


Neurostimulation, Doping, and the Spirit of Sport

Jonathan Pugh (1), Christopher Pugh (2)
1. The Oxford Uehiro Centre for Practical Ethics, University of Oxford; 2. Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University

There is increasing interest in using neuro-stimulation devices to achieve an ergogenic effect in elite athletes. Although the World Anti-Doping Agency (WADA) does not currently prohibit neuro-stimulation techniques, a number of researchers have called on WADA to consider its position on this issue. Focusing on transcranial direct current stimulation (tDCS) as a case study of an imminent so-called ‘neuro-doping' intervention, we argue that the emerging evidence suggests that tDCS may meet WADA's own criteria (pertaining to safety, performance-enhancing effect, and incompatibility with the ‘spirit of sport') for a method's inclusion on its list of prohibited substances and methods. We begin by surveying WADA's general approach to doping, and highlight important limitations to the current evidence base regarding the performance-enhancing effect of pharmacological doping substances. We then review the current evidence base for the safety and efficacy of tDCS, and argue that despite significant shortcomings, there may be sufficient evidence for WADA to consider prohibiting tDCS, in light of the comparable flaws in the evidence base for pharmacological doping substances. In the second half of the paper, we argue that the question of whether WADA ought to ban tDCS turns significantly on the question of whether it is compatible with the ‘spirit of sport' criterion. We critique some of the previously published positions on this, and advocate our own sport-specific and application-specific approach. Despite these arguments, we finally conclude by suggesting that tDCS ought to be monitored rather than prohibited due to compelling non-ideal considerations.


  • Pugh, J., & Pugh, C. (2020). Neurostimulation, Doping, and the Spirit of Sport. Neuroethics, 16 May 2020.
  • Davis, N. J. (2013). Neurodoping: Brain Stimulation as a Performance-Enhancing Measure. August 2013, 649-653.
  • Imperatori, Milbourn, and Garasic (2018). Would the Use of Safe, Cost-Effective tDCS Tackle Rather than Cause Unfairness in Sports? 377-387.
  • Angius, Hopker, and Mauger (2017). The Ergogenic Effects of Transcranial Direct Current Stimulation on Exercise Performance. 90.

Disclosures: "The research for this paper was funded by the Wellcome Trust, grant number 203195/Z/16/Z. The paper on which this abstract is based has recently been published online and will form part of a forthcoming special issue of Neuroethics."


Neuroethical Issues in Adopting Brain Imaging for Personalized Chronic Pain Management: Attitudes of People with Lived Experience of Chronic Pain

Daniel Buchman (1,2,3,4), Monica V C de Oliveira (2,3,4), Ariana Besik (2), Karen Deborah Davis (2,5)
1. Bioethics Department, University Health Network; 2. Krembil Brain Institute, University Health Network; 3. University of Toronto Joint Centre for Bioethics; 4. Dalla Lana School of Public Health, University of Toronto; 5. Department of Surgery and Institute of Medical Science, University of Toronto

Chronic pain is a global public health problem that affects ~20% of North Americans [1, 2]. Many chronic pain sufferers are misdiagnosed, inadequately treated, and live with a poor quality of life. The subjective nature of pain is situated in a biomedical culture that relies on objective indicators of disease. This creates a burden for people with lived experience of chronic pain (PWLE) to demonstrate to others that they have chronic pain. Advanced brain imaging technologies (e.g., fMRI) have the potential to provide objective data to help inform personalized treatment plans for pain [3-5]. However, neuroethical issues in adopting these technologies include patient openness to new technologies and concerns regarding the dismissal of self-reports of pain, data privacy, and stigma. As a first step to explore these issues amongst stakeholders, we conducted an online survey of PWLE across Canada to assess their attitudes toward neuroethical issues related to the use of brain imaging to detect and treat chronic pain. A total of 349 PWLE completed a 41-item questionnaire to assess attitudes toward 1) new brain imaging technologies, 2) privacy and security of brain data, 3) stigma, 4) treatment, 5) access to brain imaging, and 6) objective representations of subjective experiences. Most participants (>90%) were open to using brain imaging technologies for diagnostic, predictive, and therapeutic decision-making. Many (>60%) participants felt they would be more confident in a diagnosis and treatment plan based on a brain scan and that their healthcare provider would be more likely to believe they had chronic pain based on a brain scan. However, half the participants agreed that brain scans could be used to dismiss their self-report of pain. Most (>75%) PWLE believed that there were policies and procedures in place to protect the privacy of their brain data.
Nonetheless, >50% of participants were concerned about privacy issues, and many worried that brain scans could be used against them by an employer or insurer. Although most participants reported that a brain scan could represent their pain and suffering, 80% disagreed that their pain is only real if it is seen in a brain scan. These data suggest that PWLE of chronic pain support the potential benefits of brain imaging, but do not feel that a brain image should be a substitute for their self-reported pain experience. They are additionally concerned about data security and the use of the technology to invalidate their subjective pain experiences.


  • 1. Institute of Medicine (US) Committee on Advancing Pain Research, Care, and Education. Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research. Washington, DC; 2011.
  • 2. Lynch, M.E., The need for a Canadian pain strategy. Pain Res. Manag, 2011. 16(2): p. 77-80.
  • 3. Davis, K.D., Legal and ethical issues of using brain imaging to diagnose pain. PAIN Reports, 2016. 1(4): p. 1-6.
  • 4. Davis, K.D., et al., Brain imaging tests for chronic pain: medical, legal and ethical issues and recommendations. Nat Rev Neurol, 2017. 13(10): p. 624-638.
  • 5. Davis, K.D., et al., Discovery and validation of biomarkers to aid the development of safe and effective pain therapeutics: challenges and opportunities. Nature Reviews Neurology, 2020 Jun 15. doi: 10.1038/s41582-020-0362-2. Online ahead of print.

Disclosures: None


Pediatric Deep Brain Stimulation for Dystonia: Current State and Ethical Considerations

Katrina A. Muñoz (1), Jennifer Blumenthal-Barby (1), Eric A. Storch (2), Laura Torgerson (1), Gabriel Lázaro-Muñoz (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 2. Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine

Dystonia is a movement disorder that can have a debilitating impact on motor functions and quality of life. There are 250,000 cases in the US, most with childhood onset. Due to the limited effectiveness and side effects of available treatments, pediatric deep brain stimulation (pDBS) has emerged as an intervention for refractory dystonia. Although pDBS is currently offered under an FDA Humanitarian Device Exemption for children (≥ 7 years old) with refractory dystonia, there is limited clinical and neuroethics research in this area of clinical practice. This paper examines whether it is ethically justified to offer pDBS to children with refractory dystonia. Given the favorable risk-benefit profile, we conclude that it is currently ethically justified to offer pDBS for certain etiologies of dystonia (e.g., inherited dystonia without degeneration or structural lesions), but potentially not for others (e.g., acquired dystonia). In addition, we argue that various ethical and policy issues must be addressed to optimize the practice of offering pDBS for any type of dystonia. We discuss the role institutions play in making pDBS candidacy determinations, and the need to scrutinize the clinical and social support criteria that are used in order to avoid inappropriate or unfair patient selection. Family and patient expectations must also be properly managed to prevent unnecessary harm or potential suffering that unrealistic optimism could cause in certain situations. Beyond clinical risks, other burdens of pDBS are examined, such as the high cost of pDBS and uncertainties in health insurance coverage, which can generate access-to-care concerns for most families. Finally, potential ethical concerns that may be exacerbated in the pediatric setting of DBS are discussed, such as how pDBS may influence a child's sense of identity.
Strategies are proposed to address these challenges as pDBS continues to expand, including the need for clinicians to share clinical and quality of life outcomes in pDBS data registries.


  • Blumenthal-Barby, J. S., & Ubel, P. A. (2018). In Defense of “Denial”: Difficulty Knowing When Beliefs Are Unrealistic and Whether Unrealistic Beliefs Are Bad. The American Journal of Bioethics, 18(9), 5. doi:10.1080/15265161.2018.1498934
  • Chen, T., Mirzadeh, Z., Lambert, M., Gonzalez, O., Moran, A., Shetter, A. G., et al. (2017). Cost of Deep Brain Stimulation Infection Resulting in Explantation. Stereotact Funct Neurosurg, 95(2), 117-124. doi:10.1159/000457964
  • Elkaim, L. M., Alotaibi, N. M., Sigal, A., et al. (2019). Deep brain stimulation for pediatric dystonia: a meta-analysis with individual participant data. Dev Med Child Neurol, 61(1), 49-56. doi:10.1111/dmcn.14063
  • Gilbert, F., Viaña, J. N. M., & Ineichen, C. (2018). Deflating the “DBS causes personality changes” bubble. Neuroethics. doi:10.1007/s12152-018-9373-8
  • Kaminska, M., Perides, S., Lumsden, D. E., Nakou, V., Selway, R., Ashkan, K., et al. (2017). Complications of Deep Brain Stimulation (DBS) for dystonia in children: the challenges and 10 year experience in a large paediatric cohort. European Journal of Paediatric Neurology, 21(1), 168-175. doi:10.1016/j.ejpn.2016.07.024

Disclosures: None


Researcher Perspectives on Ethical Considerations Related to the Development of Adaptive Deep Brain Stimulation Devices

Katrina A. Muñoz (1), Kristin Kostick (1), Clarissa Sanchez (1), Lavina Kalwani (2), Laura Torgerson (1), Rebecca Hsu (1), Demetrio Sierra Mercado (1,3), Simon Outram (4), Barbara Koenig (4), Stacey Pereira (1), Amy McGuire (1), Peter Zuk (1), Gabriel Lázaro-Muñoz (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 2. Department of Neuroscience, Rice University; 3. Department of Anatomy & Neurobiology, University of Puerto Rico School of Medicine; 4. Program in Bioethics, University of California, San Francisco

Interest and investment in closed-loop or adaptive deep brain stimulation (aDBS) systems have quickly expanded due to this neurotechnology's potential to more safely and effectively treat refractory movement and psychiatric disorders compared to conventional DBS. A large neuroethics literature outlines potential ethical concerns about the development of neurotechnologies such as DBS and aDBS systems. Few studies, however, have examined stakeholder perspectives on ethical issues in aDBS research and other next-generation neural devices. To help fill this gap, we conducted semi-structured interviews with researchers involved in aDBS trials (n=23) to gain insight into the most pressing ethical questions in aDBS research and any potential concerns about specific features of aDBS devices, including the devices' ability to measure brain activity, automatically adjust stimulation, and store neural data. Using thematic content analysis, we identified 8 central themes in researcher responses. The need to measure and store neural data for aDBS raised concerns among researchers about data privacy and security issues (91%), including the avoidance of unintended or unwanted third-party access to data. Researchers reflected on the risks and safety (83%) of aDBS, given the experimental nature of automatically modulating stimulation and then observing its effects outside a controlled clinical setting, and in relation to the need for surgical battery replacement. Researchers also stressed the importance of ensuring informed consent and adequate patient understanding (74%). Concerns related to automaticity and device programming (65%) were discussed, including current uncertainties about biomarker validity. Additionally, researchers discussed the potential impacts of automatic stimulation on patients' sense of control over stimulation (57%).
Lastly, researchers discussed concerns related to patient selection (defining criteria for candidacy) (39%), challenges of ensuring post-trial access to care and device maintenance (39%), and potential effects on personality and identity (30%). In response to researcher concerns, we discuss the need to minimize cybersecurity vulnerabilities, advance biomarker validity, promote the balance of device control between patients and clinicians, and enhance ongoing informed consent. We examine the implications of these findings for the development of aDBS systems and other next-generation DBS devices.


  • Chiong, W., Leonard, M. K., & Chang, E. F. (2018). Neurosurgical Patients as Human Research Subjects: Ethical Considerations in Intracranial Electrophysiology Research. Neurosurgery, 83(1), 29-37. doi:10.1093/neuros/nyx361.
  • de Haan, S., Rietveld, E., Stokhof, M., & Denys, D. (2015). Effects of Deep Brain Stimulation on the Lived Experience of Obsessive-Compulsive Disorder Patients: In-Depth Interviews with 18 Patients. PLoS ONE, 10(8), e0135524. doi:10.1371/journal.pone.0135524.
  • Hendriks, S., Grady, C., Ramos, K., Chiong, W., Fins, J., Ford, P., et al. (2019). Ethical Challenges of Risk, Informed Consent, and Posttrial Responsibilities in Human Research With Neural Devices: A Review. JAMA Neurology, 76(12), 1506–1514. doi:10.1001/jamaneurol.2019.3523.
  • Klein, E. (2020). Ethics and the emergence of brain-computer interface medicine. Handbook of Clinical Neurology, 168, 329–339. doi:10.1016/B978-0-444-63934-9.00024-X.
  • Provenza, N., Matteson, E., Allawala, A., Barrios-Anderson, A., Sheth, S., Viswanathan, A., et al. (2019). The Case for Adaptive Neuromodulation to Treat Severe Intractable Mental Disorders. Frontiers in Neuroscience, 13, 152. doi:10.3389/fnins.2019.00152.

Disclosures: None


Cutting Edge Science Raises Cutting Edge Ethical Questions: An Analysis of Ethical Questions That May Arise in the NIH BRAIN Initiative Scientific Portfolio

Saskia Hendriks (1), Nina S. Hsu (1), Mara Olenick (1), Joshua A. Gordon (2), Walter J. Koroshetz (1), James Churchill (2), Khara M. Ramos (1)
1. NINDS, National Institutes of Health; 2. NIMH, National Institutes of Health

Background: The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative aims to revolutionize understanding of the human brain. To achieve this goal, it is accelerating the development and application of innovative tools and neurotechnologies. Ethical challenges can surface when conducting research to develop and use innovative technologies, including BRAIN Initiative-funded research. We aimed to identify and characterize some of these potential ethical questions and challenges [1]. Urgent concerns were addressed proactively rather than simply noted.

Methods: The 498 publicly available abstracts of research funded by the NIH BRAIN Initiative in fiscal years 2017-2019 were screened for their likelihood of raising ethical issues. Of these, 278 abstracts were considered unlikely to raise ethically challenging questions or issues (e.g., basic science research into cellular interactions). Members of the BRAIN Initiative Neuroethics Working Group [2] reviewed the remaining 220 abstracts and identified potential ethical questions or issues that may arise from this research.

Results: The identified potential questions or concerns were grouped into two overarching categories: how to conduct research with the highest ethical standards (i.e., research ethics) and the potential ethical and societal implications of new tools/neurotechnologies, which may extend beyond the duration of the trial. Themes were identified within each of these overarching categories. The themes related to research ethics included: 1) challenges in applying general research ethics principles such as informed consent; 2) post-trial responsibilities to research participants; 3) managing and broad sharing of neural data; 4) animal research ethics; and 5) stewardship of engineered brain models. Themes related to the impact of new neurotechnologies included: 1) effects of tools and technologies on agency, identity, personhood, and other related concepts; 2) increasing use of neuroimaging outside of traditional clinics or research centers (e.g., wearable imaging or possible use in settings such as pharmacies); 3) privacy and security risks of novel neural devices; 4) the identification of biomarkers or neural correlates (e.g., for psychiatric diseases) and their effects on patients; 5) risk of hyping ethical concerns; and 6) potential future non-clinical applications. While the research ethics questions may be more urgent, consideration of the ethical and societal implications can enable researchers to develop tools/technologies in a way that maximizes their individual and societal value and minimizes potential harms.

Conclusion: Emerging neuroethics questions should be considered and addressed to ensure that research is conducted to the highest ethical standards and produces the most social value.


  • [1] https://braininitiative.nih.gov/brain-programs/neuroethics
  • [2] https://braininitiative.nih.gov/about/neuroethics-working-group

Disclosures: "Disclaimer: The authors thank the members of the BRAIN Initiative Neuroethics Working Group.Conflict of interest: none."


Committing Crimes with BCIs: How Brain-Computer Interface Users can Satisfy Actus Reus and be Criminally Responsible

Kramer Thompson
Macquarie University

Brain-computer interfaces allow agents to control computers without moving their bodies. The agent imagines certain things, and the brain-computer interface reads the concomitant neural activity and operates the computer accordingly. But the use of brain-computer interfaces is problematic for criminal law, which requires that someone can be found criminally responsible only if they have satisfied the actus reus requirement: that the agent has performed some (suitably specified) conduct. Agents who affect the world using brain-computer interfaces do not obviously perform any conduct, so when they commit crimes using brain-computer interfaces it is unclear how they have satisfied actus reus. Drawing on a paper by Allan McCay, I suggest three potential accounts of the conduct that satisfies actus reus: the agent's neural firings, his mental states, and the electronic activity in his brain-computer interface. I then present two accounts of how actus reus may be satisfied: one counterfactual, the other a minimal sufficiency account. These accounts gain plausibility from their analogy to the but-for and NESS (Necessary Element in a Sufficient Set) tests, which are generally accepted tests for causation in legal theory. I argue that, given the determinations of these accounts and considerations regarding the relationship between the mind and the brain, actus reus is satisfied by either the neural activity or the brain-computer interface's electrical activity. Which of these satisfies actus reus is determined by how well the brain-computer interface is functionally integrated with the agent.


  • None

Disclosures: None


Understanding Patient and Public Perceptions about Psychiatric Electroceutical Interventions and their Effect on Self

Emily Castillo (1), Robyn Bluhm (1), Eric Achtyes (1), Aaron M. McCright (1), Laura Cabrera (1)
1. Michigan State University

One major focus of the neuroethics literature is whether electrical stimulation of the brain might affect some aspect of patients' selves. Neuroethicists have mainly considered this with respect to DBS, given documented cases of striking personality changes post-surgery [1,2,3]. However, neuroethicists have not similarly examined effects on the self from other psychiatric electroceutical interventions (PEIs), such as electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), and adaptive brain implants (ABI). Since this issue is relevant not only to the patient receiving treatment but also to clinicians and members of the public, we investigated how patients and the public view the effects of PEIs on the self when used for treatment-resistant depression. We conducted semi-structured interviews with participants who had varying degrees of experience and familiarity with PEIs. We then used content analysis to identify key perceptions, concerns, and beliefs around effects on the self. A number of patients connected "changes to self" to changes in mood, reasoning that improving depressive symptoms would positively change a patient's mood and, therefore, personality. Some patients and members of the public believed that PEIs do change aspects of the self but were uncertain whether these changes were positive or negative. A number of participants connected side effects of PEIs with negative effects on the self, particularly memory loss after ECT. We also asked participants to compare PEIs to more common therapies such as psychotherapy; participants emphasized self-reflection as an important component of psychotherapy that could lead to deeper changes in personality. Some participants noted that, as with medications, differences in the duration and permanence of the neural effects would produce different effects on the self.
When comparing DBS for movement disorders to PEIs for depression, a number of participants thought depression therapies would be more likely to affect personality because they stimulate areas of the brain more closely linked to personality. Overall, participants voiced greater concern about changes to the self due to ECT and DBS than to TMS. Our results add to the literature by documenting a broader range of beliefs about the effects of DBS on the self and by exploring how other PEIs are perceived to impact the self. Understanding patient and public perceptions is important to help inform clinicians' conversations with patients, help patients distinguish between treatment options, and contribute to public understanding of these interventions.


  • 1. Schechtman, M. 2010. Philosophical reflections on narrative and deep brain stimulation. Journal of Clinical Ethics 21 (2): 133–139.
  • 2. Witt, K., J. Kuhn, L. Timmermann, M. Zurowski, and C. Woopen. 2013. Deep brain stimulation and the search for identity. Neuroethics 6 (3): 499–511.
  • 3. Ford, Paul J., and C.S. Kubu. 2006. Stimulating debate: ethics in a multidisciplinary functional neurosurgery committee. Journal of Medical Ethics 32 (2): 106–109.

Disclosures: None


Patient and Public Perceptions of the Invasiveness of Psychiatric Electroceutical Interventions

Marissa Cortright (1), Robyn Bluhm (2), Eric Achtyes (3), Aaron M. McCright (4), Laura Y. Cabrera (5)
1. Michigan State University; 2. Department of Philosophy, College of Arts & Letters, Lyman Briggs College; 3. Division of Psychiatry & Behavioral Medicine, College of Human Medicine; 4. Department of Sociology, College of Social Science; 5. Center for Ethics & Humanities in the Life Sciences, Department of Translational Science & Molecular Medicine, College of Human Medicine.

Psychiatric electroceutical interventions (PEIs), therapies that use electrical or magnetic stimulation of the brain, are being used or developed for the treatment of major depressive disorder. PEIs include electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), deep brain stimulation (DBS), and adaptive brain implants (ABI). Some characterize technologies like DBS as invasive and others like TMS as non-invasive or minimally invasive. This reflects a medical connotation of invasiveness as physical intervention into the functioning of internal organs. The neuroethics literature relies upon this dominant medical connotation of invasiveness as physical intervention into a body, and therefore there has not been an in-depth analysis of the concept of invasiveness. In this study, we aim to provide insights into how different stakeholders perceive invasiveness. We conducted semi-structured interviews across three stakeholder groups: psychiatrists, patients diagnosed with clinical depression, and members of the general public. We used content analysis to identify major views about PEIs across these groups. Here we present results about patients' and the public's views on the invasiveness of PEIs. Our participants' judgments of a PEI's invasiveness went beyond the prevailing medical connotation. Our participants perceived invasiveness to be multidimensional, including physical invasiveness (use of IV, implants, etc.), psychological invasiveness (perceptions of intruding into personal information and emotions), and lifestyle invasiveness (treatment time, travel to treatment, etc.). Further, we found that three key characteristics of interventions influenced our participants' judgments of invasiveness. One aspect was the localization of the intervention's effects; for some participants, the less localized the effects of a treatment, the more invasive it seemed (e.g., pharmaceuticals compared to PEIs). 
Another feature was the perceived length of effects, the permanence of the intervention, and/or the temporal frequency of use. The last characteristic was familiarity with the intervention; for some participants, familiarity reduced perceived invasiveness. Both the invasiveness dimensions and the intervention characteristics inform participants' judgments of an intervention's overall invasiveness. This study is an important contribution to furthering discussions about how the invasiveness of PEIs is perceived. This matters because assessments of invasiveness are used to craft medical guidelines and are used by policy bodies like the FDA, and because understandings of invasiveness are crucial for effective communication between physicians and patients.


  • None

Disclosures: None


Embryo Screening for Polygenic Conditions and Traits: Neuroethical Considerations

Gabriel Lázaro-Muñoz (1), Stacey Pereira (1), Shai Carmi (2), Todd Lencz (3)
1. Baylor College of Medicine; 2. Hebrew University; 3. Zucker School of Medicine at Hofstra/Northwell

Potential parents who use in vitro fertilization (IVF) have a median of 5 embryos to select from for implantation. Implicit in decisions about which embryo to implant is often a difficult ethical determination about which life is more worth living. The more genomic information that can be obtained from these embryos, the more ethically complex these decisions become. Recently, a company began offering embryo screening for polygenic conditions and traits (e.g., intelligence, height)(1). Unlike testing for aneuploidies (e.g., Trisomy 21) or monogenic conditions (e.g., cystic fibrosis), polygenic embryo screening (PES) allows potential parents to learn about each embryo's risks for some of the most common and deadly health conditions (e.g., diabetes, heart disease, cancer, mental health disorders). However, there are difficult scientific and ethical questions that we need to examine as a neuroethics community(2). First, PES can generate information about risk for brain-based stigmatized conditions like psychiatric disorders. This could lead to potential parents selecting against embryos with increased risk for these disorders due to stigma and overestimation of the burdens of these disorders. Second, PES can generate information about likelihood of "desirable" cognitive traits like intelligence, therefore adding new dimensions to preimplantation genetic testing and the need to balance between desirable traits and undesirable risk for health conditions when selecting among embryos. Third, the use of PES will be mostly limited to those who use IVF, which can be prohibitively expensive, leading to questions about distributive justice. 
Fourth, the genetic samples from which polygenic risk scores (PRSs) are generated come mostly from people of European descent and are thus less accurate for individuals from other groups; this may exacerbate existing health disparities.(3) Finally, at a societal level, in places like the United States, where preimplantation genetic screening is largely unregulated, PES will amplify questions about whether the state or professional organizations should regulate its use. We will discuss these challenges and how the capacity to generate information about cognitive traits and mental health disorders makes PES a critical technology for neuroethical examination.


  • (1) Treff NR, Eccles J, Lello L, et al. Utility and First Clinical Application of Screening Embryos for Polygenic Disease Risk Reduction. Front Endocrinol. 2019;10:845.
  • (2) Karavani E, Zuk O, Zeevi D, et al. Screening Human Embryos for Polygenic Traits Has Limited Utility. Cell. 2019;179(6):1424-1435.
  • (3) Martin AR, Kanai M, Kamatani Y, et al. Clinical use of current polygenic risk scores may exacerbate health disparities. Nat Genet. 2019;51(4):584-591.

Disclosures: None


Moral Rationalism on the Brain

Joshua May
University of Alabama at Birmingham

The psychological thesis of moral rationalism states that moral judgments are essentially dependent on reasoning, not emotions conceived as distinct from inference. I argue that the neurobiological evidence, when combined with philosophical analysis, illuminates and strengthens the empirical case for moral rationalism. The argument proceeds in two main steps.

First, studies of neurological disorders that affect moral capacities reveal the entanglement of reason and emotion (e.g. Saver & Damasio 1991). Psychopaths exemplify the entanglement thesis well, given their deficits in both affective and inferential capacities (e.g. Maibom 2005). This entanglement might seem to trivialize the philosophical debate, but it conflicts with the core sentimentalist claim that moral (or, more broadly, evaluative) judgment requires emotions unlike other forms of cognition. The collapse, or at least entanglement, of reason and emotion ultimately fits better with the rationalist tradition, which maintains that morality isn't special: moral cognition requires affect only in the way that all reasoning or inference does.

Second, the neurobiological evidence suggests that moral cognition arises from domain-general capacities in the brain. There is not a particular brain area or network that is solely dedicated to either moral judgment or evaluative judgment generally (e.g. Decety & Lamm 2007; Greene 2009; Young & Dungan 2012). The relevant brain areas support inferring the consequences of an agent's action, the agent's mental states (e.g. intent), and the social norms relevant to the context. This favors the rationalist thesis that, just like cognition in other domains (e.g. mathematics, geography, language), inference is integral to moral cognition.

Ultimately, converging evidence in neuroscience fits better with moral rationalism, which insists that moral cognition is continuous with non-moral cognition in requiring domain-general capacities for reasoning, learning, and inference.


  • Decety, J., & Lamm, C. (2007). The role of the right temporoparietal junction in social interaction. The Neuroscientist, 13(6), 580–593.
  • Greene, J. D. (2009). The cognitive neuroscience of moral judgment. In M. S. Gazzaniga (Ed.), The Cognitive Neurosciences. MIT Press, pp. 987-999.
  • Maibom, H. L. (2005). Moral Unreason: The Case of Psychopathy. Mind & Language 20(2), 237–257.
  • Saver, J. L., & Damasio, A. R. (1991). Preserved access and processing of social knowledge in a patient with acquired sociopathy due to ventromedial frontal damage. Neuropsychologia, 29(12), 1241–1249.
  • Young, L., & Dungan, J. (2012). Where in the brain is morality? Everywhere and maybe nowhere. Social neuroscience, 7(1), 1-10.

Disclosures: None


Rights and Meanings: Advanced Technologies for Disorders of the Brain

Louise Harding (1), Caterina Marra (1), Jacob McFarlane (1), Patrick J. McDonald (1), Christopher R. Honey (1), Judy Illes (1)
1. University of British Columbia

From primary care to specialty care using innovative and emerging medical technologies, equitable access should be a human right. While the right to health has been outlined in numerous international and national policy documents, Canada and the USA have tiered healthcare systems that continue to underserve rural, remote and racialized populations.

In this session we will explore issues pertaining not only to access, but to the values, priorities and meanings associated with modern technologies for major disorders involving the brain. We focus on functional neurosurgery that may involve invasive irreversible interventions such as resection of brain tissue for epilepsy, and less invasive modulatory interventions such as deep brain stimulation (DBS) for movement disorders, chronic pain and mental health disorders. These advanced neurotechnologies aim to ameliorate unbearable suffering and restore dignity to the lives of people living with intractable conditions; however, concerns have been raised about their suitability for certain populations alongside questions about availability (Chan et al., 2014; Honey et al., 2018). Furthermore, limited knowledge exists about how various cultural groups may differentially conceptualize and relate to the use of functional neurosurgical interventions.

We used a spiral design to engage with expert communities and knowledge holders in a study involving semi-structured interviews. Interviews entailed mixed-mode consultations with neurosurgeons, neurologists, other subspecialist physicians, nurses, and experts in Indigenous and rural health. The cohort comprised 27 Canadian participants; consultations with two international participants enriched the perspectives we derived through qualitative analytic methods. In tandem, we conducted a scoping literature review to understand global Indigenous perspectives on brain wellness, disease and injury.

Our findings reveal key barriers, facilitators and emerging opportunities surrounding access to functional neurosurgery for diverse populations. Major cultural themes about brain health centred around the interrelationships between mind, body, and spirit in Indigenous worldviews.

By bringing together perspectives from medical experts, Indigenous knowledges and rural and remote health, we find interlocking concerns and unique considerations that collectively inform a rights-based approach to meaningful access to functional neurosurgery. We argue that the right to brain health is realized not only by facilitating equity in access to care, but also by ensuring that medical interventions are relevant and meaningful to the intersecting realities of diverse populations. As Diné writer Lou Cornum wrote, from an Indigenous worldview:

"Advanced technologies are not finely tuned mechanisms of endless destruction. Advanced technologies should foster and improve human relationships with the non-human world."


  • Chan, A. K., McGovern, R. A., Brown, L. T., Sheehy, J. P., Zacharia, B. E., Mikell, C. B., Bruce S.S., Ford, B., & McKhann, G. M. (2014). Disparities in access to deep brain stimulation surgery for Parkinson disease: Interaction between African American race and Medicaid use. JAMA Neurology, 71(3), 291-299.
  • Honey, C. M., Malhotra, A. K., Tamber, M. S., Prud'homme, M., Mendez, I., & Honey, C. R. (2018). Canadian assessment of deep brain stimulation access: The Canada study. The Canadian Journal of Neurological Sciences, 45(5), 553-558.

Disclosures: This work was generously supported by The North Growth Foundation, NIH BRAIN RF1 #MH11780501, and the Canada Research Chairs program.


The Ethical Aspects of Brain Computer Interface Technology: A State-of-the-Art Review

Allen Coin (1), Joshua Myers (1), Veljko Dubljevic (1)
1. North Carolina State University

Brain-Computer Interface (BCI) technology has been a promising area of research in recent decades, with rapid advancements in the technology leading to a broadening of applications [1]. Researchers and clinicians are increasingly able to accurately interpret brain activity through both invasive (implanted) and non-invasive (outside the body) monitoring devices, allowing them to create better therapeutic solutions for patients suffering from disorders or diseases that inhibit their ability to interact with the world around them. With this technology comes a number of ethical issues that require consideration by all stakeholders involved (researchers, clinicians, patients, etc.). Burwell et al. [2] previously analyzed publications on the ethics of BCI technology, identifying common themes in the literature, including responsibility for the consequences of BCI use; potential loss of autonomy, identity, and personhood as a result of BCI use; and security issues regarding the collection, analysis, and possible transmission of neural signals [2]. Since then, however, coverage of the ethical issues of BCI in the literature has rapidly increased.

Building on prior work, we conducted a search for papers on BCI and ethics on PubMed and PhilPapers in order to track academic discussion of this technology since 2016. While Burwell et al. identified 42 relevant articles published before 2016, our slightly modified search using similar methodology yielded almost as many relevant articles discussing the ethics of BCI (n=34) published since 2016. Additionally, some emerging applications of BCI technology, including commercial ventures that seek to meld human intelligence with artificial intelligence [3], present unique new ethical concerns, and we seek to understand how the academic literature on the topic addresses them. Similar to our prior work [4], we will code each publication according to the ethical issues discussed, and identify trends and areas of concern or debate among researchers and ethicists, especially regarding topics like autonomy, privacy and security, and informed consent/research ethics. Through this type of systematized qualitative analysis [5], we hope to provide perspective on the complicated ethical implications of this technology and directions for its responsible development and use that can ensure it advances in an ethically sound manner.


  • [1] Ramadan, R. A., & Vasilakos, A. V. (2017). Brain computer interface: control signals review. Neurocomputing, 223, 26-44.
  • [2] Burwell, S., Sample, M., & Racine, E. (2017). Ethical aspects of brain computer interfaces: a scoping review. BMC medical ethics, 18(1), 1-11.
  • [3] Winkler, R. (2017). Elon Musk launches Neuralink to connect brains with computers. The Wall Street Journal, 27.
  • [4] Voarino, N., Dubljević, V. & Racine E. (2017): tDCS for memory enhancement: A critical analysis of the speculative aspects of ethical issues, Frontiers in Human Neuroscience, doi: 10.3389/fnhum.2016.00678.
  • [5] Saigle, V., Dubljević, V. & Racine, E. (2018): The impact of a landmark neuroscience study on free will: A qualitative analysis of articles using Libet et al.’s methods, AJOB – Neuroscience, 9(1): 29-41.

Disclosures: None


The Normative Implications of Recent Empirical Neuroethics Research on Morality

Veljko Dubljevic (1)
1. North Carolina State University

Empirical neuroethics models have always had normative ambitions. Older models, such as Greene's dual process model (Greene 2008), attempted to debunk traditional moral theories, whereas newer models, such as the Agent-Deed-Consequence (ADC) model (Dubljević & Racine 2014), attempt to fit their empirical and normative claims with them. A pragmatist outlook on the moral domain places morality within the natural world, as studied by the natural and social sciences. The issue of normative significance as it pertains to the use of survey methodology on moral intuitions remains open.

The recent paradigm shift in moral psychology toward the view that moral judgment is tacit and intuitive (know-how), and not explicit (know-that), could be seen as posing several distinct problems for moral epistemology: "moral dumbfounding", relativism, parochialism, and the is-ought tension (Dubljević 2017; May 2018). Drawing on Dewey (1929), this paper describes a pragmatic and dynamic understanding of the epistemology of moral intuitions and habits, and offers solutions to the problems that affect strong forms of essentialist naturalistic ethics. The paper argues that a progressive model of ethics, which is open-ended but not relativistic, has to rely on the best considered judgments, which are revisable based on available evidence and the evolution of social life. Furthermore, the three different meanings of "is" (natural fact, social fact, and social practice) are thoroughly analyzed in order to refute the anti-naturalistic challenge. Finally, some important implications for ethics are considered. Tacit (vs. explicit) knowledge seems to be at odds with the traditional view of the role of ethics, while the pragmatic account of ethics escapes these problems. Additionally, this new and emerging view of morality has implications for both moral competence, which is attributable to all normally functioning adult human beings, and moral expertise, which is traditionally attributed to ethicists.

The paper concludes with a fine-grained analysis of the Is/Ought gap and of the empirical underpinnings of influential constructivist approaches, in order to argue that the normative ambitions of empirical neuroethics models are not necessarily misguided. Ultimately, morality can be fruitfully studied and informed by the natural and social sciences and, through ethical reflection, ought also to inform empirical research.


  • John Dewey, The Quest for Certainty: A Study of the Relation of Knowledge and Action (New York: Milton, Balch & Company, 1929).
  • Veljko Dubljević and Eric Racine, ‘The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds and consequences’, AJOB Neurosci. 5 (2014), pp. 3–20, doi: 10.1080/21507740.2014.939381.
  • Veljko Dubljević, ‘Is it time to abandon the strong interpretation of the dual process model in neuroethics?’, in E. Racine and J. Aspler (eds.), Debates about neuroethics: Perspectives on its development, focus, and future (Heidelberg, Germany: Springer, 2017), pp. 129–140.
  • J. D. Greene, ‘The secret joke of Kant’s soul’, in W. Sinnott-Armstrong (ed.), Moral psychology Vol. 3, The neuroscience of morality: Emotion, brain disorders, and development, (Cambridge, MA: MIT Press, 2008), pp. 35–79.
  • Joshua May, Regard for Reason in the Moral Mind, (Oxford: Oxford University Press, 2018).

Disclosures: None


Ethical and Cognitive Challenges in the COVID-19 Emergency

Chiara Lucifora (1), Gustavo Cevolani (2)
1. University of Messina; 2. IMT School for Advanced Studies, Lucca

The global emergency related to the spread of COVID-19 raises critical challenges for decision makers, individuals, and entire communities, on many different levels. Among the urgent questions facing politicians, scientists, physicians and other professionals, some concern the ethical and cognitive aspects of the relevant choices and decisions. Philosophers and cognitive scientists have long analyzed and discussed such issues. As an example, the debate on moral decision-making in imaginary scenarios, like the famous "Trolley problem" (Foot, 1967), becomes dramatically concrete in the current crisis. Focusing on the Italian case, we discuss the emergency guidelines proposed by the Italian Society of Anesthesiology, Analgesia, Resuscitation and Intensive Care (SIAARTI), highlighting some crucial ethical and cognitive issues surrounding clinical decision-making in the current situation.

The SIAARTI recommendations have two main purposes (Vergano et al. 2020). On the one hand, they aim to assist doctors and nurses in their daily decisions, making explicit a shared code of conduct; on the other hand, the document aims at making explicit, both to professionals and the public, the logic behind the proposed guidelines. In discussing them, we proceed as follows. We first present the SIAARTI recommendations and the ensuing discussion, focusing in particular on some crucial ethical issues raised by the document. We then briefly review recent discussions of moral reasoning and ethical decision-making in so-called Trolley scenarios in the philosophical, psychological, and neuroscientific literature (Greene, 2013; Somaggio and Marchioni, 2020). Finally, we discuss some critical points in the SIAARTI document which raise challenging issues concerning the ethical and cognitive dimensions of reasoning and decision-making in emergency situations.

Using the debate surrounding the SIAARTI document as a case study, we conclude by highlighting some more general conclusions. In particular, we argue that the design, implementation, and communication of ethical guidelines require a solid understanding of real human decision-making processes and of the different factors affecting them (Kahneman 2011). This is particularly true in emergency situations like the COVID-19 pandemic, where the role of emotions, training and choice architecture in determining moral reasoning and decision-making becomes especially relevant. We suggest that explicitly considering these and other factors may help in better shaping future interventions and communicating in a more effective and transparent way.


  • Foot, P. 1967. The Problem of Abortion and the Doctrine of the Double Effect. Oxford Review 5. Retrieved from https://philpapers.org/archive/footpo-2.pdf.
  • Greene, J. D. 2013. Moral tribes: Emotion, reason, and the gap between us and them. Penguin.
  • Kahneman, D. 2011. Thinking, fast and slow. Macmillan.
  • Somaggio, P., & Marchioni, S. (2020). Tragic choice in the time of pandemic. BioLaw Journal 2/2020. In press.
  • Vergano M, Bertolini G, Giannini A, et al. Clinical Ethics Recommendations for the Allocation of Intensive Care Treatments, in Exceptional, Resource-Limited Circumstances. Italian Society of Anesthesia, Analgesia, Resuscitation, and Intensive Care (SIAARTI). March 16, 2020.

Disclosures: None


The Public Impact of Academic and Print Media Portrayals of TMS: Shining a Spotlight on Discrepancies in the Literature

Abigail Scheper (1), Veljko Dubljević (1)
1. North Carolina State University

Transcranial magnetic stimulation (TMS) is FDA-approved for the treatment of drug-refractory depression, migraine headaches, and OCD. However, off-label applications of TMS [1] are booming as the media continues to promote neurostimulation products and services [2]. This raises concerns over whether TMS is being used safely, appropriately, and ethically in all of its applications and how media presentation may be impacting this [3].

We tracked the academic and media literature on TMS over the last five years to evaluate the ethical impact of reporting on the public. We searched PubMed and NexisUni databases and retained 1404 academic articles and 411 print media articles after applying exclusion criteria. We then systematically coded for (i) type of source, (ii) year of publication, (iii) purpose of TMS application, (iv) age of subjects, (v) population, (vi) overall tone, and (vii) specification of TMS parameters and analyzed the data qualitatively [4].

Our results indicated discrepancies between academic and media reporting on TMS.

Tone: Media literature took a more optimistic stance than the academic literature (media: 61.5% optimistic, 35.0% neutral, 3.5% critical; academic: 1.7% optimistic, 97.5% neutral, 0.8% critical).

Specification of parameters: Academic literature was unsurprisingly more specific (academic: 88.2%; media: 51.8%). However, even the academic sample exhibited a lack of consensus on TMS paradigms for similar conditions.

Purpose of TMS: Although reviews of TMS and other NIBS technologies in the ethics community frequently focused on enhancement, the public discourse on the ethics of TMS was dominated by concerns linked to therapeutic uses (90.8%). Interestingly, though, we noticed a convergence in the academic and media literature when it came to the scope of TMS: the reporting spanned far beyond the three FDA-approved applications, including discussions of TMS for body dysmorphia, PTSD, autism, ADHD, Parkinson's disease, and substance use.

Cumulatively, these findings raise questions about the accuracy of information being presented in the media and the safety profile of TMS for unapproved uses. They also leave important ethical issues for the general public (i.e., justice, beneficence, informed consent) underexplored. Providing adequate information to the public about the specific forms of risk associated with TMS is therefore imperative, as presenting TMS as a simply "safe and effective" treatment belies its scientific complexity and is arguably unethical. We call upon the ethics community to increase scrutiny of TMS services and of reporting on TMS in order to ensure that the quality of care for patients with mental illness is not compromised.


  • [1] Luber B and Lisanby SH (2014) Enhancement of human cognitive performance using transcranial magnetic stimulation (TMS). Neuroimage 85(3): 961–70.
  • [2] Dubljević V, Saigle V, Racine E (2014) The rising tide of tDCS in the media and academic literature. Neuron 82(4): 731–736.
  • [3] Dubljević, V. (2015): Neurostimulation devices for cognitive enhancement: Toward a comprehensive regulatory framework, Neuroethics, 8 (2): 115-126.
  • [4] Saigle, V., Dubljević, V. & Racine, E. (2018): The impact of a landmark neuroscience study on free will: A qualitative analysis of articles using Libet et al.’s methods, AJOB – Neuroscience, 9(1): 29-41.

Disclosures: None


Ethics and Neuroethics in the Time of COVID-19: What is Different and What Remains the Same

Christine Grady (1), Nina S. Hsu (2), Saskia Hendriks (2), Khara M. Ramos (2)
1. NIH Clinical Center; 2. NINDS, NIH

The current COVID-19 pandemic raises myriad questions on clinical, public health, and research ethics, which may benefit from collaboration among scientists, clinicians, and ethicists. We surveyed these issues within neuroscience-related disciplines, first analyzing the ethical landscape relating to this pandemic1, and then inviting attendees of the Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative Investigators' Meeting to discuss and expand on our analysis.

Several principles of bioethics remain unchanged in light of the pandemic, including respect for persons, care for individual and collective well-being, and justice. Yet the pandemic has forced us to examine how these and other principles can be specified and applied to ethical issues facing us as scientists, clinicians, healthcare systems, and citizens. These include fair allocation of the scarce available resources, the increased magnitude and differential impacts of health disparities (e.g., underserved populations that are most affected and at risk for infection and adverse neurological outcomes), and research urgency and complexity.

Clinical ethical issues that are especially relevant for patients with brain diseases and disorders include challenges in obtaining high-quality care in times of crisis (e.g., conducting neurological examinations remotely), balancing the need for care with increased risks (e.g., possible delays in regular care for stroke patients), how to advocate for patients who cannot advocate for themselves (e.g., dementia patients), the need for advance care planning, and end-of-life care.

Public health ethical questions concern allocation of scarce resources (e.g., equipment, medications, and healthcare personnel), and restrictions on individual liberties. When resources are insufficient, questions arise about what priority to give to persons with pre-existing neurological conditions. Further, surveillance and contact tracing measures involve tradeoffs between privacy and public health needs.

Additionally, ethical questions emerge in conducting neuroscience research, including those related to research priority setting during a public health emergency. Given limited resources and increased risks, how should decisions be made about continuing or prioritizing research? How do delays and altered research protocols impact the risk-benefit profiles and scientific integrity of these studies? Finally, do research decisions differentially affect vulnerable and underserved populations?

The COVID-19 public health crisis provides opportunities for collaboration between ethicists and researchers in basic, translational, and clinical neuroscience to tackle ethical issues arising in this pandemic. As the pandemic continues to evolve, efforts should be made to identify relevant opportunities for neuroethics-neuroscience partnerships, and to continue to consider and address ethical issues that may be similar – or different – in these uncertain times.


  • 1. Kim, S., & Grady, C. (2020). Ethics in the time of COVID: What remains the same and what is different. Neurology, 94(23), 1007–1008.

Disclosures: The authors thank the participants and organizers of the Neuroethics-focused session at the BRAIN Investigators’ Meeting.


A Systematic Review of Stakeholder Perspectives on Cognitive Effects of Brain Stimulation

Garrett M. Flynn (1), Miranda Zhang (2), Kevin Rao (3), Eran Klein (4,5), Dong Song (1)
1. Department of Biomedical Engineering, University of Southern California; 2. Flintridge Preparatory School; 3. La Salle High School; 4. Center for Neurotechnology and Department of Philosophy, University of Washington; 5. Department of Neurology, Oregon Health and Science University

Whether implemented in deep brain stimulation (DBS) devices, transcranial direct current stimulation (tDCS) systems, or bidirectional brain-machine interfaces (BMIs), targeted electrical stimulation of the brain is an emerging technology for the treatment of neurological and psychiatric conditions. Despite the promise of neurostimulation, there is an accumulating body of literature on side effects, mostly on motor, psychiatric or psychosocial effects. Cognitive side effects of neurostimulation have been relatively understudied and underreported by patients, caregivers, and healthcare providers (Mehanna, Bajwa, Fernandez, & Shukla, 2017). Several neuroethics scholars have advocated for complementing clinical measures with qualitative methods that characterize complex, context-specific changes in language, memory, and other cognitive processes following brain stimulation (Shook, Galvagni, & Giordano, 2014). Additionally, there is increasing scientific and medical interest in the use of brain stimulation techniques to treat disorders of cognition, such as dementia (Hampson et al., 2018). Reviews of brain-computer interface (BCI) studies using social research methods have proven useful to guide future qualitative research (Kögel, Jox, & Friedrich, 2020; Kögel, Schmid, Jox, & Friedrich, 2019). However, qualitative brain stimulation studies where neural activity is directly modulated to affect cognition have not been the focus of these reviews.

We conducted a systematic review of the literature on stakeholder perspectives on cognitive effects of neurostimulation. A PubMed database search of qualitative studies on brain stimulation yielded 34 unique datasets from 41 publications that include quoted responses to open-ended interview or survey questions. Of these 34 datasets, 14 (41%) had at least one associated publication addressing actual or expected cognitive effects. Patient-reported cognitive effects were logged in 7 (50%) of the 14 datasets, yet only 1 (14%) of these seven datasets indicated that patients were asked about cognitive effects. On the other hand, 5 (36%) of the 14 datasets logged provider-reported cognitive effects, and 4 (80%) of these 5 datasets asked providers about expectations or views of cognitive enhancement. Quoted providers mention that brain stimulation worsens, restores, and enhances cognition with roughly equal frequency, while patients describe brain stimulation as worsening cognition slightly more often than restoring it. Our findings suggest that previous qualitative studies on brain stimulation lack focused attention to cognitive changes—specifically patient-reported cognitive impairment. As such, future studies of neurostimulation should aim to more consistently and rigorously evaluate cognitive effects. Researchers and clinicians involved in existing studies may also benefit from the development of qualitative tools that assess cognitive effects in device users.


  • Hampson, R. E., Song, D., Robinson, B. S., Fetterhoff, D., Dakos, A. S., Roeder, B. M., … Deadwyler, S. A. (2018). Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall. Journal of Neural Engineering. https://doi.org/10.1088/1741-2552/aaaed7
  • Kögel, J., Jox, R. J., & Friedrich, O. (2020). What is it like to use a BCI? - Insights from an interview study with brain-computer interface users. BMC Medical Ethics. https://doi.org/10.1186/s12910-019-0442-2
  • Kögel, J., Schmid, J. R., Jox, R. J., & Friedrich, O. (2019). Using brain-computer interfaces: a scoping review of studies employing social research methods. BMC Medical Ethics, 20(1), 18. https://doi.org/10.1186/s12910-019-0354-1
  • Mehanna, R., Bajwa, J. A., Fernandez, H., & Shukla, A. A. W. (2017). Cognitive impact of deep brain stimulation on Parkinson’s disease patients. Parkinson’s Disease, 2017. https://doi.org/10.1155/2017/3085140
  • Shook, J. R., Galvagni, L., & Giordano, J. (2014). Cognitive enhancement kept within contexts: neuroethics and informed public policy. Frontiers in Systems Neuroscience, 8(December), 1–8. https://doi.org/10.3389/fnsys.2014.00228

Disclosures: None


Human Today, Cyborg Tomorrow? Public Attitudes towards Brain Stimulation Devices (BSDs) and Brain-Computer Interfaces (BCIs)

Dana Pietralla (1), Sebastian Sattler (1)
1. University of Cologne

Background: Scientists, engineers, and healthcare professionals currently work on technological advances in Brain-Computer Interfaces (BCIs) and Brain Stimulation Devices (BSDs). While BCIs use an artificial link between the human brain and an external communication (e.g. spellers) or motor device (e.g. neural prosthetics), BSDs stimulate brain regions (e.g. through transcranial direct current stimulation, tDCS). Owing to ongoing technological developments and certain societal processes (such as aging populations, increased performance stress, and competition at work and in education), the market for such devices might further increase, for both medical and non-medical applications. Advancing our knowledge about public attitudes towards different forms of usage of various devices could inform legal and ethical debates concerning them. The Emerging Issues Task Force of the INS (2019) has also emphasized the importance of investigating the popularization of neurodevices.

Method: We conducted two large-scale web-based 2×2 between-subject experiments in Germany in which we provided respondents with vignettes describing the functioning of BSDs (Experiment 1, N=1,113) and BCIs (Experiment 2, N=1,106) along with their possible benefits, side effects, and areas of usage. In these experiments, we varied the purpose of use (treatment vs. enhancement) and the invasiveness (non-invasive vs. invasive). Subsequently, the willingness to use and moral acceptability of such technologies were assessed.

Results: Our results show that respondents were more willing to use BCIs, and found them more morally acceptable, than BSDs. Treatment scenarios elicited greater willingness to use and higher moral acceptability than enhancement scenarios. Respondents were less willing to use, and less accepting of, invasive compared to non-invasive BCIs. For BSDs, respondents were likewise less willing to use invasive than non-invasive devices, but invasiveness reduced moral acceptability only when the device was used for enhancement. We also found variation by sex, age, and the importance of health in life.

Conclusions: Ongoing technological advancement offers promising solutions, and this also applies to BCIs and BSDs. Our results, however, suggest that the demand for enhancement with such neurodevices is more limited than for treatment purposes. Thus, the idea of a widely artificially modified society currently remains more science fiction than reality. Our findings also show a stronger hesitancy towards invasive technologies, perhaps due to fears concerning surgery or the cyborgization of humans. While the development and application of such technologies is still in its infancy, our results offer interesting insights into related public attitudes and allow predictions about their further development.


  • Emerging Issues Task Force, International Neuroethics Society. (2019). Neuroethics at 15: The current and future environment for neuroethics. AJOB neuroscience, 10(3), 104-110.

Disclosures: None


Big Brains in Artificial Captive Environments

L. Syd M Johnson (1), Bob Jacobs (2), Lori Marino (3)
1. SUNY Upstate Medical University; 2. Colorado College; 3. Whale Sanctuary Project

Considerable empirical evidence demonstrates that large-brained mammals, such as cetaceans and elephants, experience poor mental and physical welfare in captivity. Diminished well-being in captive, large-brained, social mammals is related to the well-understood effects of chronic stress on neural development and architecture. Marino et al. argue that in cetaceans (e.g., bottlenose dolphins, belugas, and orcas), captivity and isolation in environmentally impoverished tanks have deleterious effects on cortical and subcortical structures such as the hippocampus, the basal ganglia, and the amygdala. These same effects have been observed in some institutionalized humans, including incarcerated adults, and in nonhuman primates used in biomedical research.

Numerous neural harms are found in captive animals. The detrimental effects of impoverished environments on the cerebral cortex, from the molecular to the neuromorphological level, are well documented in a number of species. Chronic stress, common in animals confined in artificial settings, has been associated with increased apoptosis in the hippocampus and amygdala, leading to memory deficits and mood dysregulation. Perceived lack of control over the environment also appears to contribute to dysregulation of the basal ganglia, which is associated with behavioral stereotypies such as circling, head bobbing, and self-mutilation.

Neuroethics considers issues related to the use of animals in neuroscientific research, but the effects and ethics of other forms of captivity on the brain in animals, such as in the confinement of cetaceans for entertainment in zoos and marine parks, have been neglected. Understanding the effects of all forms of captivity on animals in a way that is attentive to how captivity itself frustrates the fulfillment of important needs, alters behavior, and alters the brain should be a focus of neuroethics. It can and should inform our ethical, social, and legal thinking about captivity for these and other species, including humans. Similarly, what we know about institutionalization and environmental deprivation in humans should inform the way we view the captivity of nonhuman animals in various settings, especially those animals who are most like us (highly intelligent, big-brained, and social mammals) and who share our vulnerabilities to confinement in artificial environments.


  • 1. Marino, L., et al., The harmful effects of captivity and chronic stress on the well-being of orcas (Orcinus orca). Journal of Veterinary Behavior, 2020. 35: p. 69-82.
  • 2. Diamond, M.C., et al., Effects of environmental enrichment and impoverishment on rat cerebral cortex. Journal of neurobiology, 1972. 3(1): p. 47-64.
  • 3. McEwen, B.S., C. Nasca, and J.D. Gray, Stress effects on neuronal structure: hippocampus, amygdala, and prefrontal cortex. Neuropsychopharmacology, 2016. 41(1): p. 3-23.
  • 4. McBride, S.D. and M.O. Parker, The disrupted basal ganglia and behavioural control: an integrative cross-domain perspective of spontaneous stereotypy. Behavioural brain research, 2015. 276: p. 45-58.
  • 5. Johnson, L.S.M., Neuroethics of the Nonhuman. AJOB Neuroscience, 2019. 10(3): p. 111-1

Disclosures: None


Ethical and Legal Considerations of Complementary and Alternative Neurotherapies

Ashwini Nagappan (1), Louiza Kalokairinou (1), Anna Wexler (1)
1. University of Pennsylvania

Background: Alternative neurotherapies are increasingly being offered to the public outside mainstream healthcare settings. Such therapies include electroencephalography (EEG) neurofeedback, single-photon emission computed tomography (SPECT) that is utilized for psychiatric diagnostics, and experimental brain stimulation techniques, such as transcranial direct current stimulation (tDCS) and off-label transcranial magnetic stimulation (TMS). While the issues raised by the provision of these therapies parallel those surrounding other complementary and alternative medicine (CAM) techniques, alternative neurotherapies are unique in that they: (a) utilize techniques and devices employed by mainstream scientists and clinicians; (b) are marketed with an "aura of neuroscience;" and (c) are often targeted to parents of children with disorders (Wexler et al., 2020). Given that the validity and efficacy of these therapies is often not well-established (Thibault et al., 2017; Farah & Gillihan, 2012), the provision of these services raises important ethical and legal questions.

Objective: To map the landscape of prominent alternative neurotherapies and analyze the ethical and legal issues raised by the provision of these services.

Discussion: Currently, alternative neurotherapies are available for a broad range of conditions, including depression, brain injury, and insomnia. Despite the unproven validity of many of these therapies, providers, especially in the field of neurofeedback, often make misleading claims (Wexler et al., 2020), which might lead users to misinterpret the evidence base for these therapies. This is particularly concerning given that these services often target vulnerable individuals, such as patients with neuropsychiatric conditions and parents of children with disorders, who may be more susceptible to being misled or to suffering psychological harm. In some cases, such as those regarding brain stimulation, alternative neurotherapies may present risks of physical harm. Potential non-physical harms include psychological harms (e.g., from misdiagnosis); opportunity costs of choosing unproven therapies over established ones; and financial costs of an ineffective treatment. Some techniques, such as neurofeedback and SPECT diagnostics, raise questions regarding conflicts of interest, as often providers are the primary individuals to have published peer-reviewed literature regarding their marketed service (Thibault et al., 2018). Finally, regarding legal aspects of alternative neurotherapies, we discuss issues regarding adequate licensing and the provision of services within one's scope of practice.

Conclusion: We recommend greater oversight of complementary and alternative neurotherapies from state licensing boards, clearer communication by providers about the limitations of neurotherapies, and further research to support the efficacy of these services.


  • Farah, M. J. & Gillihan S. J. (2012). The puzzle of neuroimaging and psychiatric diagnosis: technology and nosology in an evolving discipline. AJOB Neuroscience, 4.
  • Thibault, R. T., Lifshitz, M., & Raz, A. (2017). Neurofeedback or neuroplacebo? Brain,140(4), 862-864. doi:10.1093/brain/awx033
  • Thibault, R. T., Lifshitz, M. and Raz, A. (2018). "The climate of neurofeedback: scientific rigour and the perils of ideology." Brain 141(2), e11-e11.
  • Wexler, A., Nagappan, A., Kopyto, D., & Choi, R. (2020). Neuroenhancement for Sale: Assessing the Website Claims of Neurofeedback Providers in the USA. Journal of Cognitive Enhancement.

Disclosures: None


Neurotherapy or Placebo? The Ethical Implications of Offering EEG Neurofeedback for Clinical Indications and Neuroenhancement

Louiza Kalokairinou (1), Anna Wexler (1)
1. University of Pennsylvania

Electroencephalography (EEG) neurofeedback is a type of biofeedback that records brain activity, aiming to teach users how to control their brain functions. In recent decades, EEG neurofeedback has been offered by providers as a treatment for clinical indications, and for cognitive and performance enhancement (Wexler et al. 2020). Currently, the number of providers globally is estimated to exceed 15,000 (ISNR 2017). Yet, to date, the scientific validity of EEG neurofeedback remains widely contested, as most studies indicate that it provides no greater benefit than sham (Thibault and Raz 2017). Thus, it is likely that the effect of neurofeedback is due to placebo—that is, individuals receive psychobiological benefits that stem from the overall therapeutic context (Finniss et al. 2010) rather than from the treatment itself. The use of placebo in the clinical setting is controversial, as it often requires some level of deception, challenging the autonomy of patients (Asai and Kadooka 2013). This is a particularly pertinent concern in the case of neurofeedback, since the evidence base of the technique is frequently misrepresented by providers employing hype language and making potentially misleading claims (Wexler et al. 2020). Despite the extensive literature exploring the efficacy of neurofeedback in both clinical and healthy populations, currently the ethical implications of offering placebo neurotherapies remain understudied.

Through the case of EEG neurofeedback, we explore the ethical aspects of offering placebo neurotherapies and neuroenhancement to the public. We begin by analyzing the ethical implications of offering placebo neurotherapies for clinical indications. In this regard, we consider issues related to the opportunity cost when neurofeedback is used in place of well-established alternatives; potential psychological harm when such services are offered to vulnerable populations, such as patients suffering from neuropsychiatric diseases or parents of children suffering from conditions like autism; and risks of financial harm. Next, we explore the applicability of relevant ethical concerns in the context of offering placebo for the purposes of neuroenhancement. In this context, we discuss the interplay between the placebo effect, truth-in-advertising and consumer protection.

We propose a set of recommendations for the ethical provision of EEG neurofeedback for both clinical indications and neuroenhancement, with specific emphasis on the content of informed consent and the conditions for transparent disclosure of information. These recommendations are intended to inform neurofeedback practitioners and regulators, aiming to contribute to the ethical provision of EEG neurofeedback services.


  • Asai, Atsushi, and Yasuhiro Kadooka. "Reexamination of the ethics of placebo use in clinical practice." Bioethics 27, no. 4 (2013): 186-193.
  • Finniss, Damien G., Ted J. Kaptchuk, Franklin Miller, and Fabrizio Benedetti. "Placebo effects: biological, clinical and ethical advances." Lancet 375, no. 9715 (2010): 686.
  • International Society for Neurofeedback and Research (ISNR) “In defense of neurofeedback.” Available at: https://isnr.org/in-defense-of-neurofeedback (2017)
  • Thibault, Robert T., and Amir Raz. "The psychology of neurofeedback: Clinical intervention even if applied placebo." American Psychologist 72, no. 7 (2017): 679.
  • Wexler, Anna, Ashwini Nagappan, Deena Kopyto, and Rebekah Choi. "Neuroenhancement for Sale: Assessing the Website Claims of Neurofeedback Providers in the USA." Journal of Cognitive Enhancement (2020): 1-10.

Disclosures: None


Ethical Issues in Intraoperative Neuroscience Studies: Assessing Subjects’ Recall of Informed Consent and Motivations for Participation

Anna Wexler (1), Rebekah J. Choi (1), Ashwin G. Ramayya (2), Nikhil Sharma (2), Brendan J. McShane (2), Love Y. Buch (2), Melanie P. Donley-Fletcher (2), Joshua I. Gold (2,3), Gordon H. Baltuch (2), Eran Klein (4,5), Sara Goering (4,5)
1. Department of Medical Ethics and Health Policy, University of Pennsylvania; 2. Department of Neurosurgery, University of Pennsylvania; 3. Department of Neuroscience, University of Pennsylvania; 4. Center for Sensorimotor Neural Engineering, University of Washington; 5. Department of Philosophy, University of Washington

Background: In recent decades, there has been an increasing number of studies utilizing intracranial electrophysiology in human subjects (Chiong et al. 2018). Some of these studies take advantage of intraoperative situations—such as Parkinson's disease (PD) patients undergoing deep brain stimulation (DBS) surgery—to advance basic neuroscience knowledge. However, the use of neurosurgical patients as human research subjects raises important ethical considerations, particularly regarding informed consent and undue influence (Chiong et al. 2018), as well as subjects' motivations regarding participation. Yet, to our knowledge, no study has empirically examined these considerations in a real-world research context.

Objective: To empirically examine ethical concerns regarding informed consent and undue influence in PD patients undergoing DBS placement who participated in an intraoperative neuroscience study.

Methods: This study was conducted as an ethics supplement to a parent NIH-funded intraoperative study in PD patients undergoing neurosurgery for DBS, which sought to better understand how activity patterns of neurons in various structures of the human brain contribute to higher functions like learning and decision-making. The present ethics study consisted of two semi-structured 30-minute interviews conducted preoperatively (approximately one week following informed consent and one week prior to surgery) and postoperatively (approximately five weeks following surgery) via telephone by an embedded ethicist. Preoperative interviews assessed subjects' motivations for participation in the parent study and their recall of information presented during the informed consent process; postoperative interviews assessed subjects' surgical experiences and reflections on participation in the research study.

Results: Twenty-two subjects completed interviews (mean age=60.9; mean time since diagnosis=8.9 years). All participants cited altruism (i.e., helping others, specifically future Parkinson's patients) or advancing medical science as "very important" or "important" in their decision to participate in the study. Most (n=17; 77.3%) either reported that there was no risk or did not recall being informed about risk, despite the increased risk of infection and potential loss of confidentiality. Correct recall of other aspects of the informed consent process was poor (36.4% for study purpose; 50.0% for study protocol; 36.4% for study benefits).

Conclusion: Even though standard informed consent procedures were followed, participants demonstrated poor retention of study information. However, all subjects correctly understood that the study would not confer a direct therapeutic benefit to them. Given that the NIH has prioritized funding of intraoperative neurosurgical studies, future work should focus on ensuring subject understanding and retention of information presented during the informed consent process.


  • Chiong W, Leonard MK, Chang EF. Neurosurgical Patients as Human Research Subjects: Ethical Considerations in Intracranial Electrophysiology Research. Neurosurgery. 2018; 83(1):29-37.

Disclosures: None


Lessons Learned from the Ethical Challenges Posed by a First-in-Human Gene Transfer Trial for a Neurodegenerative Disease

Alexander A. Iyer (1), Dimah Saade (2), Diana X. Bharucha-Goebel (2,3), A. Reghan Foley (2), Gilberto 'Mike' Averion (2), Eduardo Paredes (2), Steven Gray (4), Carsten G. Bönnemann (2), Christine Grady (1), Annette Rid (1), Saskia Hendriks (1)
1. National Institutes of Health Clinical Center; 2. National Institute of Neurological Disorders and Stroke; 3. Children's National Hospital; 4. University of Texas Southwestern

Neurodegenerative diseases have emerged as promising targets of gene transfer (GT) [1]. The first "gene therapy" for such a disease received FDA approval in 2017 (Luxturna, RPE65-mutation-associated retinal dystrophy), and at least 22 GT trials involving neurodegenerative diseases are ongoing [2]. The severity and progressive nature of these diseases, as well as the limited existing treatment options and the perceived therapeutic promise of GT, raise underexplored research ethics challenges.

We present the insights of an interdisciplinary collaboration between clinical investigators, basic scientists, and ethicists involved in a pediatric first-in-human, intrathecal GT trial for Giant Axonal Neuropathy (GAN), an ultra-rare neurodegenerative disorder with childhood onset. We identify and analyze two ethical challenges navigated in this trial—both related to participant selection—that are likely to reemerge in similar trials.

First, the severity and relentlessly progressive nature of GAN—coupled with the lack of treatment options beyond supportive measures and the perceived therapeutic promise of GT—resulted in an unexpected challenge: the trial had more eligible and interested patients than available slots. Guidance for selecting among eligible patients in 'high-demand' trials remains limited [3]. Any participant-selection strategy can affect at least four ethically relevant dimensions of a trial: social value, risk-benefit profile, fairness, and enrollment time/effort. We offer considerations for future trials facing this challenge.

Second, and relatedly, the GAN GT trial raised questions surrounding how to improve a trial's risk-benefit profile through participant selection—either through eligibility criteria design or through selection among eligible patients. This proved complicated, as the risks and potential benefits of experimental GT for participants differ in complex ways depending on a participant's stage of disease. In trials aiming to slow or halt neurodegeneration, younger patients could benefit more than those with advanced disease, given irreversible nerve damage in the latter group. For patients with advanced disease, however, a GT trial may represent an 'only chance,' whereas patients with earlier-stage disease may have future opportunities to receive GT. With respect to risks, patients with earlier-stage disease might better tolerate complications, yet they also risk losing more quality-adjusted life years due to potential serious adverse events. We offer several preliminary approaches to weighing these considerations and the role of risk-benefit uncertainty in a given trial.

GT holds the promise of health improvements for patients with neurodegenerative diseases, but unresolved ethical questions surround its clinical testing. Our points-to-consider may assist investigators, reviewing committees, and others involved in GT trials.


  • [1] V. Sudhakar and R. M. Richardson, “Gene therapy for neurodegenerative diseases,” Neurother, 2019, doi: 10.1007/s13311-018-00694-0.
  • [2] W. Chen, Y. Hu, and D. Ju, “Gene therapy for neurodegenerative disorders: advances, insights and prospects,” Acta Pharm. Sin. B, 2020, doi: 10.1016/j.apsb.2020.01.015.
  • [3] A.A. Iyer, S. Hendriks, and A. Rid, “The Challenge of Selecting Participants Fairly in High-Demand Clinical Trials,” AJOB, 2020, doi: 10.1080/15265161.2019.1701746.

Disclosures: None


A Bird's-Eye View of the Field of Neuromodulation Technology: A Bibliographic Network Analysis on Transcranial Direct Current Stimulation (tDCS) Research Studies

Anita S. Jwa (1), Mark A. Hoffman (2)
1. Stanford Law School; 2. Department of Sociology, Stanford University

Recent developments in neuroscience have accelerated technological advances in the modulation of brain functions. Scientists and funding agencies have recognized the potential of these novel technologies as promising tools for the treatment of neurological diseases, and there has been a rapid increase in research studies in this area. However, given that most neuromodulation technologies are still at an exploratory stage, the risk of hyperbole and premature application should not be ignored. While researchers have begun to put more effort into improving the validity and rigor of research studies, a macroscopic view of the dynamics in this burgeoning field is missing. Investigating the evolution of the field and associations among research studies will allow us to examine the maturity of the field and the diffusion of critical information within this research community. To fill this gap, the present study applies bibliographic network analysis methods to (1) assess the level of maturity of a neuromodulation technology and (2) investigate the prevalence and diffusion of ethical concerns, such as uncertainty around the efficacy of neuromodulation technology, within this research community. This study utilizes transcranial direct current stimulation (tDCS), an extensively studied neuromodulation technology, as a case study. Inspired by Shwed and Bearman's (2010) groundbreaking study on scientific consensus building, we measure the modularity score, an index of the internal division of a citation network that reflects the level of contestation, in the network of tDCS research studies. Semantic network analysis and temporal analysis of citation tendency are used to examine the emergence and spread of ethical concerns in the tDCS research community.

The preliminary results show high modularity, which indicates that there is a substantial level of contention among tDCS studies and thus that tDCS has not yet matured enough to reach the stage where its efficacy, utility, and limitations are well defined. This study also reveals a significant increase in inconsistent findings among recent publications and identifies network patterns of the diffusion of uncertainty. As the first attempt to apply network analysis methods to the field of neuromodulation technology, this study demonstrates the potential value of these methods in neuroethics research. It will provide valuable information for various stakeholders, such as scientists, patients, or funding agencies, to make informed decisions on whether to conduct, participate in, or fund neuromodulation research and to promote the safe and responsible investigation of neuromodulation technology.
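The modularity measure described above can be sketched with a toy citation network. This is a minimal illustration only, assuming the networkx library and an invented edge list; it is not the authors' actual dataset, partitioning algorithm, or pipeline:

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical toy citation network: nodes are papers, edges are citations.
# Two dense clusters stand in for contending camps of tDCS findings,
# linked by a single cross-camp citation.
edges = [
    ("A1", "A2"), ("A1", "A3"), ("A2", "A3"), ("A2", "A4"), ("A3", "A4"),
    ("B1", "B2"), ("B1", "B3"), ("B2", "B3"), ("B2", "B4"), ("B3", "B4"),
    ("A4", "B1"),  # the lone cross-cluster citation
]
G = nx.Graph(edges)  # treat citations as undirected links for modularity

# Partition the network into communities and compute the modularity score Q.
# Values near 1 indicate strong internal division (contestation);
# values near 0 indicate an undivided, consensus-like network.
communities = community.greedy_modularity_communities(G)
Q = community.modularity(G, communities)
print(f"{len(communities)} communities, modularity Q = {Q:.2f}")
```

On this toy graph the two camps are recovered as separate communities with a high Q, mirroring the paper's interpretation of high modularity as unresolved contestation in the tDCS literature.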


  • Bikson, M. et al., (2018). Rigor and reproducibility in research with transcranial electrical stimulation: An NIMH-sponsored workshop. Brain Stimulation, 11(3), 465-480. doi:10.1016/j.brs.2017.12.008
  • Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
  • Shwed, U., & Bearman, P. S. (2010). The temporal structure of scientific consensus formation. American Sociological Review, 75(6), 817-840. doi:10.1177/0003122410388488
  • Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. New York, NY, US: Cambridge University Press.
  • Woods, A. J. et al. (2016). A technical guide to tDCS, and related non-invasive brain stimulation tools. Clinical neurophysiology : official journal of the International Federation of Clinical Neurophysiology, 127(2), 1031-1048. doi:10.1016/j.clinph.2015.11.012

Disclosures: None


'I'll Wear This As a Badge of Honour Because It's Made Me Better': Illness, Self-Concept and Device Embodiment in DBS for Treatment-Resistant Depression

Cassandra J. Thomson (1), Rebecca A. Segrave (1), Paul B. Fitzgerald (2), Karyn E. Richardson (1), Eric Racine (3), Adrian Carter (1)
1. Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University, Clayton; 2. Epworth Centre for Innovation in Mental Health, Epworth Healthcare and Department of Psychiatry, Central Clinical School, Monash University, Camberwell; 3. Pragmatic Health Ethics, Institut de recherches cliniques de Montréal; Depts of Medicine & Social and Preventive Medicine, Université de Montréal; Depts of Neurology and Neurosurgery & Medicine, and Biomed. Ethics Unit, McGill University

Background: Numerous research trials of deep brain stimulation (DBS) for treatment-resistant depression (TRD) have been conducted. While the efficacy and safety of DBS continues to be investigated, with mixed results, studies examining patient and family lived experiences remain absent. Concerns have been voiced about the potential for DBS to create changes in patients' personality and personal identity (1, 2); however, others suggest this is the intention of psychiatric DBS (3). Qualitative studies with patients who have undergone DBS for other clinical indications have revealed important insights into how the intervention impacts patients' perceptions of themselves, their bodies and relationships (4, 5). These complex and highly nuanced psychosocial experiences are not captured by psychopathology and functional scales used in clinical trials, but can have substantial implications for patient and caregiver wellbeing. Therefore, the aim of this study was to qualitatively examine how DBS for TRD impacts patient personality, self and relationships, from the perspective of both patients and caregivers as they prepare and adjust to life with DBS.

Methods: A prospective qualitative design was used. Participants were six patients and five caregivers (spouses, family). Patients were enrolled in a clinical trial of DBS of the bed nucleus of the stria terminalis. Semi-structured interviews were conducted with participants before DBS-implantation and 9-months after stimulation initiation. The 21 interviews were thematically analysed.

Results: Three primary themes identified during analysis were: (a) impact of mental illness and treatment on self-concept; (b) device embodiment; and (c) relationships and connection. Severe refractory depression had profoundly impacted who patients were, how they viewed themselves, and the quality and functioning of their relationships. Patients who benefited from DBS felt reconnected with their pre-morbid self, yet still far from their ideal self. Caregivers reported familiar elements of their loved-one re-emerging, but noted a persistence of qualities established during mental illness. While reductions in depression were broadly beneficial for relationships, the process of adjusting relationship dynamics created new challenges. All patients reported recharging difficulties and challenges adapting to the device.

Conclusions: Therapeutic response to DBS is a gradual and complex process that involves an evolving self-concept, adjusting relationship dynamics, and growing connection between body and device. This is the first qualitative study to provide in-depth insight into the lived experience of DBS for TRD. Narrative accounts should routinely be collected as they capture needs and priorities that can guide patient-centred approaches to DBS clinical interventions.


  • 1. Glannon W. Stimulating brains, altering minds. J Med Ethics. 2009;35(5):289-92.
  • 2. Schechtman M. Philosophical reflections on narrative and deep brain stimulation. J Clin Ethics. 2010;21(2):133-9.
  • 3. Synofzik M, Schlaepfer TE. Stimulating personality: ethical criteria for deep brain stimulation in psychiatric patients and for enhancement purposes. Biotechnology Journal. 2008;3(12):1511-20.
  • 4. de Haan S, Rietveld E, Stokhof M, Denys D. Effects of Deep Brain Stimulation on the Lived Experience of Obsessive-Compulsive Disorder Patients: In-Depth Interviews with 18 Patients. PLoS One. 2015;10(8):e0135524.
  • 5. Schüpbach WM, Agid Y. Psychosocial adjustment after deep brain stimulation in Parkinson's disease. Nat Clin Pract Neurol. 2008;4(2):58-9.

Disclosures: PBF has received equipment for research from MagVenture A/S, Nexstim, Neuronetics and Brainsway Ltd and funding for research from Neuronetics. He is a founder of TMS Clinics Australia.


What Medical Ethics Have to Offer Philosophy and a Pluralistic Global Community

James H. Rutherford
Grant Hospital, Columbus, Ohio


Historically, medical ethics have been able to incorporate facts and values, nature and nurture, and faith and reason. Medicine is both descriptive and prescriptive. Medical ethics are based on a respect for life, understood as a respect for both personal dignity and our common humanity. The four principles of medical ethics understand human nature to be multidimensional. These characteristics make medicine and medical ethics relevant to philosophy and bring some coherence to the fields of moral and political philosophy.

Medical ethics are also one example of a very useful four-part framework of analysis for moral and political philosophy, one based on a multidimensional understanding of human nature. This is seen in folk psychology, which is based on introspection alone and considers there to be physical, social, mental and spiritual aspects of human nature. One can see a similar multidimensional understanding of human nature in the philosophy of Aristotle, the progressive evolutionary development of the brain (MacLean and Eccles), the pattern of development of our mental and moral capacities in childhood through experience (Piaget and Kohlberg), the stages of life described by Erikson, and the causes of pain described by the AMA, as well as the four principles of medical ethics.

The difficult or quandary problems in moral philosophy are usually not a dispute about good vs. evil, but a conflict between two perspectives on the good, or over what is the lesser of two evils. There is often a tension between individual needs and those of society, or between the consequences of an action and one's perceived duty or principles about the right thing to do. This four-part framework of analysis thus does not always lead to agreement, but neither does it subscribe to a postmodern condition in which everything is merely subjective, arbitrary, relative, or based on material utility.

In political philosophy, medical ethics are a resource that can provide cross-cultural understanding and enable ethical dialogue. In the political context, medical ethics can provide a well-balanced source of affirmation, moderation, and accommodation in a pluralistic global community.

Medical ethics have a lot to offer at this particular time in history, in part, because they bring some coherence to moral philosophy and in political philosophy they have the capacity to accommodate pluralism in a global community without coercion or alienation.


  • Rutherford, James H. 1992. The Moral Foundations of United States Constitutional Democracy. Pittsburgh: Dorrance.
  • Rutherford, James H. 1999. “An Ecological Organic Paradigm.” Journal of Consciousness Studies, 6, No. 10: 81-103.
  • Rutherford, James H. 2004. Moral and Political Philosophy. Columbus, Ohio: Top 20 Publishing.

Disclosures: None


Neural Interventions and the Narrative Self: Why Means Matter for Meanings

Muriel Leuenberger (1)
1. University of Basel

The impact of neural interventions on the narrative self is a debated topic in neuroethics, especially regarding Deep Brain Stimulation.1 This paper adds a so-far neglected dimension to the debate: neural interventions can have a crucial and unique influence on the self-narrative by diminishing the meaning of events. The paper further examines what it is about the means of neural intervention that impacts meaning by comparing different methods. The events of our lives gain their personal meaning through their position in the self-narrative.2 By integrating events into a self-narrative, they receive the necessary context of intentions, beliefs, personal history and institutional settings, i.e. of intentionality, to be meaningful. To uphold the coherence of the self-narrative, self-change needs to be accounted for through explanations in intentional or biochemical terms, or a combination of both. However, only an explanation including intentional states delivers the content to directly ascribe meaning to events. Neural interventions can deprive events of meaning because they may favor a predominantly biochemical account. A loss of meaning is not inherently negative, but it can be problematic, particularly if one is unprepared and if events one was not willing to have stripped of meaning are affected. The degree to which this pull towards a biochemical view occurs depends on the specific means of neural intervention. By comparing Deep Brain Stimulation, Prozac, Ritalin, psychedelics, and psychotherapy, the paper identifies some main factors: the rate of change, the transparency of the causal chain, the involvement of the patient, and the presence of an acute phenomenological experience.


  • 1 Baylis, F. (2013). “I Am Who I Am”: On the Perceived Threats to Personal Identity from Deep Brain Stimulation. Neuroethics, 6(3), 513-526.
  • Hoffman, G. A. (2015). How Hyponarrativity May Hinder Antidepressants’ “Happy Ending”. Philosophy, Psychiatry, & Psychology, 22(4), 317-321.
  • Schechtman, M. (2010). Philosophical reflections on narrative and deep brain stimulation. Journal of Clinical Ethics, 21(2), 133-139.
  • 2 Bruner, J. S. (1990). Acts of Meaning. Cambridge, MA: Harvard University Press.
  • MacIntyre, A. C. (1984). After Virtue (2nd ed.). Notre Dame, IN: University of Notre Dame Press.

Disclosures: None


Neurocomputational Ethics, Automation and the Future of Work

Elizabeth Eskander (1), Veljko Dubljevic (1)
1. North Carolina State University

Artificial Neural Networks (ANNs) and automation technology in general raise important social and ethical concerns, especially about accountability, autonomy, dignity, and justice [1]. We focus on the specific concerns arising from how the emerging automation technology will affect the workforce of the future, including increasing the indirect pressure to use neuroenhancement. It is clear that automation will not only cause widespread disruptions in society but will also have disparate outcomes for different groups. We adopt the view that we must engage with stakeholders or their representatives to understand the implications of a technology that might directly or indirectly affect the stakeholders' lives, livelihoods, or wellbeing. In this spirit, we conducted 12 interviews with a diverse sample of workers to understand the ethical and societal challenges from their perspective during the implementation of automation. Approval was obtained from the Institutional Review Board of the authors' institution (IRB no. 20276) before interviews were conducted. No personally identifying information was collected during the interviews. As in prior work [2], data analysis was carried out concurrently with data collection to enable the integration of information from each step of the process. Codebook development and coding of responses followed an iterative process. Codes were developed by using abductive analysis, a form of qualitative content analysis that combines elements of both induction and deduction [3,4].

Our strategy employed five analytic stages: (i) familiarization with the data through transcript reading; (ii) identification of a thematic framework reflecting the ideas discussed; (iii) indexing the data, i.e., identifying patterns across the transcripts; (iv) charting the data, and (v) mapping and interpreting the data (making sense of the data holistically, see [5]). We find that although workers agree that automation will significantly impact the future of the workforce by opening new jobs, they are apprehensive about the prospects of immediate job loss and want their employers to be straightforward about how the introduction of automation will affect them. We found that workers have sophisticated views: they understand the limitations of automation and identify the threat to others' livelihoods, yet they express value in their own vocation and refuse to believe that it could also be automated. We submit that these findings, based on direct inputs from workers, should be considered seriously in decision-making about questions of socioeconomic justice. This research could be useful to policymakers as they shape the regulations that will govern the implementation of ANNs and other kinds of automation technology.


  • [1] Ouchchy L, Coin A, Dubljević V. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & Society. https://doi.org/10.1007/s00146-020-00965-5.
  • [2] Dubljević V, Saigle V, Racine E. 2014. The rising tide of tDCS in the media and academic literature, Neuron;82: 731-736.
  • [3] Timmermans, S. & Tavory, I. 2012. Theory Construction in Qualitative Research: From Grounded Theory to Abductive Analysis. Sociological Theory 30 (3):167-186.
  • [4] Peirce, C.S. 1934. Collected Papers of Charles Sanders Peirce. Vol. 5, Pragmatism and Pragmaticism, C. Hartshorne and P. Weiss (Eds.). Cambridge, MA: Harvard University Press.
  • [5] Ritchie, J. & Spencer L. 2011. Qualitative Data Analysis for Applied Policy Research, in Huberman, A.M & Miles, M.B. [Eds.]. The Qualitative Researcher’s Companion, Thousand Oaks: Sage, pp. 305-329.

Disclosures: None


Augmented Reality as the Next Wave of Neuroenhancement

Imre Bard (1), Peter B. Reiner (2)
1. London School of Economics and Political Science; 2. University of British Columbia

The propriety of neuroenhancement has been fodder for neuroethical analysis for two decades. Discussions have primarily focused upon biomedical interventions such as pharmaceuticals, genetic engineering, neuromodulation, and brain-computer interfaces. Important conceptual advances were achieved by the introduction of the notions of moral enhancement (Douglas, 2008) and epistemic enhancement (Danaher, 2015). However, these debates have recently plateaued, at least in part because technologies capable of achieving meaningful enhancement have not yet materialized. The most widely discussed enhancements, pharmaceuticals and electrical brain stimulation, remain niche practices and most importantly, have only moderate efficacy.

An alternative view suggests that instead of focusing ethical and societal reflection primarily on technologies that directly manipulate the brain, we should recognize that our minds are inseparably intertwined with and greatly enhanced by a variety of digital technologies that exert their effects more indirectly (Fitz and Reiner, 2016; Reiner and Nagel, 2017). These include the smartphone and a range of other ubiquitous computational devices and infrastructures that have come to profoundly shape the ways in which we access and disseminate knowledge and information, how we interact with each other, how we do business, work, play and learn.

Pursuing this line of reasoning, the present study takes up the question of whether augmented reality (AR) functions as a form of neuroenhancement. Current AR devices are capable of expanding human sensory and perceptual capacities, and thereby qualify as bona fide neuroenhancements. Moreover, AR devices are gradually being deployed in a number of enterprise contexts where they provide hands-free, just-in-time, contextual insight that enables new modes of interaction with the environment. These AR devices represent an example of technologies that provide tangible neuroenhancement and offer an opportunity to revisit the cognitive enhancement debate in a novel real-world context. This presentation will draw on a set of interview studies conducted with enterprise users of AR, who have incorporated head-mounted AR devices into their workflow. This offers insights into the lived experience of AR as an extension of users' cognitive and perceptual toolkit and allows for an investigation of neuroethical issues related to continued use of, and reliance upon, augmented reality.


  • Danaher, J. (2015) On the Need for Epistemic Enhancement: Democratic Legitimacy and the Enhancement Project. Law, Innovation and Technology, 7(1):85–112.
  • Douglas, T. (2008) Moral Enhancement. Journal of Applied Philosophy. 25(3):228–45.
  • Fitz, N.S., Reiner, P.B. (2016) Perspective: Time to expand the mind. Nature 531:S9–S9.
  • Reiner PB, Nagel SK (2017) Technologies of the Extended Mind: Defining the Issues In: Neuroethics: Anticipating the Future, (J. Illes ed.), pp. 108-122.

Disclosures: None


Social Nudging to Promote American Public Health During the COVID-19 Pandemic

Natalia Montes (1)
1. University of Washington

The negative reinforcement of social distancing (or the pressure to do so) has not been effective. Despite efforts in many American states to encourage social distancing, we have seen a rise in the number of Covid-19 cases across the nation in areas in which social distancing is not mandated nor enforced. Further, pressures to wear masks in public have been met with a backlash from members of the public who identify on both sides of the political spectrum. Efforts to incorporate Nudge Theory ought to be considered in city planning and local policy for the sake of public health and preventing the spread of Covid-19, especially in areas predisposed to negative public opinion of social distancing and mask-wearing. Nudge Theory is not dependent on top-down authority instruction and individual compliance, but instead relies on positive reinforcement. Public spaces could be retrofitted with technologies to provide citizens with social nudging to change their behavior in ways that promote public health. Nudge Theory has been successful in a number of countries such as Japan, which has developed an ultrasonic deterrent to teenage congregating as well as lighting techniques to prevent suicide in public train stations. In the UK, there exists the Behavioral Insights Team (the "Nudge Unit") dedicated to creating policy based on Nudge Theory. Given the positive results of using Nudge Theory to build social spaces and technologies to regulate human behavior, I argue that incorporating the principles of Nudge Theory ought to be considered in areas in which rapid changes in public behavior are crucially needed to taper the steep incline of coronavirus-related illnesses and deaths. Reliance on Nudge Theory is arguably more efficient than managing the resulting effects of culture wars or mediating deep-rooted avoidant behavior of certain communities reluctant to wear masks or social distance.
There are legitimate worries that such methods are objectionable forms of social control that undermine personal autonomy. Although nudge theory-based measures can promote public health, the moral ends may not justify the means of regulating behavior. For this reason, transparency of newly developed social nudging implementation is key in respecting the agency of individuals.


  • Richarz, Allan. “The Amazing Psychology of Japanese Train Stations.” Bloomberg.com, Bloomberg, 22 May 2018, www.citylab.com/transportation/2018/05/the-amazing-psychology-of-japanese-train-stations/560822/.
  • “Nudging Behaviour - Economic and Social Research Council.” ESRC, 9 July 2020, esrc.ukri.org/about-us/50-years-of-esrc/50-achievements/nudging-behaviour.
  • Bella, Timothy. “Places without Social Distancing Have 35 Times More Potential Coronavirus Spread, Study Finds.” Washington Post, Fred Ryan, 15 May 2020, www.washingtonpost.com/nation/2020/05/15/social-distancing-study-coronavirus-spread.
  • “Coronavirus Disease 2019 (COVID-19).” Centers for Disease Control and Prevention, 11 Feb. 2020, www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/cloth-face-cover-guidance.html.

Disclosures: None


Listening (Too Much) to Prozac: A Critical Appraisal of the Neuroethics Debate About Affective Neuroenhancement

Alexandre Erler (1)
1. The Chinese University of Hong Kong

Alongside its high public profile, the neuroethics debate about pharmacological cognitive enhancement has been criticized for frequently devolving into a speculative "bubble" (e.g. Lucke et al, 2011). This paper applies a similar empirical critique to a related debate, one that mostly predates the cognitive enhancement literature: namely the discussion of affective neuroenhancement, including "cosmetic psychopharmacology", following psychiatrist Peter Kramer's famous phrase in Listening to Prozac (Kramer, 1993) – a book that significantly shaped the ensuing ethical debate, including some of my own writings (e.g. Erler, 2012). I show that Kramer's book is largely responsible for promoting a highly influential motif, both in academic and popular treatments of affective neuroenhancement. According to that "wonder drug" motif, there already exists a pharmacological substance (or several) that can be used to substantially alter personality and mood in a desired manner, even among people with no diagnosable mental pathology. The changes in question are sustainable over the long term, making the said substance(s) a useful tool in the pursuit of professional and social success (and potentially creating pressure on everyone to take it).

While there is evidence that antidepressants like fluoxetine can decrease negative affect even in healthy subjects (Serretti et al., 2010), the wonder drug motif nevertheless seems to lack adequate empirical support, and it does not clearly apply to any other existing neuro-intervention. This suggests that the debate around affective neuroenhancement has at least partly operated inside another speculative bubble. I argue, however, that it still need not be reduced to pure philosophical speculation. Indeed, the debate can have present-day practical relevance in at least two ways: (1) Insofar as it eschews the wonder drug motif to consider the possibility of overprescription, and of medicating away "normal" negative affect; (2) Insofar as there is actual evidence of the feasibility of modifying even "normal" personality traits, e.g. by purposefully altering one's behaviour (e.g. Hudson et al., 2020) – although the relevant changes will likely be slower and less spectacular than the pharmacological makeovers described by Kramer. That said, barring ground-breaking new findings about some neuro-intervention's effects, this debate about the ethics of deliberately reshaping personality will for now need to move outside the scope of neuroethics strictly understood. Furthermore, any discussions of affective neuroenhancement that rely on the wonder drug motif should be flagged more clearly as either speculative reflections, or alternatively, as advocacy for a particular biomedical research program.


  • ERLER, A. 2012. One Man's Authenticity Is Another Man's Betrayal: A Reply to Levy. Journal of Applied Philosophy, 29 (3), 257-65.
  • HUDSON, N. W., FRALEY, R. C., CHOPIK, W. J. & BRILEY, D. A. 2020. Change Goals Robustly Predict Trait Growth: A Mega-Analysis of a Dozen Intensive Longitudinal Studies Examining Volitional Change. Social Psychological and Personality Science.
  • KRAMER, P. D. 1993. Listening to Prozac, New York, N.Y., U.S.A., Viking.
  • LUCKE, J. C., BELL, S., PARTRIDGE, B. & HALL, W. D. 2011. Deflating the Neuroenhancement Bubble. AJOB Neuroscience, 2 (4), 38-43.
  • SERRETTI, A., CALATI, R., GORACCI, A., DI SIMPLICIO, M., CASTROGIOVANNI, P. & DE RONCHI, D. 2010. Antidepressants in Healthy Subjects: What Are the Psychotropic/Psychological Effects? Eur Neuropsychopharmacol, 20, 433-53.

Disclosures: None


What Can Neural Measures of Consciousness Tell Us About the Inner Lives of Individuals with Disorders of Consciousness?

Jasmine Walter (1)
1. Monash University

Disorders of consciousness (DOCs) pose a significant clinical and ethical challenge because they allow for complex forms of conscious experience in patients where intentional behaviour and possibilities for communication are limited or non-existent (1). Considerable effort has been devoted to the search for neural indexes of consciousness that could quantitatively assess an unresponsive individual's "level" of consciousness (2-4). However, there is a growing consensus that consciousness is a multidimensional phenomenon composed of properties that can vary independently, so the same result obtained with a unidimensional measure could correspond to a range of qualitatively different conscious states (5). This is important because properties like executive function, attentional control, linguistic comprehension, perceptual integration, working and long-term memory may have a significant influence on decisions about a patient's treatment, environment, and in some cases the withdrawal of life-sustaining treatment. Despite this, there is currently limited understanding of how results obtained from potential neural measures of consciousness map onto conscious experience in DOCs.

I propose that to make progress on this question, we can investigate how these measures behave in other conscious states where subjective reports are available. Many proposed measures of consciousness have been applied to characterise different short-term changes in consciousness within individuals (like sleep, anaesthesia, and psychedelic states) and longer-term interindividual changes (like those resulting from long term neurological and psychiatric conditions). Building our understanding of neural measures using this approach will clarify their strengths and limitations as tools for assessing conscious experience in DOCs, as well as in other states where communication is impaired.

I argue that, across these heterogeneous conscious states, such measures are sensitive to the presence of specific dimensions of experience, like sensory richness and the diversity and variability of mental activity. While these measures cannot, based on current evidence, be thought to index consciousness per se, they may still play a role in guiding future research and informing clinical considerations. This approach might also be a first step towards understanding the relevance of key neural measures of consciousness to wellbeing or quality of life in DOCs. Further research into potential measures of consciousness in a wide range of states will be helpful both in avoiding pitfalls linked to drawing overgeneralised inferences about DOC consciousness and in enhancing the measures' utility as markers of specific neurocognitive capacities.


  • 1. Owen AM, Coleman MR, Boly M, Davis MH, Laureys S, Pickard JD. Detecting Awareness in the Vegetative State. Science. 2006; 313(5792):1402-1402.
  • 2. Casali AG, Gosseries O, Rosanova M, Boly M, Sarasso S, Casali KR, et al. A Theoretically Based Index of Consciousness Independent of Sensory Processing and Behavior. Science Translational Medicine. 2013; 5(198):198ra05-ra05.
  • 3. Sitt JD, King J-R, El Karoui I, Rohaut B, Faugeras F, Gramfort A, et al. Large scale screening of neural signatures of consciousness in patients in a vegetative or minimally conscious state. Brain. 2014; 137(8):2258-70.
  • 4. Seth AK, Barrett AB, Barnett L. Causal density and integrated information as measures of conscious level. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2011; 369(1952):3748-67.
  • 5. Bayne T, Hohwy J, Owen AM. Are There Levels of Consciousness? Trends in Cognitive Sciences. 2016; 20(6):405-13.

Disclosures: None


Ethical Perspectives on the Development of Digital Mental Health Tools

Kellen Mermin-Bunnell (1), Nicholas Rouillard (2), Nicole Martinez-Martin (2)
1. Yale University; 2. Stanford University

Digital mental health technology is expected to improve access to mental health care and expand treatment options. Since the pandemic began, there has been a surge in the use of digital mental health tools in order to allow for socially-distanced care. At the same time, many digital mental health tools are available in the consumer domain, where frameworks for ensuring accountability, safety and data protection are less established. Efforts to delineate and disseminate practical guidance for addressing ethical issues during the development and design of digital mental health technology remain important to the effective application of these tools. A qualitative study utilizing semi-structured interviews was conducted with 30 developers of digital mental health technology, ranging from mental health apps to platforms for delivering therapy and tools for diagnosing and predicting mental health issues. The expertise of participants included computer science, product development and clinical practice. The interviews were coded and analyzed to identify common themes. Primary issues that emerged in the interviews were the challenges associated with identifying appropriate expectations and standards for use of digital mental health tools; the interaction of business interests with therapeutic goals; and the implementation of digital mental health tools in diverse populations.


  • None

Disclosures: None


A Neuroethics Roadmap for Deep Phenotyping Research

Francis X. Shen (1,2,4,5), Benjamin Silverman (1,2), Patrick Monnette (2), Sara Kimble (3), Scott L. Rauch (1,2,3), Justin Baker (2,3)
1. Mass General Brigham; 2. Harvard Medical School; 3. McLean Hospital; 4. Harvard Law School; 5. University of Minnesota

Psychiatry has long needed a better and scalable way to capture the dynamics of behavior and its disturbances, quantitatively across multiple data channels, at high temporal resolution in real time. By combining 24/7 data on location, movement, email and text communications, and social media with brain scans, genetics/genomics, neuropsychological batteries, and clinical interviews, researchers will have an unprecedented amount of objective, individual-level data (Baker et al 2018; Torous et al 2017). Analyzing this data with ever-evolving artificial intelligence (AI) offers the possibility of intervening early with precision and could even prevent the most critical sentinel events. Ideally, this could one day include bringing interventions to patients where they are in the real world in a convenient, efficient, effective and timely way (Insel 2017).

Yet the road to this innovative future is fraught with ethical dilemmas (Martinez-Martin et al 2018; Nebeker et al 2019). For the past year, supported by an NIH Bioethics Administrative Supplement award (NIH 1U01MH116925-01), we have been exploring the ethical, legal, and social implications (ELSI) of deep phenotyping research, including an interdisciplinary expert stakeholder workshop in May 2020.

Consistent with recent ELSI analyses (Nebeker et al 2019; Martinez-Martin et al 2018), the bottom line of our embedded neuroethics research is that existing ethical guidance and legal regulation are not sufficient for deep phenotyping research. At present, there are regulatory gaps, inconsistency across research teams in ethics protocols, and a lack of consensus among IRBs about when and how deep phenotyping research should proceed. Efforts are underway to fill these gaps, notably those led by the Connected and Open Research Ethics (CORE) initiative at UC San Diego.

But until the field develops more robust consensus guidelines, the onus clearly falls on individual research teams to take the lead in shaping the applied ethics of deep phenotyping.

To guide these ethics considerations, we present a Neuroethics Roadmap for Deep Phenotyping Research. The Roadmap identifies six key (and interrelated) domains: (1) informed consent, (2) equity, diversity, and access, (3) privacy and partnerships, (4) regulation and law, (5) return of results, and (6) duty to warn and duty to report. The roadmap paves the way for collaborative dialogue to promote careful design and execution of research and to encourage researchers to give reasons for their decisions related to ELSI issues.


  • Baker, Justin T., Laura T. Germine, Kerry J. Ressler, Scott L. Rauch, and William A. Carlezon. "Digital devices and continuous telemetry: opportunities for aligning psychiatry and neuroscience." Neuropsychopharmacology 43, no. 13 (2018): 2499-2503.
  • Torous, John, J. P. Onnela, and Matcheri Keshavan. "New dimensions and new tools to realize the potential of RDoC: digital phenotyping via smartphones and connected devices." Translational psychiatry 7, no. 3 (2017): e1053-e1053.
  • Insel, Thomas R. "Digital phenotyping: technology for a new science of behavior." Jama 318, no. 13 (2017): 1215-1216.
  • Martinez-Martin, Nicole, Thomas R. Insel, Paul Dagum, Henry T. Greely, and Mildred K. Cho. "Data mining for health: staking out the ethical territory of digital phenotyping." NPJ digital medicine 1, no. 1 (2018): 1-5.
  • Nebeker, Camille, Rebecca J. Bartlett Ellis, and John Torous. "Development of a decision-making checklist tool to support technology selection in digital health research." Translational behavioral medicine (2019).

Disclosures: None


Neuroethics in the Courtroom: Using Neuroscience to Challenge Life Without the Possibility of Parole for Young Adults Age 18-25

Fenella McLuskie (1,2), Kailey Nicholson (1,3), Elizabeth Escalante (1,4), Brenna Evans (5), Ian Hayes (1,3), Sarah Lagan (1), Madeleine Muller (1,6), Jennifer Near (1,3), Emily Rehmet (1), Francis X. Shen (1,2,5,7)
1. Mass General Brigham; 2. Harvard Law School; 3. Harvard University; 4. Tufts University; 5. University of Minnesota; 6. Northwestern University; 7. Harvard Medical School

Neuroscience evidence played a role in a series of landmark cases on criminal sentencing (Steinberg 2017), in which the U.S. Supreme Court ruled that those under age 18 cannot be put to death (Roper v. Simmons, 2005), cannot be sentenced to life without the possibility of parole (LWOP) for non-homicide crimes (Graham v. Florida, 2010), and cannot be automatically sentenced to LWOP for homicide offenses (Miller v. Alabama, 2012). Now, advocates are exploring the role that neuroscience may play in cases involving emerging "adults" ages 18-25 (Scott et al. 2016).

In this study, we present the first ethical and legal empirical analysis of this new use of developmental neuroscience for defendants ages 18-25. We compiled a database of relevant cases: (i) decided between 2017 and 2020, (ii) involving a defendant/appellant aged 18-25 at the time of the crime, (iii) mentioning neuroscience in the opinion, and (iv) citing Miller. Through analysis of these cases, combined with semi-structured interviews with attorneys in these cases, we examine the ethics and efficacy of using neuroscience evidence to argue that mandatory LWOP is unconstitutional for emerging adults ages 18-25.

Our neuroethics analysis compares neuroscience as presented in the legal briefs to published neuroscientific studies on brain development in emerging adulthood. Previous analysis of neuroscience in juvenile justice has suggested that lawyers' use of neuroscience sometimes diverges from "lab neuroscience" (Shen, 2013, p. 999) and that some may deploy "brain overclaim" (Morse, 2006, p. 397). Our analysis suggests that attorneys face multiple ethical hurdles, including the translation of group-averaged neuroscience to individualized cases, and the need for courts to draw bright lines amidst science that suggests developmental trajectories.

As regards efficacy, we find that neuroscience has been mostly ineffective in lower federal courts because U.S. Supreme Court precedent (e.g. Miller) drew the line at age 18 on grounds other than solely developmental science. We find, however, that neuroscience may be effective in state constitutional claims and in promoting new legislation. In People v. Johnson (2020), for instance, an Illinois appellate court found that two codefendants (who were over 18 at the time of their offense) "made prima facie showings in their pleadings that evolving understandings of the brain psychology of adolescents require Miller to apply to them" (p. *20). In public policy, one recent development is California Senate Bill 889 (2020), which would extend juvenile courts' jurisdiction from those under 18 to those under 20.


  • Jones, O. D., Schall, J. D., & Shen, F. X. (forthcoming 2020). Law and neuroscience (2nd ed.). New York, NY: Wolters Kluwer.
  • Morse, S. (2006). Brain overclaim syndrome and criminal responsibility: A diagnostic note. Ohio State Journal of Criminal Law, 3(2), 397.
  • Scott, E. S., Bonnie, R. J., & Steinberg, L. (2016). Young adulthood as a transitional legal category: Science, social change, and justice policy. Fordham Law Review, 85, 641.
  • Shen, F. X. (2013). Legislating neuroscience: The case of juvenile justice. Loyola of Los Angeles Law Review, 46(3), 985.
  • Steinberg, L. (2017). Adolescent brain science and juvenile justice policymaking. Psychology, Public Policy, and Law, 23, 410.

Disclosures: None


Reverse Inference, Bayesian Confirmation, and the Neuroscience of Moral Reasoning

Gustavo Cevolani (1), Davide Coraci (1)
1. IMT School for Advanced Studies, Lucca

Philosophical debates on the neuroscience of moral reasoning mostly revolve around the "normativity problem": what, if anything, empirical findings tell us about the nature and validity of moral norms and (meta-)ethical theories (e.g. Berker 2009). In this talk, we argue that a different problem is likely in need of more urgent attention. This is what we call the "inference problem": even assuming that neuroscientific findings are indeed relevant for ethical theorizing, are these results robust enough to sustain the normative implications we want to derive from them? The answer hinges on the reliability of a widespread inferential practice, leading from the "low" level of patterns of neural activation to the "high" level of cognitive processes. The weakness of this "reverse inference" strategy has recently attracted a great deal of attention from both scientists and epistemologists, and its present methodological status is highly controversial (e.g. Poldrack 2006).

In this paper, we take a philosophy of science perspective and offer an assessment both of the methodological status of reverse inference (RI) and of its implications for neuroethics. We start by characterizing RI using both classical case studies (including, e.g., Greene et al. 2001) and recent meta-analyses on the neuroscience of moral reasoning (e.g. Eres, Louis, and Molenberghs 2018). Then, we survey different proposals in the philosophical literature that conceptualize RI either as Bayesian inference, in purely "likelihoodist" terms, or as a form of abductive reasoning. After highlighting the limits of such proposals, we argue that the notion of confirmation as studied in Bayesian philosophy of science (Crupi 2020) can help clarify various aspects of RI while conforming to the intuitions of practising neuroscientists. We conclude by exploring the implications of our analysis for current debates in neuroethics.
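The core worry behind RI can be stated compactly in Bayesian terms, along the lines of Poldrack's (2006) analysis; the notation below is ours. Let C denote engagement of a cognitive process (e.g. moral reasoning) and A activation in a given brain region:

```latex
% Posterior probability that process C is engaged, given observed activation A
% (Bayes' theorem over C and its complement):
P(C \mid A) \;=\; \frac{P(A \mid C)\, P(C)}
                       {P(A \mid C)\, P(C) \;+\; P(A \mid \neg C)\, P(\neg C)}
```

The inference from activation to process is strong only when the region is selective, i.e. when P(A | ¬C) is low; since regions identified in moral-judgment studies often activate under a wide range of tasks, the posterior can remain modest even when P(A | C) is high.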


  • Berker, Selim (2009). “The Normative Insignificance of Neuroscience”. In: Philosophy and Public Affairs 37.4, pp. 293–329. doi: 10.1111/j.1088-4963.2009.01164.x.
  • Crupi, Vincenzo (2020). “Confirmation”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Spring 2020. Metaphysics Research Lab, Stanford University.
  • Eres, Robert, Winnifred R. Louis, and Pascal Molenberghs (2018). “Common and distinct neural networks involved in fMRI studies investigating morality: an ALE meta-analysis”. In: Social Neuroscience 13.4, pp. 384–398.
  • Greene, Joshua D. et al. (2001). “An fMRI Investigation of Emotional Engagement in Moral Judgment”. In: Science 293.5537, pp. 2105–2108.
  • Poldrack, R. A. (2006). “Can cognitive processes be inferred from neuroimaging data?” In: Trends in Cognitive Sciences 10.2, pp. 59–63. issn: 1364-6613. doi: 10.1016/j.tics.2005.12.004.

Disclosures: None


How Do Parents View Cognitive Enhancers for their Children? Evidence from India

Jayashree Dasgupta (1), Georgia Lockwood Estrin (2), Zenia Yadav (1), Ilina Singh (3)
1. Sangath, New Delhi, India; 2. Birkbeck, University of London; 3. University of Oxford

Background: Large numbers of children in Low and Middle Income Countries (LMICs) may not have access to resources that help them reach their full developmental potential. Cognitive enhancers such as nootropics, nutritional supplements, and stimulants have the potential to be scaled up and widely distributed, offering one possible way to help children in LMICs attain their cognitive potential and achieve upward social mobility. However, cognitive enhancement research in children has almost exclusively been carried out in High Income Countries (HICs), with limited understanding of its acceptability in LMIC contexts. In a previous study by the same authors exploring the perspectives of clinicians, teachers, and ethicists in India, experts expressed skepticism about the use of nootropic drugs. However, little is known about the opinions toward cognitive enhancement of parents in LMICs, who are the key decision makers for their children.

Objectives: This study aims to explore parental opinions about cognitive enhancement for children in an LMIC context.

Methods: In-depth interviews (N=9) and a focus group discussion (N=1) were conducted with parents of primary school-going children in Delhi, India, to explore their perspectives on cognitive enhancement. Parents were recruited from high, middle, and low socioeconomic status (SES) groups to reflect the views of parents with varying access to resources within an LMIC context. Case vignettes (tailored to each SES group) describing different cognitive enhancement techniques were used to facilitate discussion around each type of cognitive enhancer. Transcripts were analyzed using exploratory thematic analysis.

Results: The current study is ongoing, and we present preliminary findings. Analysis revealed limited understanding or prior knowledge of cognitive enhancement among parents across all SES groups. Parents expressed concerns about ingestion of drugs, supplements, or stimulants for cognitive enhancement. Although education was not presented in any vignette as a cognitive enhancer, it emerged that all parents took cognitive enhancement to mean education and effective methods of teaching. Education, along with happiness and well-being, was highlighted as of utmost importance for a child's success in current and later life. Parents with the lowest access to resources emphasized that education was critical for their child to achieve a better standard of living in the future.

Conclusion: Preliminary insights from this study highlight the importance of education as a cognitive enhancer, and the necessity to invest in effective implementation of education in LMIC settings to optimize attainment of cognitive potential in children.


  • 1. Lu, C., Black, M. M., & Richter, L. M. (2016). Risk of poor development in young children in low-income and middle-income countries: an estimation and analysis at the global, regional, and country level. The Lancet Global Health, 4(12), e916–e922. https://doi.org/10.1016/S2214-109X(16)30266-2
  • 2. Schelle, K. J., Faulmüller, N., Caviola, L., & Hewstone, M. (2014). Attitudes toward pharmacological cognitive enhancement: a review. Frontiers in Systems Neuroscience, 8, 53. https://doi.org/10.3389/fnsys.2014.00053
  • 3. Colaneri, N., Sheldon, M., & Adesman, A. (2018). Pharmacological cognitive enhancement in pediatrics. Current Opinion in Pediatrics, 30(3), 430–437. https://doi.org/10.1097/MOP.0000000000000615
  • 4. Dasgupta, J., Lockwood Estrin, G., Summers, J., & Singh, I. (2019). Exploring ethical issues around use of cognitive enhancement drugs for children in low resource settings. Paper presented at the International Neuroethics Society Annual Meeting, Chicago.

Disclosures: None


Moral Psychopaths Against Ultimate Harm

Fatima Sabir (1)
1. Roskilde University

Two key figures in the moral bioenhancement debate, Ingmar Persson and Julian Savulescu, have argued for the use of neurointerventions to alter moral decision-making and behavior in ways that will ensure our fitness for the future, thereby preventing the "ultimate harms" of global warming, terrorism, and nuclear war (Persson & Savulescu, 2008). Michael Hauskeller has objected to their project, arguing that it compromises freedom, and indeed, our very humanity. As he puts it, "what makes us human in the first place is the ability to choose between good and evil" (Hauskeller 2017, p. 369). Moral bioenhancement, he argues, removes or restricts the ability to choose evil and thereby undercuts a fundamental aspect of our humanity.

I shall challenge the foundations of Hauskeller's critique: equating what makes us human with the ability to choose between good and evil is problematic. I point towards the unsettled nature of the question "what makes us human?" (Harris 2011), drawing attention to studies of non-human animals' ability to demonstrate morally relevant traits such as empathy, reciprocity, and cooperation (Brosnan & de Waal 2003), and I emphasize the relevance of neurodiversity (Silberman 2015). Moreover, I propose a thought experiment. This concerns moral psychopaths: individuals who know what is right and act morally, but lack all moral feelings. I argue that not only should Hauskeller prefer moral psychopaths to morally normal individuals, but that he should also support the use of what I shall call the Kant-pill. The Kant-pill changes morally normal individuals into moral psychopaths, and I shall argue that the Kant-pill is a form of moral bioenhancement that is consistent with the Hauskellerian framework of morality. I then briefly return to the theme of neurodiversity to address Hauskeller's claim that morally good deeds that merely emerge from feelings hold no moral worth; I shall argue that this exclusion is not merely questionable, but dangerous.


  • Brosnan, Sarah F. and de Waal, Frans B.M. (2003). “Monkeys reject unequal pay”. In: Nature 425, pp. 297-299.
  • Harris, John (2011). “Taking the ‘human’ out of human rights”. In: Cambridge Quarterly of Healthcare Ethics 20(1), pp. 9–20.
  • Hauskeller, Michael (2017). “Is It Desirable to Be Able to Do the Undesirable? Moral Bioenhancement and the Little Alex Problem”. In: Cambridge Quarterly of Healthcare Ethics 26.3, pp. 365–376.
  • Persson, Ingmar and Julian Savulescu (2008). “The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity”. In: Journal of Applied Philosophy 25.3, pp. 162–177.
  • Silberman, Steve (2015). Neurotribes - The Legacy of Autism and how to Think Smarter About People who Think Differently. Allen & Unwin.

Disclosures: None


"Livewire – A Stimulating Night of Neurotechnology": Combining Neuroscience, Ethics, and Interactive Media for Participatory Technology Design

Garrett M. Flynn (1), Jay Goldstuck (1), Morgan Kuligowski (1), Dominic Diaz (2), Katerina Zacharia (3), Marientina Gotsis (1), Dong Song (1)
1. University of Southern California, Los Angeles, CA; 2. Alterea Inc, Los Angeles, CA; 3. Loyola Marymount University, Los Angeles, CA

Direct-to-consumer (DTC) neurotechnology has come under intense scrutiny by academics and policymakers (Teunisse, Youssef, & Schmidt, 2019). Its capacity to record and modulate neural activity raises important challenges of governance, data ownership, and justice. Growing corporate and startup participation in the neurotechnology industry has brought these concerns into public awareness. Recognition that risky technologies must incorporate the values and concerns of a wide array of stakeholders has encouraged broader public participation. Nevertheless, reliance on expert opinion and technical information has obstructed genuine stakeholder participation in decision-making (Roeser & Pesch, 2016). On the other hand, artworks engaged with emerging technologies can enable concrete emotional-moral experiences and critical reflection on technological developments (Roeser, 2016). New media artists have incorporated real-time brain signals into their work since the 1960s (Nijholt, 2019). Yet the link between neurotechnology interaction and critical reflection on morality has only recently been explored, at the 2016 BrainHack Hackathon (Roeser, Alfano, & Nevejan, 2018). To the best of our knowledge, there have been no efforts to deliberately produce neurotechnology art for qualitative research investigating emotional-moral responses in public audiences. We are producing Livewire: A Stimulating Night of Neurotechnology to fill this gap. Set in 2045 Los Angeles, Livewire is an immersive entertainment experience in which university students and the general public of Los Angeles engage with fictional advancements in social neurotechnology. Participants will interact with a multi-brain neurofeedback system and discuss this experience with an interdisciplinary panel of neuroscientists, neurologists, and neuroethicists.
We will use surveys and semi-structured interviews to determine how these interactions impact the values participants hold related to future neurotechnology. Challenges faced during the design and implementation of this immersive experience will also be discussed. Overall, we believe that Livewire will serve as a useful model to inform future participatory technology assessments and encourage collaborations between scientists, artists, and ethicists.


  • Nijholt, A. (2019). Introduction: Brain-Computer Interfaces for Artistic Expression. In Brain Art. https://doi.org/10.1007/978-3-030-14323-7_1
  • Roeser, S. (2016). How art can contribute to ethical reflection on risky technologies. In Mind you. The art of ethics in the information society.
  • Roeser, S., Alfano, V., & Nevejan, C. (2018). The Role of Art in Emotional-Moral Reflection on Risky and Controversial Technologies: the Case of BNCI. Ethical Theory and Moral Practice. https://doi.org/10.1007/s10677-018-9878-6
  • Roeser, S., & Pesch, U. (2016). An Emotional Deliberation Approach to Risk. Science Technology and Human Values, 41(2), 274–297. https://doi.org/10.1177/0162243915596231
  • Teunisse, W., Youssef, S., & Schmidt, M. (2019). Human enhancement through the lens of experimental and speculative neurotechnologies. Human Behavior and Emerging Technologies, 1(4), 361–372. https://doi.org/10.1002/hbe2.179

Disclosures: The authors disclose sponsorship by OpenBCI.


Standards for the Future: Neurotechnologies and Challenges on Emerging Technologies

Zully Amairany Montiel Fernández (1), Christian Peñaloza (2)
1. National Autonomous University of Mexico, Faculty of Political and Social Sciences; 2. Mirai Innovation Research Institute

Neurotechnologies allow us to access our brain data, provide ways to research and treat brain-related diseases, and enable novel rehabilitation systems. They are changing the way we live, prompting industries to envision new applications in fields such as education, medicine and wellness, entertainment, marketing and advertising, and national security, among other areas. The implementation of neurotechnologies in our daily lives opens opportunities to study new cases; however, the development of a technology with such an impact also requires great responsibility, which makes us question how society, enterprises, and governments will deal with implications in fields that have not been deeply discussed before. Because neurotechnology will affect diverse industries, it is necessary to consider its varied implications for society and to ensure the safety and correct use of the technology. To this end, prestigious institutions such as the Institute of Electrical and Electronics Engineers (IEEE) have proposed the guideline "Standards Roadmap: Neurotechnologies for Brain-Machine Interfacing," which discusses the implementation of standards and certain priorities in neurotechnologies. At Mirai Innovation Research Institute, however, we have noticed that this report still lacks coverage of some topics whose inclusion would enrich the research and allow new scenarios to be studied from a wider view. In this paper, we therefore provide an in-depth analysis of the IEEE document, report interviews with experts in the field, and offer recommendations on topics such as ethics, social impact, safety, and legality. These recommendations will support the early adoption of neurotechnologies in the near future.


  • 1. Penaloza, C. I., & Nishio, S. (2018). BMI control of a third arm for multitasking. Science Robotics, 3(20), eaat1228.
  • 2. Tonet, O., Marinelli, M., Citi, L., Rossini, P. M., Rossini, L., Megali, G., & Dario, P. (2008). Defining brain–machine interface applications by matching interface performance with device requirements. Journal of Neuroscience Methods, 167(1), 91–104. doi:10.1016/j.jneumeth.2007.03.015
  • 3. Lelievre, Y., Washizawa, Y., & Rutkowski, T. M. (2013). Single trial BCI classification accuracy improvement for the novel virtual sound movement-based spatial auditory paradigm. In Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2013 Asia-Pacific (pp. 1–6). IEEE.
  • 4. Lazarou, I., Nikolopoulos, S., Petrantonakis, P. C., Kompatsiaris, I., & Tsolaki, M. (2018). EEG-based brain–computer interfaces for communication and rehabilitation of people with motor impairment: a novel approach of the 21st century. Frontiers in Human Neuroscience, 12, 14.
  • 5. Lebedev, M. A., & Nicolelis, M. A. (2006). Brain-machine interfaces: past, present and future. Trends in Neurosciences, 29(9), 536–546. doi:10.1016/j.tins.2006.07.004

Disclosures: None


Will Cultural Beliefs and Attitudes Lead to Social Disparities in Neurotechnology Use?

Kristin Kostick (1), Lavina Kalwani (2), Peter Zuk (1), Katrina A. Muñoz (1), Laura Torgerson (1), Clarissa Sanchez (1), Rebecca Hsu (1), Demetrio Sierra-Mercado (1,3), Simon Outram (4), Barbara Koenig (4), Stacey Pereira (1), Amy McGuire (1), Gabriel Lázaro-Muñoz (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 2. Department of BioSciences, Rice University; 3. Department of Anatomy & Neurobiology, University of Puerto Rico School of Medicine; 4. Program in Bioethics, University of California, San Francisco

Adaptive deep brain stimulation (aDBS) is a powerful artificial-intelligence (AI)-driven neurotechnology with promising outcomes as a treatment for movement and psychiatric disorders and with (some believe) potential for use as a tool to enhance cognition or mood. However, the potential benefits of aDBS may not be universally received. Even in (hypothetical) scenarios of equal and unrestricted access, socially patterned beliefs and attitudes about the nature of neurotechnologies (e.g. techno-optimism vs. techno-skepticism; beliefs about bodily integrity) may motivate different patterns of use and access that can in turn exacerbate broader socioeconomic inequalities in receipt of their benefits. Little is known about the influence of beliefs and attitudes on the use of neurotechnologies like aDBS.

We asked 23 researchers currently involved in clinical trials of aDBS to share their perspectives on what they foresee public attitudes toward aDBS will be as awareness of the technology becomes more widespread. Under the direction of a medical anthropologist (KK), our research team collaboratively coded and analyzed interviews using thematic content analysis.

We found that an overwhelming majority of researchers (91%) believe the public holds ambivalent, uncertain or contingent views about the use of aDBS. Most agreed that acceptance is likely to be contingent upon whether or not the technology is safe and unlikely to malfunction (safety-focus), is private and secure and unlikely to be "hacked" or otherwise accessed by third parties (privacy-focus), and does not cause unwanted shifts in a user's autonomy or personality (autonomy-focus). Some researchers (52%) said that certain members of the public may distrust neurotechnologies like aDBS due to incorrect assumptions about them (e.g. that they can be used to read or control people's thoughts), and that distrust may be fueled by media depictions. Fewer researchers (26%) said they expect the public to be supportive of expanding aDBS use (e.g. "Everybody's going to want it."). Researchers said younger generations may be especially open to using neurotechnologies because of their broader familiarity with and optimism about technology. Researchers pointed out that members of the public who are not aware of or do not believe in the possibility of pursuing safeguards may pre-emptively reject use of neurotechnologies for treatment and/or enhancement.

Bio- and neuroethicists have long warned about the possibility for socioeconomic factors to generate disparities in access to potentially beneficial but costly neurotechnologies. While most of these arguments center around financial considerations, our findings show that access may also be influenced by sociocultural beliefs and attitudes.


  • None

Disclosures: None


Reframing Neuroethics Education for Engineers and Researchers: A Case Study at the Rehab Neural Engineering Labs

Juhi Farooqui (1,4), Sarah Dawod (3), Devapratim Sarma (2,4)
1. Program in Neural Computation, Carnegie Mellon University; 2. Department of Physical Medicine and Rehabilitation, University of Pittsburgh; 3. Center for Bioethics and Health Law, University of Pittsburgh; 4. Rehab Neural Engineering Labs, University of Pittsburgh

The rapid advancement of neurotechnology demands that research efforts meaningfully engage ethical considerations. Such engagement is often hindered by researchers' preconceptions of ethics. Researchers regularly encounter ethics as a regulatory framework, applied post-hoc to research plans (de Melo-Martín et al., 2007). We propose a framework that cultivates a neuroethical ethos, fostering consideration of ethical principles throughout the development of research projects, from the ideation phase through experimentation and beyond. To this end, we established a discussion-based workshop series to cultivate this ethos among researchers, engineers, and clinicians at all levels in the Rehab Neural Engineering Labs (RNEL) at the University of Pittsburgh.

There are numerous approaches that foster researchers' awareness of ethical principles (Cho et al., 2008; Fisher et al., 2015). RNEL is a richly interdisciplinary and collaborative hub for rehabilitation-focused neurotechnology development (in both human and animal models), consisting of 12 principal investigators and their respective research teams. However, integration of neuroethical principles into the life cycle of research has been largely offloaded to external regulatory bodies such as IRBs or the FDA. This creates a system in which researchers need not continuously engage with neuroethical principles, because those principles can be divorced from day-to-day research activities. To provide an avenue for cultivating a neuroethical ethos at RNEL, we draw on models of engagement developed at the Center for Neurotechnology at the University of Washington (Goering and Klein, 2020), specifically the SPECS Project (Tubig and McCusker, in press). We build on the SPECS discussion-rich format to foster mutual understanding. Discussion topics are selected by surveying open questions being considered by lab members and identifying issues highlighted by federal agencies and media outlets. These topics are organized into a series of complementary sessions, in which we pair dialogue with scholarly resources, including domain experts, practitioners, and academic literature, to present concepts and bolster discussion. Participants are asked to discuss how each topic can be integrated into their individual work. We also engage a leadership team that spans both neural engineering and bioethics. By allowing participants to opt in, this approach avoids being regulatory; researchers can enhance their knowledge of, and ability to reason about, ethical principles on their own terms.

Our preliminary efforts show that these discussions challenge researchers' understandings of ethical principles. In doing so, we aim to develop a lab-wide research ethos that encourages researchers to continuously engage ethical principles.


  • Cho, M. K., Tobin, S. L., Greely, H. T., McCormick, J., Boyce, A., & Magnus, D. (2008). Strangers at the benchside: research ethics consultation. The American journal of bioethics : AJOB, 8(3), 4–13.
  • de Melo-Martín, I., Palmer, L. I., & Fins, J. J. (2007). Viewpoint: developing a research ethics consultation service to foster responsive and responsible clinical research. Academic medicine : journal of the Association of American Medical Colleges, 82(9), 900–904.
  • Fisher, E., O’Rourke, M., Evans, R., Kennedy, E. B., Gorman, M. E., and Seager, T. P. (2015). Mapping the Integrative Field: Taking Stock of Socio-technical Collaborations. Journal of Responsible Innovation, 2(1): 39-61.
  • Goering, S. and Klein, K. (2020). Embedding Ethics in Neural Engineering: An Integrated Transdisciplinary Collaboration. A Guide to Field Philosophy: Case Studies and Practical Strategies (eds. Brister and Frodeman), 17-34, Routledge.
  • Tubig, P. and McCusker, D. (in press). Fostering the Trustworthiness of Researchers: SPECS and the Role of Ethical Reflexivity in Novel Neurotechnology Research. Research Ethics.

Disclosures: None


Neuroethics in the Times of Pandemic: Lessons from COVID-19

Ashutosh Kumar (1,2), Vikas Pareek (1,3), Chiman Kumari (1,4), Muneeb A. Faiq (1,5), Ravi K. Narayan (1,2)
1. Etiologically Elusive Disorders Research Network (EEDRN), New Delhi, India; 2. Department of Anatomy, All India Institute of Medical Sciences (AIIMS), Patna, India; 3. National Brain Research Center, Manesar, Haryana, India; 4. Department of Anatomy, Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, India; 5. New York University (NYU) Langone Health Center, NYU Robert I Grossman School of Medicine

The ongoing pandemic of Coronavirus Disease 2019 (COVID-19) has not only taken an enormous toll in human lives but has also affected the private lives of millions of individuals globally. Statutory breaches of individual privacy, aimed at effective containment of the pandemic, have been carried out across the globe through contact-tracing web applications, facial recognition systems, and many other means (1). Giant internet and social media enterprises like Google and Facebook have offered users' movement data to legal authorities (2). There have been incidents in which public disclosure of the identity of a suspected or confirmed COVID-19 case severely affected the individual's reputation and personal and professional life, in some cases leading to major financial loss or threats to life (3). During the pandemic, strict lockdowns have been imposed in many countries across the world. Although these lockdowns helped slow the spread of the pandemic, they have severely affected interpersonal relationships and individuals' mental health.

In COVID-19 hotspots, direct access to factual information about the status of patients was severely limited; in many cases, kin were not informed of a death, or were even deprived of seeing the face of the deceased and arranging last rites (4). These factors may have heavily affected the mental health of the persons concerned.

Although the prevailing approach to epidemic containment violates fundamental principles of human rights, some justify it by citing the noble intent behind it and the public interest it serves. Considering the urgency of responding to the imminent threat of the pandemic, and the lack of suitable alternatives, such invasive and ruthless containment measures have been allowed to proceed.

With substantial pros and cons, the issue presents an ethical dilemma: how should this unprecedented situation be evaluated in terms of personal rights, autonomy, and liberty (5), and can these fundamental humanitarian principles be set aside to facilitate stricter containment of the pandemic? The issue demands wide discussion involving the general public, law agencies, and academia. It also calls for the International Neuroethics Society and other global ethics bodies to hold formal organizational and public discussions, and to develop consensus guidelines that lawmakers can consult.


  • 1. Gasser U, Ienca M, Scheibner J, Sleigh J, Vayena E. Digital tools against COVID-19: taxonomy, ethical challenges, and navigation aid. Lancet Digital Health. 2020. doi.org/10.1016/S2589-7500
  • 2. Location data gathered by Facebook, Google, other tech companies could be used to battle coronavirus spread - The Washington Post. [cited 2020 Jul 10]. Available from: https://www.washingtonpost.com/technology/2020/03/17/white-house-location-data-coronavirus/
  • 3. Privacy of COVID-19 suspects violated; names, addresses made public - The Week. [cited 2020 Jul 10]. Available from: https://www.theweek.in/news/india/2020/03/22/privacy-of-covid-19-suspects-violated-names-addresses-made-public.html
  • 4. Managing the ethical implications of coronavirus - Nature India. Available from: https://www.natureasia.com/en/nindia/article/10.1038/nindia.2020.67
  • 5. Jeffrey DI. Relational ethical approaches to the COVID-19 pandemic. J Med Ethics. 2020;0:medethics-2020-106264. doi.org/10.1136/medethics-2020-106264

Disclosures: None


Mapping the Dimensions of Agency

Andreas Schönau (1,2), Ishan Dasgupta (1,2), Timothy Brown (1,2), Erika Versalovic (1,2), Eran Klein (1,2,3), Sara Goering (1,2)
1. Department of Philosophy, University of Washington; 2. Center for Neurotechnology, University of Washington; 3. Oregon Health and Science University, Portland

Neural devices have the capacity to enable end users to regain abilities lost due to disease or injury – for instance, a deep brain stimulator (DBS) that allows a person with Parkinson's disease to regain the ability to fluently perform movements or a Brain Computer Interface (BCI) that enables a person with spinal cord injury to control a robotic arm. While end users recognize and appreciate the technologies' capacity to maintain or restore their capabilities (Kögel et al. 2020), the neuroethics literature is replete with examples of concerns expressed about agentive capacities: A perceived lack of control over the movement of a robotic arm might result in an altered sense of feeling responsible for that movement (Kellmeyer et al. 2016). Clinicians or researchers being able to record and access detailed information of a person's brain might raise privacy concerns (Klein/Rubel 2018). A disconnect between previous, current, and future understandings of the self might result in a sense of alienation (Schüpbach et al. 2006). The ability to receive and interpret sensory feedback might change whether someone trusts the implanted device or themselves (Collins et al. 2017).

Inquiries into the nature of these concerns and how to mitigate them have produced scholarship that often emphasizes one of these issues – responsibility, privacy, authenticity, or trust – selectively. However, we believe that examining these ethical dimensions separately fails to capture a key aspect of the experience of living with a neural device. In exploring their interrelations, we argue that their mutual significance for neuroethical research can be adequately captured if they are described under a unified heading of agency. On these grounds, we propose an "Agency Map" which brings together the diverse neuroethical dimensions and their interrelations into a comprehensive framework. With this, we offer a theoretically grounded approach to understanding how these various dimensions are interwoven in an individual's experience of agency.


  • Collins, Kelly L./Guterstam, Arvid/Cronin, Jeneva/Olson, Jared D./Ehrsson, H. Henrik/Ojemann, Jeffrey G. (2017): Ownership of an artificial limb induced by electrical brain stimulation. In: Proceedings of the National Academy of Sciences 114, 166–171.
  • Kellmeyer, Philipp/Cochrane, Thomas/Müller, Oliver/Mitchell, Christine/Ball, Tonio/Fins, Joseph J./Biller-Andorno, Nikola (2016): The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems. In: Cambridge Quarterly of Healthcare Ethics 25, 623–633.
  • Klein, Eran/Rubel, Alan (2018): Privacy and Ethics in Brain–Computer Interface Research. In: The Routledge Handbook of Neuroethics. Routledge. 653–668.
  • Kögel, Johannes/Jox, Ralf J./Friedrich, Orsolya (2020): What is it like to use a BCI? – insights from an interview study with brain-computer interface users. In: BMC Medical Ethics 21, 2.
  • Schüpbach, M./Gargiulo, M./Welter, M. L./Mallet, L./Behar, C./Houeto, J. L./Maltete, D./Mesnage, V./Agid, Y. (2006): Neurosurgery in Parkinson disease: A distressed mind in a repaired body? In: Neurology 66, 1811–1816.

Disclosures: None


An Isolated Experiment: What the COVID-19 Pandemic Can Tell Us About Addiction

Nicholas Sinclair-House (1,2,3), Sarah Osborn (1)
1. University of Sussex; 2. Sussex Addiction Research and Intervention Centre; 3. Sussex Crime Research Centre

Current models of addiction are predicated on the understanding that dopaminergic transmission is the shared substrate of action for addictive drugs. Yet, in recent years, a measure of doubt has been cast over the unified conception of addiction this has fostered, perhaps most directly by a series of studies demonstrating an environmentally-cued double dissociation of stimulant and sedative drugs. Preference for one drug over the other is influenced in opposite directions by the context of use: sedatives are preferred in a home environment, whilst stimulants are preferred outside the home. These results in animal studies have led some to question models of addiction built on unitary notions of drug reward, and the suggestion that they might be reflected in human drug users is a compelling one. Such a finding would challenge our integrated concept of addiction and lend credence to the notion that addictions to drugs of different classes are separate and distinct from one another. The primary obstacle to approaching this question experimentally in humans would be the many ethical implications of placing large numbers of drug-using participants into prolonged, forced isolation. Yet, as a by-product of the international response to the COVID-19 pandemic, this experiment has now been conducted, with millions of people across the world socially isolating in their homes. Reasoning from laboratory findings, we can predict that this widespread change will induce a shift in patterns of drug use, away from stimulant drugs and towards sedative drugs. Early results are indicative of just such a shift. International surveys have shown increases in alcohol and opiate use. Sewage monitoring across Europe has shown reduced cocaine and MDMA use, with higher consumption of alcohol and benzodiazepines. The US, already in the midst of an opioid crisis, has seen a dramatic rise in suspected overdoses.
The picture is far from clear as yet, but the limited data emerging to date align with expectations developed from animal studies, and are initially suggestive of an analogous effect in humans. Should these early signs prove accurate, it would have significant practical implications in terms of policy and treatment approaches, but could also give us reason to question the necessary and sufficient components of our current addiction models. One side-effect of the COVID-19 virus may yet be the necessity of reappraising ‘addiction' as an umbrella term under which can be grouped an unknown number of neurobiologically distinct conditions.


  • Badiani, A. (2013). Substance-specific environmental influence on drug use and drug preference in animals and humans. Current Opinion in Neurobiology, 23(4), 588-596.
  • Badiani, A., Caprioli, D. & De Pirro, S. (2019). Opposite environmental gating of the experienced utility (‘liking’) and decision utility (‘wanting’) of heroin versus cocaine in animals and humans: Implications for computational neuroscience. Psychopharmacology, 236, 2451–2471.
  • Crombag, H. S., Bossert, J. M., Koya, E. & Shaham, Y. (2008). Context-induced relapse to drug seeking: A review. Philosophical Transactions B: Biological Sciences, 363(1507), 3233-3243.
  • De Pirro, S., Galati, G., Pizzamiglio, L. & Badiani, A. (2018). The affective and neural correlates of heroin versus cocaine use in addiction are influenced by environmental setting but in opposite directions. Journal of Neuroscience, 38(22), 5182-5195.
  • European Monitoring Centre for Drugs and Drug Addiction (2020). Impact of COVID-19 on patterns of drug use and drug-related harms in Europe. EMCDDA Trendspotter briefing, Lisbon.

Disclosures: None


Harvesting Madness: The Unintended Consequences of Cannabis Prohibition

Nicholas Sinclair-House (1,2,3)
1. University of Sussex; 2. Sussex Addiction Research and Intervention Centre; 3. Sussex Crime Research Centre

The global prohibition of Cannabis was comprehensively enshrined in the 1961 UN Single Convention on Narcotic Drugs, a measure rooted in the unfounded belief that it was a drug with no medical benefit which was known to cause mental illness. This association with insanity was promulgated in significant measure by the US administration during the early 20th century as part of its drive to form international consensus in favour of prohibition. Criminalizing the drug had the inadvertent effect of promoting domestic production over import in Western countries, as clandestine cultivators in these traditionally non-producer nations sought to breed varieties which would flourish in cooler northern climes. Though representing very different interpretations of the 1961 Single Convention, the US and Dutch prohibition regimes acted in concert to accelerate this process, resulting in domestically-produced, hybridized ‘Skunk' Cannabis variants which quickly found favour throughout the Western world due to the facility of their clandestine domestic production and their markedly increased potency. Cannabis producers in the West have seen the advantage in selectively breeding in favour of significantly increasing the amount of THC present in the plant. However, increasing THC comes at the cost of reducing the amount of Cannabidiol, a potent anti-psychotic, leading to speculation that this changing ratio is impacting the mental health of users. It is this new form of Cannabis which almost exclusively dominates the medical and latterly recreational marijuana market in the US today. It is also this new form of Cannabis which has been the subject of scientific scrutiny for its genuine, demonstrable link to mental illness. Prohibition, founded in large part on the false association of Cannabis with mental illness, is directly responsible for its metamorphosis into a drug which genuinely merits this association. 
Any move to revisit Cannabis regulation must take account not only of the increased potential for harm emerging from its modern guise, but, crucially, also consider that the one thing guaranteed to perpetuate the trend is continued criminalization.


  • Di Forti, M., Morgan, C., Dazzan, P., Pariante, C., Mondelli, V., Marques, T.R., Handley, R., Luzi, S., Russo, M., Paparelli, A., Butt, A., Stilo, S.A., Wiffen, B., Powell, J. & Murray, R.M. (2009). High-potency Cannabis and the risk of psychosis, The British Journal of Psychiatry, 195(6), 488-491.
  • Lande, A. (1962). The Single Convention on Narcotic Drugs, 1961, International Organization, 16(4), 776-797.
  • McLaren, J., Swift, W., Dillon, P. & Allsop, S. (2008). Cannabis potency and contamination: A review of the literature, Addiction, 103(7), 1100-1109.
  • Mehmedic, Z., Chandra, S., Slade, D., Denham, H., Foster, S., Patel, A.S., Ross, S.A., Khan, I.A. & ElSohly, M.A. (2010). Potency trends of Δ9-THC and other cannabinoids in confiscated Cannabis preparations from 1993 to 2008, Journal of Forensic Sciences, 55(5), 1209-1217.
  • Uitermark, J. (2004). The origins and future of the Dutch approach towards drugs, Journal of Drug Issues, 34(3), 511-532.

Disclosures: None


Towards Exploring Adolescent Moral Intuition with the ADC Model of Moral Judgment

Elizabeth Eskander (1), Veljko Dubljevic (1)
1. North Carolina State University

While research into the normative moral judgments of general populations has become widespread across multiple fields of cognitive science, research into the developmental aspects of these judgments during adolescence has remained relatively stagnant since the earliest attempts by Piaget and Kohlberg. Theories and studies published since then have shown, and are built on the premise, that morality evolves from birth to adulthood. For any normative theory of moral judgment to be considered viable, it must be able to explain and integrate how this morality develops. In this study, we use a novel, intuition-based model which has already acquired significant empirical backing and can predict key component parts of moral judgment (Bialek, Terbek, and Handley 2014; Dubljević, Sattler and Racine 2018). The central idea of the Agent-Deed-Consequence (ADC) model is that intuitive evaluations of each of these individual components are subconscious heuristics that are integrated to create a judgment of moral acceptability or unacceptability (Dubljević and Racine 2014; Dubljević, Sattler and Racine 2018). Institutional Review Board approval (no. 17933) was obtained, allowing us to survey students in grades 6-12 on their judgments of different moral scenarios. After creating age-appropriate vignettes designed to reflect the ADC model, we conducted a preliminary survey of 172 middle school students (grades 6-8) and found that students struggled to distinguish and comprehend the moral concepts and vocabulary presented to them in the survey. Common difficulties included differentiating between morals and ethics, knowing the meaning of the word morality, and, most surprisingly, comprehending that "immoral" is the antithesis of "moral", even after it was explained to them.
While adjustments to the survey and vignettes will need to be made to increase the validity of our future results, the feedback we have already obtained shows that adolescents produce moral judgments without any strong insight into, or knowledge of, what morality actually is. This indicates that, in accordance with the premise of our chosen model, there is a deeply intuitive aspect to moral judgment that operates subconsciously. In future studies we expect the youngest group of students to make their judgments primarily based on consequences, with the agent having the least impact, based on prior research on the values children place on these aspects during development (Schallenberger 1894; Turiel 1983).


  • Dubljević, V., and E. Racine. 2014. The ADC of Moral Judgment: Opening the Black Box of Moral Intuitions with Heuristics about Agents, Deeds and Consequences. AJOB – Neuroscience, 5(4): 3-20.
  • Dubljević, V., Sattler, S., & Racine, E. (2018). Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment. PLOS ONE, 13(10), e0204631. doi: 10.1371/journal.pone.0204631
  • Schallenberger, M. 1894. Children's Rights. Pedagogical Seminary, 3: 87-96.
  • Turiel, E. 1983. The development of social knowledge. Cambridge: Cambridge University Press.

Disclosures: None


The Moral Swarm: A Preliminary Analysis of Human Hive Minds for Enhancing Moral Deliberation

David M. Lyreskog (1,2), Gabriela Pavarini (1,2), Edward Jacobs (1,2), Ilina Singh (1,2)
1. Neuroscience, Ethics and Society (NEUROSEC), Department of Psychiatry, University of Oxford; 2. Wellcome Centre for Ethics and Humanities

Swarm ‘intelligence' occurs naturally in many species: starlings move in unison; termite communities build vast and intricate networks; honey bees swarm to find a new home.

Humans arguably display collective intelligence as well, in the form of cooperative actions, polling, voting, and consensus activities. Such activities are supported by a complex cognitive architecture that allows humans to share mental states, goals, and action plans with one another (1). Until recently it has been impossible to capture emergent human intelligence as it unfolds, yet this capture is key to a bioethical understanding of real-time collective moral decision-making. Emerging ‘swarming technologies' powered by AI have been shown to significantly improve accuracy in medical diagnostics, social perceptiveness, and predictions in domains ranging from market developments to sporting event outcomes (2-5). These feats motivate the question of whether decision-making processes of interest to bioethics could be improved with this technology: could swarm intelligence enhance human normative deliberation, or inform health policy reform by more accurately representing consensus and social acceptability?

The first part of this talk presents results of analytical work to investigate the ontological status of human swarming, focusing on the concepts of "swarming" and "hive mind". Next we examine the normative force of human swarming, at the levels of (a) tool development, (b) application design, (c) result interpretation, and (d) policy implementation. Finally, we posit the potential value of applied human swarming technology as a method in bioethics, using mental health ethics research with young people as a theoretical case study. We emphasize the potential gains in stakeholder involvement, citizenship promotion, and normative force of findings, but also highlight the potential risks in such a methodological approach.

If technology for human swarming is as effective in moral deliberation as it has been shown to be in other domains of decision-making, we may see a paradigm shift in research that aims for policy impacts, where the values of stakeholders are no longer considered as the average preferences of singular citizens, but rather as a collective emergent intelligence.


  • 1. Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and brain sciences, 28(5), 675-691.
  • 2. Patel, B. N., Rosenberg, L., Willcox, G., Baltaxe, D., Lyons, M., Irvin, J., ... & Langlotz, C. (2019). Human–machine partnership with artificial intelligence for chest radiograph diagnosis. NPJ digital medicine, 2(1), 1-10.
  • 3. Askay, D., Metcalf, L., Rosenberg, L., & Willcox, G. (2019, January). Enhancing group social perceptiveness through a swarm-based decision-making platform. In Proceedings of the 52nd Hawaii International Conference on System Sciences.
  • 4. Rosenberg, L., Pescetelli, N., & Willcox, G. (2017, October). Artificial Swarm Intelligence amplifies accuracy when predicting financial markets. In 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON) (pp. 58-62). IEEE.
  • 5. Rosenberg, L., Baltaxe, D., & Pescetelli, N. (2016, October). Crowds vs swarms, a comparison of intelligence. In 2016 swarm/human blended intelligence workshop (SHBI) (pp. 1-4). IEEE.

Disclosures: None


Is Neuroimaging-Based Psychosis Risk Prediction Ethically Justified in Clinical Practice?

Natalie M. Lane (1), Stuart A. Hunter (2), Stephen M. Lawrie (3)
1. Department of Psychiatry, NHS Lanarkshire, Scotland, UK; 2. Department of Psychiatry, NHS Lothian, Scotland, UK; 3. Division of Psychiatry, University of Edinburgh, Scotland, UK

The rapidly developing field of neuroimaging-based psychosis risk prediction brings promise of meaningfully improving clinical outcomes for those at high risk of psychosis [1]. Structural and functional neuroimaging has a central role in optimizing the predictive power of current risk models [2]. However, it is imperative that scientific innovation proceeds in tandem with ethical consideration. This work furthers existing ethical discussion by considering the ramifications of the clinical application of predictive tools, and evaluating the ethical implications of new risk models incorporating neuroimaging. The current state of the science of predictive testing for psychosis, with a particular focus on neuroimaging biomarkers, is outlined. We then proceed to ethical analysis employing the four principles of biomedical ethics as a conceptual framework [3]. Ethical priorities include, firstly, ensuring that predictive testing is carried out in a manner which promotes autonomy through valid informed consent, and disclosure of results that are meaningful to the individual undergoing testing. Secondly, minimizing the risk of harm associated with the predictive testing process, such as adverse effects from neuroimaging procedures, experiencing stigma at being labelled ‘high-risk', receiving misleading results, and having data privacy compromised. Thirdly, promoting the many prospective benefits of risk prediction, including early intervention; avoidance of modifiable risk factors; heightened understanding of subthreshold psychotic experiences; access to education and peer support; opportunity for advanced planning; and enhanced prognostication [4]. Finally, upholding distributive justice by ensuring cost-effectiveness; facilitating equal access to testing; and avoiding reinforcement of structural biases [5]. 
We conclude with a call for scientific advancement to proceed in unison with ethical evaluation informed by the empirical study of the views of high risk individuals and their families. This collaborative approach will help ensure that predictive testing progresses in an ethically acceptable manner that minimizes potential adverse effects, and maximizes meaningful benefits for those at high risk of psychosis.


  • 1 Lane NM, Hunter SA, Lawrie SM. The benefit of foresight? An ethical evaluation of predictive testing for psychosis in clinical practice. Neuroimage Clin 2020;:102228.
  • 2 Hunter SA, Lawrie SM. Imaging and Genetic Biomarkers Predicting Transition to Psychosis. Curr Top Behav Neurosci 2018;40:353–88.
  • 3 Beauchamp TL, Childress JF. Principles of Biomedical Ethics. Oxford University Press 2001.
  • 4 Correll CU, Galling B, Pawar A, et al. Comparison of Early Intervention Services vs Treatment as Usual for Early-Phase Psychosis: A Systematic Review, Meta-analysis, and Meta-regression. JAMA Psychiatry 2018;75:555–65.
  • 5 Corsico P. The risks of risk. Regulating the use of machine learning for psychosis prediction. Int J Law Psychiatry 2019;66:101479.

Disclosures: None


Researchers' Perspectives on Changes in Personality, Mood, and Behavior in Adaptive Deep Brain Stimulation Trials

Peter Zuk (1), Clarissa Sanchez (1), Kristin Kostick (1), Laura Torgerson (1), Katrina A. Muñoz (1), Rebecca Hsu (1), Jill Oliver Robinson (1), Lavina Kalwani (2), Demetrio Sierra-Mercado (3,1), Simon Outram (4), Barbara A. Koenig (4), Stacey Pereira (1), Amy L. McGuire (1), Gabriel Lázaro-Muñoz (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 2. Department of BioSciences, Rice University; 3. Department of Anatomy & Neurobiology, University of Puerto Rico School of Medicine; 4. Program in Bioethics, University of California, San Francisco

Attached separately (special formatting)


  • None

Disclosures: None


Should We Use Brain Data to Predict Autism Prior to the Onset of Symptoms? Lessons from Genomics for Predictive MRI

Kate MacDuffie (1,2), Annette Estes (1), Holly Peay (3), John R. Pruett Jr. (4), Benjamin Wilfond (2)
1. University of Washington; 2. Seattle Children's Research Institute; 3. RTI International; 4. Washington University School of Medicine in St. Louis

Symptoms of autism spectrum disorder (ASD) are absent in the first year of life but tend to emerge by age 2. Recently, brain-based techniques (MRI, EEG) applied to infants at high familial risk for ASD have detected brain changes at 6-12 months which accurately predict later ASD diagnosis.1-3 These techniques have remarkably high positive predictive values (PPV; 72-100%), and could allow earlier and superior prediction compared to existing symptom checklist-based methods (e.g., M-CHAT; PPV= 40-60%)4.

These predictive MRI approaches could potentially shift the timing of ASD detection and intervention into the first year of life, accelerating neuroscientific investigations into the early pathogenesis of ASD and enabling targeted enrollment into research trials of presymptomatic interventions. The evidence base for presymptomatic interventions is still being built, however, with consensus on "best practice" for interventions delivered in infancy still likely years away. These circumstances raise an important ethical question for presymptomatic ASD research that has been widely debated in the genomics literature: is disclosure of a predictive diagnosis to parents justified when there is no immediate treatment available?

Prior to 2005, international guidelines and position papers on genetic testing offered ambiguous or inconclusive recommendations about whether testing should be offered for childhood-onset disorders with no proven intervention.5 We evaluated current guidelines and consensus statements (2006-2019) and found evidence that guidance is starting to coalesce around the idea that parents should have discretion in decisions about whether to pursue predictive testing for their children. Allowing parental discretion to pursue testing is advised most strongly in the context of familial disorders, where the uncertainty of whether a child will develop symptoms can be a source of worry and anticipatory distress. The apparent trend towards allowing parents more discretion in genetic testing decisions provides useful guidance for the emerging context of predictive MRI for ASD—however, additional research is required to determine whether the potential benefits of ASD prediction (i.e., to allow lifestyle and family planning, enable targeted monitoring and/or trials of experimental presymptomatic interventions) outweigh the risks (i.e., potential for false positive/negative results, inequitable access to follow-up testing and support resources, adverse psychosocial impacts). This presentation will lay out next steps for empirically evaluating the benefits and risks of MRI-based presymptomatic prediction of ASD, drawing to a large extent upon prior work on the ethics of genetic/genomic testing in childhood.


  • 1. Emerson RW, Adams C, Nishino T, et al. Functional neuroimaging of high-risk 6-month-old infants predicts a diagnosis of autism at 24 months of age. Sci Transl Med. 2017;9(393):eaag2882. doi:10.1126/scitranslmed.aag2882.
  • 2. Hazlett HC, Gu H, Munsell BC, et al. Early brain development in infants at high risk for autism spectrum disorder. Nature. 2017;542(7641):348-351. doi:10.1038/nature21369.
  • 3. Gabard-Durnam LJ, Wilkinson C, Kapur K, Tager-Flusberg H, Levin AR, Nelson CA. Longitudinal EEG power in the first postnatal year differentiates autism outcomes. Nat Commun. 2019;10(1):4188–12. doi:10.1038/s41467-019-12202-9.
  • 4. Sturner R, Howard B, Bergmann P, et al. Accurate Autism Screening at the 18-Month Well-Child Visit Requires Different Strategies than at 24 Months. J Autism Dev Disord. 2017;47(10):3296-3310. doi:10.1007/s10803-017-3231-0.
  • 5. Borry P, Stultiens L, Nys H, Cassiman J-J, Dierickx K. Presymptomatic and predictive genetic testing in minors: a systematic review of guidelines and position papers. Clin Genet. 2006;70(5):374-381. doi:10.1111/j.1399-0004.2006.00692.x.

Disclosures: None


Neuroethics Methodology and the Is-Ought Gap

Peter Zuk (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine

Attached separately (special formatting)


  • Brody, B. (2003). Assessing Empirical Research in Bioethics. In Taking Issue: Pluralism and Casuistry in Bioethics (pp. 57-65). Washington, D.C.: Georgetown University Press.
  • Davies, R., Ives, J., Dunn, M. (2015). A systematic review of empirical bioethics methodologies. BMC Med Ethics 16: article 15. https://doi.org/10.1186/s12910-015-0010-3.
  • Davidson, D. (1973). On the Very Idea of a Conceptual Scheme. Proceedings and Addresses of the American Philosophical Association 47: 5-20. https://doi.org/10.2307/3129898.
  • Pavarini, G., Singh, I. (2018). Pragmatic Neuroethics: Lived Experiences as a Source of Moral Knowledge. Cambridge Quarterly of Healthcare Ethics 27(4): 578-589. https://doi.org/10.1017/S0963180118000105.

Disclosures: None


Can DBS Cause an Identity Crisis?

Nada Gligorov (1)
1. Icahn School of Medicine at Mount Sinai

A growing segment of the neuroethics literature is focused on the effects of brain technology, including deep brain stimulation (DBS), brain-computer interfaces (BCI), and direct-to-consumer neurotechnologies, on personal identity and authenticity (Gilbert et al., 2018; Maslen et al., 2015). This literature provides varied assessments of whether the changes induced by brain technology will have a positive or a negative effect on personal identity. Some commentators argue that the potential to directly control brain function will allow individuals, particularly those who have neurological or psychiatric conditions, to become more authentic (Levy, 2011). Others highlight the perils of direct brain intervention and cite a number of potentially harmful alterations to personal identity and authenticity (Kreitmair, 2019).

My primary aim is to present a new way of characterizing personal continuity that can be used to assess whether changes to the self caused by brain interventions can result in a discontinuity of self over time. I formulate what I call the self-concept view, which identifies personal identity over time with the maintenance of a concept of self over time. I argue that a concept of self is parasitic on the ability to self-attribute physical and psychological features, and that the temporal element of the self emerges as individuals become able to self-attribute features that require endurance over time. I challenge dominant views of narrative identity (DeGrazia, 2005) to show that forming and maintaining a self-concept that establishes personal continuity does not depend on the maintenance of a narrative. Instead, I argue that individuals can establish personal continuity before they are able to form a narrative, and can retain it even after they are no longer able to maintain a narrative. Moreover, I argue that individuals can maintain a self-concept despite even significant shifts in self-attributions that come with physical and psychological changes over time. Using the self-concept view, I identify the types of changes that might cause an identity crisis, understood as a first-person feeling of discontinuity. I argue that, given the resilience of self-concepts, identity crises are rare and unlikely to occur even with the use of invasive brain technology. I conclude, however, that individuals ought to be able to choose even radical changes to their self-concept, including those that may cause personal discontinuity.


  • DeGrazia, D. (2005). Human Identity and Bioethics. Cambridge University Press.
  • Gilbert, F., Viaña, J. N. M., & Ineichen, C. (2018). Deflating the “DBS causes personality changes” bubble. Neuroethics. https://doi.org/10.1007/s12152-018-9373-8
  • Kreitmair, K. V. (2019). Dimensions of Ethical Direct-to-Consumer Neurotechnologies. AJOB Neuroscience, 10(4), 152–166. https://doi.org/10.1080/21507740.2019.1665120
  • Levy, N. (2011). Enhancing Authenticity. Journal of Applied Philosophy, 28(3), 308–318. https://doi.org/10.1111/j.1468-5930.2011.00532.x
  • Maslen, H., Pugh, J., & Savulescu, J. (2015). The Ethics of Deep Brain Stimulation for the Treatment of Anorexia Nervosa. Neuroethics, 8(3), 215–230. https://doi.org/10.1007/s12152-015-9240-9

Disclosures: None


Neuronavigation in Non-Invasive Neuromodulation: An Ethical Imperative to Improve Clinical Practice

Tabitha Moses (1)
1. Wayne State University School of Medicine

Non-invasive neuromodulation is gaining popularity in psychiatric treatment. Repetitive transcranial magnetic stimulation (rTMS) is FDA-approved to treat depression and obsessive-compulsive disorder. These approvals are associated with specific treatment protocols targeting precise neural locations; however, the generally accepted clinical techniques for neural targeting have a wide margin for error (1). rTMS alters neural activity within the stimulated regions of the brain. Depending on the location targeted, rTMS can adjust facets of our personalities, including our views on morality and empathy (2,3). Despite these significant effects of neuromodulation, current techniques for neural targeting in clinical practice are crude and rely on rough estimates that we know to be inaccurate.

Given the ability of repeated rTMS sessions to generate a variety of long-term cognitive and behavioral changes depending on the location affected (4), it is imperative that rTMS treatment focus on the correct area of the brain, not only to ensure effective treatment but also to ensure that treatment produces no unexpected behavioral changes. In research settings, the targeting methods used in clinical practice are no longer considered acceptable. More advanced neuronavigation tools allow researchers to adjust for individual variability in brain and skull shape and size. Unfortunately, this requires neuroimaging, and the methods and required equipment are expensive. The billing and reimbursement system for rTMS treatment does not allow for this, and the physicians who deliver the treatment are not trained to use neuronavigation equipment. Nonetheless, cost and the need for training should not be barriers to developing best practices in medicine.

Non-invasive neuromodulation provides valuable opportunities to alter brain function and treat a range of disorders. Unfortunately, the current techniques for delivering this treatment do not allow for accurate targeting. This is concerning from the standpoint of tracking treatment success, but more alarming is the fact that stimulating the wrong neural target could alter fundamental facets of the patient's personality. That risk is difficult to convey to a patient, and although this may once have been a risk worth taking for these patients, we now have the equipment to ensure that targeting is accurate. It is unethical to continue to use these crude methods given our knowledge of the risks and our technological ability to do better. Since the technology to ensure accurate targeting now exists, clinical practice must embrace these techniques.


  • Darby RR, Pascual-Leone A. Moral Enhancement Using Non-invasive Brain Stimulation. Front Hum Neurosci. 2017;11:77. doi:10.3389/fnhum.2017.00077
  • Klomjai W, Katz R, Lackmy-Vallée A. Basic principles of transcranial magnetic stimulation (TMS) and repetitive TMS (rTMS). Ann Phys Rehabil Med. 2015;58(4):208-213. doi:10.1016/j.rehab.2015.05.005
  • Ruohonen J, Karhu J. Navigated transcranial magnetic stimulation. Neurophysiol Clin. 2010;40(1):7-17. doi:10.1016/j.neucli.2010.01.006
  • Yang CC, Khalifa N, Völlm B. The effects of repetitive transcranial magnetic stimulation on empathy: A systematic review and meta-analysis. Psychol Med. 2018;48(5):737-750. doi:10.1017/S003329171700232X

Disclosures: None


Moralizing Mental Illness: Psychiatry's Pervasive Problem

Tabitha Moses (1)
1. Wayne State University School of Medicine

The history of psychiatry is fraught with maltreatment and mistrust. For centuries, morality guided treatment principles: a person had the choice to get better; if they did not, it was because they were weak. Medicine has progressed considerably. We recognize that psychiatric conditions are no different from other medical conditions insofar as they are not a symptom of moral failing; however, although our intellectual comprehension of these disorders has expanded, it appears our behavior and emotional responses towards these patients have not.

The most widely accepted example of this problem can be seen in the treatment of substance use disorders (SUDs), wherein many current treatment approaches are still grounded in antiquated punitive approaches to psychiatry (1). Unlike with traditional physical ailments (e.g., hypertension), patients with SUDs are typically expected to be free from their symptoms (drug use) prior to receiving treatment and to remain so during treatment; otherwise, they may be penalized (2). Although SUDs may be considered a special case, these issues are found across multiple psychiatric disorders. Patients are deemed inherently untrustworthy and subjected to demeaning and humiliating experiences intended to expose the truth (3). In SUD treatment, patients are expected to provide observed urine samples; patients with eating disorders or who engage in non-suicidal self-injury may lose all rights to physical privacy. These patients can be asked to sign treatment contracts promising not to engage in the "bad" behavior at the risk of being removed from treatment. Unlike in other fields of medicine, coercion and physical force are accepted despite ethical concerns and no consistent association with positive outcomes (4). These experiences can be dehumanizing and counterproductive to treatment efforts (5).

Patients with psychiatric disorders seek help for behaviors beyond their control; however, outdated treatment modalities send the message that the behaviors are fully controllable. Logically, it is clear that a patient seeking treatment does not have complete control; however, the provider may experience cognitive dissonance between their own intellectual and emotional responses. As scientific understanding improved, it was assumed that attitudes would improve alongside it; yet throughout psychiatric treatment we see the remnants of a fundamentally flawed and damaging approach that suggests patients have the ability to simply choose not to engage in disordered behavior. There is a need to recognize these problems throughout psychiatry to allow for changes that ensure no treatments remain grounded in this antiquated approach.


  • Hem MH, Gjerberg E, Husum TL, Pedersen R. Ethical challenges when using coercion in mental healthcare: A systematic literature review. Nurs Ethics. 2018;25(1):92-110. doi:10.1177/0969733016629770
  • Newton-Howes G, Mullen R. Coercion in psychiatric care: Systematic review of correlates and themes. Psychiatr Serv. 2011;62(5):465-470. doi:10.1176/ps.62.5.pss6205_0465
  • Padwa H, Kaplan CD. Translating Science to Practice: Lessons Learned Implementing Evidence-Based Treatments in US Substance Use Disorder Treatment Programs. Eur J Crim Policy Res. 2018;24(2):171-182. doi:10.1007/s10610-017-9360-3
  • Tingleff EB, Bradley SK, Gildberg FA, Munksgaard G, Hounsgaard L. “Treat me with respect”. A systematic review and thematic analysis of psychiatric patients’ reported perceptions of the situations associated with the process of coercion. J Psychiatr Ment Health Nurs. 2017;24(9-10):681-698. doi:10.1111/jpm.12410
  • Winograd RP, Presnall N, Stringfellow E, et al. The case for a medication first approach to the treatment of opioid use disorder. Am J Drug Alcohol Abuse. 2019;45(4):333-340. doi:10.1080/00952990.2019.1605372

Disclosures: None


The Extended Global Workspace: A Proposal for Grounding Neural Data Privacy on Mental Integrity

Abel Wajnerman Paz (1)
1. Universidad Alberto Hurtado

Goering and Yuste claim that legally treating personal neural data (ND) as a body organ could help avoid the main risks of ND sharing. This ‘organic’ approach entails that ND would be protected by our right to bodily integrity and, therefore, could be donated exclusively by individual patients and could not be commercialized (Yuste, Goering et al. 2017). Chile recently took this approach as a starting point for a bill that will be considered by its Senate during 2020 and which will determine how ND are managed in the country.

However, it is not clear what the legal basis of the proposal is. Legally treating something as something else requires at least some kind of analogical reasoning (Lamond 2006), which is not provided by the authors. I will try to fill this gap by suggesting a possible analogy. This will be accomplished through a naturalized conceptual analysis of ND (i.e., elucidating the notion of ND by using concepts from neuroscience).

A crucial weakness of the organic analogy is that ND are often not constituted by any organic material. Thus, I suggest that the analogy should instead be based on neurocognitive (e.g., computational or information-theoretic) properties of the brain, which are independent of their physical medium (Piccinini & Bahar 2013) and can therefore be implemented by a non-biological structure. That is, ND protection should be grounded in psychological (instead of merely physical) integrity.

ND are defined by the fact that they carry semantic information (Piccinini & Scarantino 2011) about (and therefore provide informational access to) our neural and neurocognitive processes. Based on this feature, I argue that ND can be considered analogous to our ‘global neuronal workspace’, a system in charge of broadcasting information about these processes, which is often considered the neural basis of consciousness (e.g. Dehaene & Changeux 2004). Unlike other kinds of personal information, the capacity to extract and disseminate information about our brains is not merely something we have the right to control but, fundamentally, a part of what we are, of our neurocognitive identity. The link between ND and our psychological self can constitute a legal basis for making ND protection more stringent than ordinary personal data protection, as Goering and Yuste recommend. Moreover, the present proposal achieves this by conceptually underpinning the domain of a ‘neurocognitive self’, thus motivating (and contributing to) the characterization of a distinctive set of ‘neurorights’ aimed at protecting its integrity.


  • Dehaene, S. & Changeux, J. P. (2004). Neural Mechanisms for Access to Consciousness. In M. S. Gazzaniga (Ed.), The cognitive neurosciences. Cambridge, MA, US: MIT Press. pp. 1145-1157.
  • Lamond, G. (2006). Precedent and Analogy in Legal Reasoning, in Edward N. Zalta (ed.) The Stanford Encyclopedia of Philosophy, URL = <https://plato.stanford.edu/archives/spr2016/entries/legal-reas-prec/>.
  • Piccinini, G., Bahar, S. (2013). Neural Computation and the Computational Theory of Cognition, Cognitive Science 34, 453–488.
  • Piccinini, G., Scarantino, A. (2011). Computation vs. Information Processing: why their Difference Matters to Cognitive Science, Studies in History and Philosophy of Science 41, 237-246.
  • Yuste, R., Goering, S., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., ... & Kellmeyer, P. (2017). Four ethical priorities for neurotechnologies and AI. Nature News, 551(7679), 159.

Disclosures: None


Time to Stop Celebratory Explosions: The Eclipsing Neuroethical Harms of Fireworks

May Chammaa (1)
1. Wayne State University School of Medicine

In the month around July 4th in the USA, approximately 180 people seek emergency treatment daily for firework-related injuries (1). Safety advocates warn of the harms of fireworks, especially as victims of injuries are frequently bystanders and children. In addition to these significant physical harms, what has not been recorded is the number of individuals who are psychologically harmed by fireworks. Fireworks have a notable adverse effect on mental function, directly through their powerful munition-like nature and indirectly through their impact on the environment, warranting examination of the neuroethical implications of their continued use (2–4). This is a pertinent topic because although there has been some increased awareness of the harm of fireworks in parts of the USA and globally, there has also been an increased rate of fireworks use, partly due to states loosening restrictions, leading to a shift from professional to amateur use. With this shift comes an associated increase in injury rate: a 2020 study demonstrated a 39% higher rate of injuries after West Virginia loosened restrictions on fireworks (5). Thus, as we continue to allow the use of fireworks, we must pause and contemplate the risks of their continued use, the societal obligations we have to each other, and what it says about our priorities when we condone their use despite the damage they cause to our communities. The cumulative negative impact of fireworks, their minimal benefit, and the plethora of celebratory alternatives serving the same purpose point clearly to what our actions ought to be to protect the mental health and function of our communities. It is critical that we advocate for policy implementations that reflect our awareness of the ramifications of fireworks.
We need to establish a solution that encompasses the different aspects of the problem: regulation of fireworks' contents to decrease environmental harm, control over who can use them to decrease injuries, and restriction of the times of their use so that the community knows of their use in advance, decreasing the damage of surprise explosions. Two approaches to a solution stand out: advocating for a ban versus advocating for stricter regulation. This presentation will argue against advocating for a ban by comparing this issue to the consequences of historically choosing the ban route, such as the prohibition of alcohol and other substances.


  • 1. Fireworks | CPSC.gov. Accessed July 10, 2020. https://www.cpsc.gov/Safety-Education/Safety-Education-Centers/Fireworks
  • 2. López C. How fireworks affect you, from increased anxiety and memory problems - Insider. Insider. Published 2020. Accessed July 10, 2020. https://www.insider.com/how-fireworks-affect-you-from-increased-anxiety-and-memory-problems-2020-7?amp&__twitter_impression=true
  • 3. Cabrera LY, Tesluk J, Chakraborti M, Matthews R, Illes J. Brain matters: From environmental ethics to environmental neuroethics. Environ Heal A Glob Access Sci Source. 2016;15(1):20. doi:10.1186/s12940-016-0114-3
  • 4. Shriver AJ, Cabrera LY, Illes J. The American Journal of Bioethics Environmental Neuroethics: Bridging Environmental Ethics and Mental Health. Environ Neuroethics Bridg Environ Ethics Ment Heal. 2017;17(9):26-27. doi:10.1080/15265161.2017.1353172
  • 5. Rudisill TM, Preamble K, Pilkerton C. The liberalization of fireworks legislation and its effects on firework-related injuries in West Virginia. BMC Public Health. 2020;20(1):137. doi:10.1186/s12889-020-8249-0

Disclosures: None


Neurobiological Genesis of Axiology: Considerations in Mexican Legal Norms

Liebet Alery Platas Villa (1,2), Martínez Bulle-Goyri Víctor Manuel (1), Arnaldo Platas Martínez (2)
1. National Autonomous University of México; 2. Veracruzana University

Can neuroscience support policy conclusions? Without a doubt: values come before standards. As Diego Gracia puts it: "...value is more elemental than rules, laws or principles of action". As a consequence, it is possible to consider a link between norms and neuroscience on the horizon of the neurobiological genesis of axiology. Let us examine this from the epistemic sphere of neuroscience: the neurosciences are offering new answers in areas where philosophical and humanistic traditions have dominated, and neuroscience, as a multidisciplinary field, enables us to address challenges that involve not only the individual but also his or her social dynamism. Law is a social phenomenon that is built through consensus and allows for the harmonization of the community. The law is a set of norms and principles that, although determined by the State through procedures that lead to social organization, also originates in and is oriented toward an axiological community scheme: values subordinate to social plurality, but consolidated, understood and emanating from an axiological brain. For John Rawls, justice is the result of a social consensus that allows a constitutional value to be broad and to consider all subjects living in that society. The attribution of value, or of the content of a value, makes it possible to establish a series of elements that enable its interpretation. In Mexico, the current legal system seeks to position itself as a non-inclusive positivism: although it is true that legal norms constitute a very important plexus in the development of society, it must also be taken into account that within them lie a series of principles of enormous axiological content that are decisive for the adequate application of the norms. It is not enough for the Mexican State to be the sole constructor of legal norms, since this would lead to a panorama of social injustice. It is necessary to consider the neurobiological origin of values, their contextual determinants and their transcendence toward the normative, allowing for the universality, progressiveness, interdependence and indivisibility of legal norms.


  • Gracia, D. (2011). La cuestión del valor. Discurso de ingreso en la Real Academia de Ciencias Morales y Políticas. Madrid. pp. 9-171.
  • Rommelfanger, K. S., Jeong, S.-J., Montojo, C., & Zirlinger, M. (2019). Neuroethics: Think global. Neuron, 363-364.
  • Verplaetse, J. (2009). The Moral Brain: Essays on the Evolutionary and Neuroscientific Aspects of Morality. London: Springer Dordrecht Heidelberg.
  • Ethics. DOI:10.1023/A:1009799221540
  • Changeux, J. P. et al. (2005). Neurobiology of Human Values. Springer. ISBN: 9783540262534.

Disclosures: None


Researchers' Perspectives on the Most Pressing Ethical Issues in Pediatric Deep Brain Stimulation

Lavina Kalwani (1), Kristin Kostick (2), Laura Torgerson (2), Katrina Muñoz (2), Saira Elizabeth Alex (3), Eric A. Storch (4), Jennifer Blumenthal-Barby (2), Gabriel Lázaro-Muñoz (2)
1. Department of Biosciences, Rice University; 2. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 3. Baylor College of Medicine; 4. Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine

Background: Deep brain stimulation (DBS) devices have become increasingly effective and accessible therapeutic options for adults with movement (e.g., Parkinson's disease, essential tremor, dystonia) and psychiatric (e.g., obsessive-compulsive disorder, depression) disorders (Schermer, 2011). Given the favorable outcomes observed in adult populations, DBS is increasingly viable for pediatric populations (pediatric DBS, or pDBS). Indeed, pDBS is used in youth with movement disorders including dystonia (Elkaim et al., 2018; Hale et al., 2020). However, pediatric patients present heightened challenges as a vulnerable and developmentally distinct population. Thus, it is essential to better understand and address the ethical concerns that may arise in the setting of pDBS. To our knowledge, no empirical neuroethics research has been reported on the pressing ethical issues of pDBS.

Approach: In-depth, semi-structured interviews were conducted with researchers (n=27) who work with pediatric dystonia patients. Researchers were asked about their perspectives on the most pressing ethical issues surrounding pDBS, as well as differences between pDBS for movement disorders as compared to psychiatric or seizure disorders. Thematic content analysis was performed to identify themes emerging from coded interview segments.

Results: Researchers were primarily concerned with four thematic categories in the application of pDBS. Over half (56%) cited the risks of the procedure to developing brains and uncertainties in expected outcomes. Just under half (48%) of researchers were concerned about effectively and appropriately educating and communicating with patients and families, particularly when developing patient/family understanding and setting expectations. An additional prominent (44%) ethical issue was the decision-making roles of clinicians, families, and patients, and balancing the degree of decision-making influence among these three stakeholders. In the context of a pediatric population, consent was indicated as a pressing ethical issue by 41% of researchers. Additional thematic categories included considering long-term effects (22%), weighing the risk/benefit ratio (22%), institutional barriers (22%), and issues of access, primarily surrounding cost and insurance (22%). Moreover, preliminary analysis suggests heightened concern for informed decision-making in psychiatric patients as compared to dystonia patients, and for pediatric dystonia patients as compared to seizure patients.

Conclusion: While clinical risks and uncertainties comprised the most prevalent thematic concern, the majority (74%) of researchers alluded to one or more of three informed decision-making components: consent, decision-making roles, and education/communication. There is a pressing need to evaluate decision-making approaches and clarify guidelines for assessing risk. Further research is needed to better understand researcher concerns as they relate to differing applications of pDBS.


  • Elkaim, L. M., De Vloo, P., Kalia, S. K., Lozano, A. M., & Ibrahim, G. M. (2018). Deep brain stimulation for childhood dystonia: current evidence and emerging practice. Expert review of neurotherapeutics, 18(10), 773-784.
  • Hale, A. T., Monsour, M. A., Rolston, J. D., Naftel, R. P., & Englot, D. J. (2020). Deep brain stimulation in pediatric dystonia: a systematic review. Neurosurgical review, 43(3), 873-880.
  • Schermer, M. (2011). Ethical issues in deep brain stimulation. Frontiers in integrative neuroscience, 5, 17.

Disclosures: None


Machine Learning and Neural Signals: Potential Problems of Low Algorithmic Interpretability

Asad Beck (1)
1. Department of Biology, University of Washington

The heightened sophistication of neurotechnological devices has been driven partly by the increased sophistication of machine learning algorithms (MLAs). However, more sophisticated MLAs tend to be less interpretable: it becomes more difficult to understand the relationship between input and output (1). This leads to certain MLAs functioning more akin to black boxes, obscuring the patterns learned by the algorithm. I argue that, when applied to neural signals, MLAs that function as black boxes may present considerable ethical issues that are more difficult to resolve, marking the need to require a minimum degree of interpretability. This argument will be explored in the context of bias, responsibility, and identity.

Bias refers to systematic errors made by algorithms that favor certain outcomes over others. MLAs must first be trained on collected data. If these data are primarily collected from men, for example, the resulting algorithm will be biased toward men. If such an MLA is then used to drive stimulation of the amygdala to improve symptoms of fear, it may make its decisions as if the individual had the brain of a man rather than a woman.
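The imbalance mechanism described above can be illustrated with a minimal synthetic sketch in Python. Everything here is hypothetical and invented for illustration only: the one-dimensional "neural signal" feature, the group means, the 95/5 training split, and the threshold "learner" are not drawn from the abstract or from any real rTMS or amygdala data.

```python
import numpy as np

# Hypothetical illustration: a decision rule fit on data drawn mostly
# from one group systematically errs on the underrepresented group.
rng = np.random.default_rng(0)

# Synthetic "neural signal" feature: group A baseline centered at 1.0,
# group B baseline at 2.0; the state of interest shifts either by +1.5.
def sample(center, state, n):
    return rng.normal(center + 1.5 * state, 0.3, n)

# Training set: 95% group A, 5% group B (imbalanced, as in the bias example).
x_train = np.concatenate([
    sample(1.0, 0, 950), sample(1.0, 1, 950),   # group A, both states
    sample(2.0, 0, 50),  sample(2.0, 1, 50),    # group B, both states
])
y_train = np.concatenate([np.zeros(950), np.ones(950),
                          np.zeros(50),  np.ones(50)])

# "Learned" rule: the single threshold that minimizes training error.
thresholds = np.linspace(x_train.min(), x_train.max(), 500)
errs = [np.mean((x_train > t) != y_train) for t in thresholds]
t_star = thresholds[int(np.argmin(errs))]

# Evaluate the rule separately on fresh samples from each group.
xa0, xa1 = sample(1.0, 0, 1000), sample(1.0, 1, 1000)
xb0, xb1 = sample(2.0, 0, 1000), sample(2.0, 1, 1000)
acc_a = np.mean(np.concatenate([xa0 <= t_star, xa1 > t_star]))
acc_b = np.mean(np.concatenate([xb0 <= t_star, xb1 > t_star]))
print(f"threshold={t_star:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

Because 95% of the training examples come from group A, the error-minimizing threshold sits between group A's two states, so group B's baseline is frequently misread as the state of interest: the algorithm behaves as if every individual has a group-A "brain."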

Responsibility refers to being the cause of and accountable for some outcome. Issues of responsibility regarding MLAs are often resolved using ideas of user liability (2). In the context of neural devices, however, user liability may be less clear. For instance, if an algorithm effectively learned to elicit fear via stimulation in response to certain neural signals, it would be difficult to discern whether the ensuing outcome was caused by the individual or the device.

Identity refers to one's sense of selfhood. One of the primary potential risks posed to identity by stimulation arises when an end-user's actions "do not flow from her intentions or beliefs but rather are the result of direct brain manipulation" (3). In the broader context of brain-computer interfaces (BCIs), the previously discussed issues of bias and responsibility may be related to identity. For instance, if a biased algorithm coerces the user to act in ways they otherwise would not, they may begin to feel alienated from themselves.

As more sophisticated MLAs are applied to BCIs, researchers will need to consider whether robust MLAs with lower interpretability are acceptable, or if only more interpretable MLAs should be used – at the potential cost of less sophistication. Because of the potential complications associated with black box algorithms, a minimum degree of interpretability may be necessary in this context.


  • 1. James G, Witten D, Hastie T, Tibshirani R. An introduction to statistical learning. Vol 112: Springer; 2013.
  • 2. Reed C, Kennedy E, Silva S. Responsibility, Autonomy and Accountability: legal liability for machine learning. Queen Mary School of Law Legal Studies Research Paper. 2016(243).
  • 3. Baylis F. “I am who I am”: on the perceived threats to personal identity from deep brain stimulation. Neuroethics. 2013;6(3):513-526.

Disclosures: None


Relationship Between Patients' Desired Control of their Deep Brain Stimulator and Subjective Global Control

Amanda R. Merner (1), Paul J. Ford (2,3), Cynthia S. Kubu (2,3)
1. Department of Psychological Sciences, Case Western Reserve University; 2. Center for Neurological Restoration, Cleveland Clinic; 3. Cleveland Clinic Lerner College of Medicine of Case Western Reserve University

Patients with Parkinson's disease (PD) who undergo deep brain stimulation (DBS) surgery relinquish some bodily control to their DBS team in order to gain greater control of their symptoms and restore their ability to engage in valued activities. Patients' perceptions of control may differ given their reliance on the DBS team for adjustments and the invasiveness of surgery. This topic has generated interest in the neuroethics literature, with some arguing that DBS results in a loss of control while others assert that DBS can enhance autonomy and control (1,2). The current study examined the relationship between patients' desired control of the deep brain stimulator and their perceptions of global life control.

A consecutive cohort of 52 patients with PD participated in a repeated-measures, prospective study over three assessment points (pre-surgery and post-surgery months 3 and 6). The study employed a mixed-methods approach including semi-structured interviews to assess patients' desire to control their DBS devices and their perceptions of global control at each time point. Visual analogue scales were used to quantify patients' perceptions of control, and qualitative interview data were coded to examine themes in patients' personal experiences related to changes in their perceptions of control. Due to technical difficulties, only a subset (N=39) of the qualitative data are available; however, this sample size was sufficient to achieve data saturation.

Participants reported significant increases in their perception of global control over time (Mean Change=1.42, SD=2.55) and significant declines in their desired control of the device (Mean Change=-2.41, SD=4.41). These changes were unrelated to improvements in motor symptoms. Improvements in global control were negatively correlated with decline in desired device control (r=-.31, p=0.038). Qualitative data further supported our quantitative findings, with patients citing trust in the surgical team, their expertise, and themes of shared control and open communication with the team as reasons for desiring less control. Patients reported increased global control beyond improved motor symptoms, emphasizing restored ability to self-regulate across multiple domains, and increased independence.

There are few empirical data documenting changes in patients' desire for device control throughout treatment and how device control may relate to patients' perceptions of global control. Given the growing interest in expanding use of neuromodulation devices, it is critical to understand how these different types of control fluctuate within patients during treatment, and the underlying factors contributing to these changes. Our data highlight distinctions in different types of control and have implications for the development of patient-controlled neurostimulation devices.


  • 1. Gilbert F, Goddard E, Viana J N M, Carter A, Horne M. I miss being me: Phenomenological effects of deep brain stimulation. American Journal of Bioethics. Neuroscience 2017; 8:2, 96-109.
  • 2. Glannon W. Neuromodulation, agency and autonomy. Brain Topography 2014; 27:46-54.

Disclosures: Amanda R. Merner: Reports no disclosures. Cynthia S. Kubu: Received grant funding from the National Institutes of Health during this study. Paul J. Ford: Received grant funding from the National Institutes of Health during this study. The study was funded by the National Institute of Neurological Disorders and Stroke, Award Number RC1NS068086. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Neurological Disorders and Stroke or the National Institutes of Health.


Measuring Mind-Body Beliefs as a Potential Cause of Psychosocial Side-Effects in DBS Patients: Lessons & Recommendations for a Research Program

Christopher Pham (1,2), Kate MacDuffie (1,2,3), Erika Versalovic (1,2), Sara Goering (1,2), Eran Klein (1,2,4)
1. Center for Sensorimotor Neural Engineering; 2. Department of Philosophy, University of Washington; 3. Treuman Katz Center for Pediatric Bioethics, Seattle Children’s Research Institute; 4. Oregon Health & Science University School of Medicine

Neuroethicists have long been aware that deep brain stimulation (DBS) can cause a variety of psychosocial changes. However, such changes are poorly understood (Schüpbach 2006, Gilbert 2018, de Haan 2017). A recent literature review (Gilbert 2018) reveals that many theoretical explanations for these changes have been offered, yet few of them have concrete empirical backing. Calls have been made for greater investigation of psychosocial side effects of DBS to better understand their true prevalence and etiology.

Many of these theoretical explanations are difficult to measure (de Haan 2017). As a result, one significant challenge in substantiating them is a deficiency in available research methodology. We took on this challenge by developing a research tool designed to study one hypothesis in particular. Mecacci & Haselager (2014) suggest that distress ensues when patients who are dualists (believing the mind is immaterial, distinct from the brain) undergo DBS and a subsequent conceptual shift toward materialism (where the mind and brain are identical), a result of personal realizations about the brain following experience with DBS and/or the physician's materialistic explanations. This distress purportedly explains psychosocial maladaptation to DBS. Our research tool assesses conceptions of mind (materialism or dualism) in DBS patients. We developed this tool by aggregating survey questions from the literature to form our own scale. The scale was validated using Exploratory Factor Analysis on data collected through Amazon's Mechanical Turk. Additionally, a draft of our research tool was pretested on patients undergoing DBS for Parkinson's disease. Pretesting on the target population is an essential step in developing research tools for the behavioral sciences (Boateng 2018).

While these are the first steps for developing a tool to measure the impacts of DBS on conceptions of mind, we think there are important lessons to be learned from the development process as a whole. First, research tool development ideally occurs in an interdisciplinary environment, where neuroscientists, clinicians, and ethicists work together. Second, tool development requires testing in clinical and non-clinical populations. And third, tool development may require multiple methodologies to triangulate complex issues (like conceptions of mind). While our near-term goal is to better understand psychosocial adaptation and changes in mind-body beliefs, our broader goal is to provide an example of a research process that could be used by neuroethicists to verify other difficult-to-study hypotheses explaining psychosocial changes caused by DBS. Wide adoption of similar efforts could help satisfy calls for greater substantiation of neuroethical theories.



  • [1] Schüpbach M, Gargiulo M, Welter ML, et al. Neurosurgery in Parkinson disease: a distressed mind in a repaired body?. Neurology. 2006;66(12):1811-1816. doi:10.1212/01.wnl.0000234880.51322.16
  • [2] Gilbert F, Viaña, JNM & Ineichen, C. Deflating the “DBS causes personality changes” bubble. Neuroethics. 2018. https://doi.org/10.1007/s12152-018-9373-8
  • [3] de Haan S, Rietveld E, Stokhof M, Denys D. Becoming more oneself? Changes in personality following DBS treatment for psychiatric disorders: Experiences of OCD patients and general considerations. PLoS ONE. 2017;12(4): e0175748. https://doi.org/10.1371/journal.pone.0175748
  • [4] Mecacci G & Haselager WFG. Stimulating the Self: The Influence of Conceptual Frameworks on Reactions to Deep Brain Stimulation. AJOB Neuroscience. 2014;5(4): 30-39. DOI: 10.1080/21507740.2014.951776
  • [5] Boateng, GO, Neilands TB, Frongillo EA, et al. Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer. Frontiers in public health. 2018;6(149). doi:10.3389/fpubh.2018.00149

Disclosures: None


Discerning Fact from (Science) Fiction: Undergraduate Perceptions of Neurotechnology

Adam Steiner (1), Elizabeth Tiegs (1), McKayla Kurtz (1), Serena Miller (1), Russell Tesmer (1)
1. Minnesota State University, Mankato

The consumption of media via streaming services and other means has drastically increased in the past year. Undergraduate students' media consumption shapes their understanding of the underlying science presented. Additionally, the neurotechnological information presented by these media sources often distorts reality, which may significantly interfere with students' willingness and ability to engage with neuroscience. In particular, students are more susceptible to misinformation in science-fiction media, demonstrating a need to differentiate scientific fact from fiction. Emerging neurotechnologies are becoming increasingly popular in science fiction. For example, the improper usage and understanding of electroencephalograms, as depicted by popular media, has the potential to influence determinations of guilt and innocence. The lack of initiatives and data regarding the effects of neurotechnology misinformation at a critical stage in student development risks the propagation of bad neuroscience. However, direct engagement of undergraduates shows that there is willingness to engage with neurotechnological issues. When educators select topics that matter more to undergraduates, engagement increases. This strongly indicates that to facilitate an understanding of neurotechnological facts, students must be engaged by educators.

Our preliminary data indicate that students who consume more science-fiction media express more confusion over whether the depicted uses of neurotechnology reflect actual views held by neuroscientists. They question their ability to distinguish fact from science fiction. Previous attempts at correcting media bias via ‘fake news inoculation' have shown success by guiding users through the various manipulated aspects of the news.

We hypothesize that the novel neurotechnologies depicted in science-fiction media obscure the reality of the neuroscience and its associated ethical issues, and that students who consume more of these media show a greater misunderstanding of neurotechnology. Our approach surveys undergraduate students to ascertain 1) their levels and types of media consumption; 2) their trust in the validity of neurotechnology presented in vignettes pulled from currently available media (can a polygraph prove innocence? can an EEG really read a mind?); 3) the sources of their information and conclusions; and 4) whether students are more likely to engage with neuroethical issues if they believe the depicted neurotechnologies are accurate. We will discuss our initial findings and seek to identify the particular areas of misinformation that most strongly influence students' beliefs about neurotechnology and neuroethics. These data will inform the creation of an educational module designed to inoculate students against the misleading claims presented in science-fiction media.


  • Barnett, M., Wagner, H., Gatling, A., Anderson, J., Houle, M., & Kafka, A. (2006). The impact of science-fiction film on student understanding of science. Journal of Science Education and Technology, 15(2), 179–191. http://doi.org/10.1007/s10956-006-9001-y
  • Illes, J., Moser, M. A., Mccormick, J. B., Racine, E., Blakeslee, S., Caplan, A., … Weiss, S. (2010). NeuroTalk: Improving the Communication of Neuroscience NIH Public Access. Nat Rev Neurosci, 11(1), 61. http://doi.org/10.1038/nrn2773
  • Abu-Odeh, D., Dziobek, D., Jimenez, N. T., Barbey, C., & Dubinsky, J. M. (2015). Active learning in a neuroethics course positively impacts moral judgment development in undergraduates. Journal of undergraduate neuroscience education : JUNE : a publication of FUN, Faculty for Undergraduate Neuroscience, 13(2), A110–A119
  • Morein-Zamir, S., & Sahakian, B. J. (2010). Neuroethics and public engagement training needed for neuroscientists. Trends in Cognitive Sciences, 14(2), 49–51. http://doi.org/10.1016/j.tics.2009.10.007
  • Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1–10. http://doi.org/10.1057/s41599-019-0279-9

Disclosures: None


Deep–Brain Stimulation for Alzheimer's Disease: Narratives, Stigmas, and Metrics for Success

Hannah Skye Martens (1), Timothy Brown (2)
1. University of Illinois Chicago; 2. University of Washington, Seattle

The recently completed ADvance trial tested the safety of deep-brain stimulation of the fornix (DBS-f) as a treatment for mid-stage Alzheimer's disease (AD) (Ponce et al., 2016). The ADvance II trial, which will establish the efficacy of DBS-f, is currently underway (Leoutsakos et al., 2018). DBS-f, however, comes with a number of potential moral implications: questions about whether informed consent is possible, the risk of causing adverse side-effects, and disputes over the goals of therapy. But while commentators have already remarked on these possible problems (Siegel et al., 2017), many of the cultural and interpersonal problems patients (and their caretakers) will face remain underexplored. It is not clear what role DBS-f will play in broader narratives of (and stigmas about) Alzheimer's, it is not clear what role it will play in caretaker-patient relationships, and, perhaps most crucially, it is not clear what role DBS-f will play in how people with AD perceive or trust themselves. In our presentation, we call for greater attention to the broader narratives about AD employed both by caretakers and people with AD in order to see whether DBS-f will worsen or alleviate the negative effects of these narratives. We argue that DBS-f is positioned to do the latter.

We will begin by describing existing narratives about Alzheimer's that dominate our cultural and biomedical perceptions of the disease: that is, how we often describe people with Alzheimer's in terms of their loss of personhood and through horror-styled narratives (Taylor, 2008). Using these conceptions of the disease, we will follow the trajectory of how interventions like DBS-f are meant to treat AD and raise concerns about how these narratives complicate what it means to consent to them, what the goals of treatment are, and how we measure treatment success. Then, we will draw on alternative (read: feminist, relational) views to give an account of how we ought to consider and relate to people with Alzheimer's, with special attention to the role of trust. Finally, we will explore the implications of our view for the use of neurotechnological interventions for Alzheimer's, like the DBS-f used in the ADvance trials. We will argue that the therapeutic aims of DBS-f participate in the aforementioned, potentially harmful, narratives.


  • Leoutsakos, Jeannie-Marie S., Haijuan Yan, William S. Anderson, Wael F. Asaad, Gordon Baltuch, Anna Burke, M. Mallar Chakravarty et al. "Deep brain stimulation targeting the fornix for mild Alzheimer dementia (the ADvance trial): a two year follow-up including results of delayed activation." Journal of Alzheimer's Disease 64, no. 2 (2018): 597-606.
  • Ponce, Francisco A., Wael F. Asaad, Kelly D. Foote, William S. Anderson, G. Rees Cosgrove, Gordon H. Baltuch, Kara Beasley et al. "Bilateral deep brain stimulation of the fornix for Alzheimer's disease: surgical safety in the ADvance trial." Journal of neurosurgery 125, no. 1 (2016): 75-84.
  • Siegel, Andrew M., Marna S. Barrett, and Mahendra T. Bhati. "Deep brain stimulation for Alzheimer’s disease: ethical challenges for clinical research." Journal of Alzheimer's Disease 56, no. 2 (2017): 429-439.
  • Taylor, Janelle S. "On recognition, caring, and dementia." Medical anthropology quarterly 22, no. 4 (2008): 313-335.

Disclosures: None


Gender Affirming Imaging: The Neuroethical Considerations of Corroborating Brain Imaging to Transgender Identity

Brianna Carolyn Sohl (1)
1. Wayne State University School of Medicine

Scientific and medical research is increasingly being directed toward understanding the lesbian, gay, bisexual, transgender, and queer (LGBTQ+) community. Within these efforts, there is increasing focus on understanding the differences between biological sex and gender identity. Emerging from these data are projects utilizing neuroimaging which specifically target the brain for a deeper understanding of the neuroanatomical and neurophysiological differences and similarities among cisgender and transgender persons. For instance, data suggest that volume differences exist between cisgender and transgender persons in the putamen and thalamus (1). In addition, others are using neuroimaging to track longitudinal changes in brain matter as a transgender person undergoes long-term hormone therapy such as testosterone or estrogen (2). One must ask, however: are the benefits of identifying brain pattern characteristics unique to transgender persons significant enough to outweigh the risk of inadvertently creating a so-called screening test that would validate or invalidate a person's gender identity?

Some argue that these data will inform clinicians about the effects of long-term hormone use on the brain and other organs (3), demonstrate the presence or absence of neural plasticity (4), or enable earlier identification of transgender identity (1). However, establishing a baseline could create several issues. It could yield a screening tool producing incidental findings that might be interpreted as a neuroanatomical or neurophysiological sex in opposition to the person's gender identity. This may then create a conflict in which the patient's identity is contrasted with what may be viewed as a more objective finding, potentially causing psychological distress and undermining the person's ability to determine their own gender identity. These ethical considerations must be taken into account as neuroimaging progresses and the LGBTQ+ community becomes a greater focus of neuroscientific research.


  • 1. Baldinger-Melich P, Urquijo Castro MF, Seiger R, et al. Sex Matters: A Multivariate Pattern Analysis of Sex- and Gender-Related Neuroanatomical Differences in Cis- and Transgender Individuals Using Structural Magnetic Resonance Imaging. Cereb Cortex. 2020;30(3):1345-1356. doi:10.1093/cercor/bhz170
  • 2. Spizzirri G, Duran FLS, Chaim-Avancini TM, et al. Grey and white matter volumes either in treatment-naïve or hormone-treated transgender women: A voxel-based morphometry study. Sci Rep. 2018;8(1):736. doi:10.1038/s41598-017-17563-z
  • 3. Testosterone therapy for transgender men- ClinicalKey. Accessed July 10, 2020. https://www-clinicalkey-com.proxy.lib.wayne.edu/#!/content/playContent/1-s2.0-S221385871600036X?returnurl=null&referrer=null
  • 4. Kranz GS, Hahn A, Kaufmann U, et al. Effects of testosterone treatment on hypothalamic neuroplasticity in female-to-male transgender individuals. Brain Struct Funct. 2018;223(1):321-328. doi:10.1007/s00429-017-1494-z

Disclosures: None


Legal Approach to Neuroscience, Neuroethics, and Society: Focusing on Memory and Law

Juhee Eom (1)
1. Yonsei University

Brain science, neuroscience, and artificial intelligence are rapidly evolving. Brain imaging techniques and invasive and non-invasive brain stimulation are used extensively not only in the diagnosis, treatment, and care of diseases such as Parkinson's disease, depression, dementia, and memory dysfunction, but also in the enhancement of human cognition, as well as in legal regimes such as criminal liability. For the first time in Korea, a research team led by Professor Chun Kee Chung of the Department of Neurosurgery, Seoul National University Hospital, published an article in January 2020 in the international journal Brain Stimulation demonstrating the relationship between the hippocampus and memory function through direct electrical stimulation. These findings raise the need to examine and review the ethical, legal, and social implications of the possibility of intervening in and controlling a person's memory, and of neuroenhancement.

Therefore, the purpose of this presentation is to assess the possibility of ethical, legal, and social acceptance in line with developments in neuroscience and to investigate the legal implications of neuroscientific development in Korea. By examining the framework of fundamental rights regarding the meaning, function, and role of human memory, including the right to freedom within the Korean legal regime, this presentation determines the normative basis on which neuroscientific research and treatment of brain function can be regarded not as a risk to society but as subject to legal safeguards. The legal review specifically covers the protection of human research subjects, the relationship between memory and law, and the scope of rights regarding memory. Finally, points to consider in discussing future legal amendments are presented.


  • None

Disclosures: None


‘Can I Trust the Device in My Head?' Closed-loop DBS and Navigating Threats to Agency

Timothy Emmanuel Brown (1)
1. University of Washington

Deep-brain stimulation (DBS) has become an effective means of treating a variety of neurological conditions, and an investigational treatment for psychiatric conditions. Most DBS systems are open-loop, applying constant stimulation over the lifetime of the treatment. The next generation of DBS systems, however, is closed-loop: these systems record the user's neural activity in real time and adjust stimulation to fit the user's needs or demands (Arlotti et al. 2018; Houston et al. 2019). However, depending on what neural activity is recorded, what features are extracted from those recordings, and how those features are used to modulate stimulation parameters, different modes of controlling DBS will likely produce different experiences for their users, with differing practical and moral outcomes.
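The closed-loop cycle described here (sense neural activity, extract a feature, modulate stimulation) can be sketched in code. This is an illustrative sketch only, not any device's implementation: the beta-band biomarker, the naive DFT, the threshold controller, and all parameter values are hypothetical stand-ins for the proprietary sensing and control components the abstract discusses.

```python
import math

def beta_band_power(samples, fs=250):
    """Crude proxy for 13-30 Hz 'beta' band power via a naive DFT over one
    window of neural samples. (Illustrative; real devices use dedicated DSP.)"""
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if 13 <= freq <= 30:  # only accumulate bins inside the beta band
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def adjust_stimulation(current_ma, feature, threshold, step=0.1, lo=0.0, hi=3.0):
    """Hypothetical threshold controller: raise stimulation amplitude when the
    biomarker exceeds the threshold, lower it otherwise, clamped to a safe range."""
    if feature > threshold:
        current_ma += step
    else:
        current_ma -= step
    return min(hi, max(lo, current_ma))

# One control cycle: sense -> extract feature -> modulate stimulation.
window = [math.sin(2 * math.pi * 20 * i / 250) for i in range(250)]  # fake 20 Hz signal
amplitude = adjust_stimulation(current_ma=1.0,
                               feature=beta_band_power(window),
                               threshold=10.0)
```

Each of the three stages (the sensing window, the feature extractor, the controller) corresponds to one of the components whose design choices the abstract argues carry moral weight.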

In this presentation, I suggest that many of the most pressing moral issues with closed-loop DBS (CL-DBS) concern the design and use of specific components of the closed-loop system: its neural sensing platform, its feature extraction algorithms, and its stimulation capabilities. I maintain that the design of each of these components can change the user's ability to make decisions in ways that reflect the assumptions, biases, and decisions made by a variety of stakeholders and institutions in the design process. The way each component is designed, I argue, has deep implications for whether users can trust these systems to support their agency. I close with a series of recommendations for investigators, device manufacturers, and medical professionals to follow as they design and implement trustworthy CL-DBS systems.


  • Arlotti, Mattia, Sara Marceglia, Guglielmo Foffani, Jens Volkmann, Andres M Lozano, Elena Moro, Filippo Cogiamanian, et al. 2018. “Eight-hours adaptive deep brain stimulation in patients with Parkinson disease.” Neurology 90 (11): e971–e976.
  • Brown, Timothy Emmanuel. n.d. “Building Intricate Partnerships with Neurotechnology: Deep-Brain Stimulation and Relational Agency.” International Journal of Feminist Approaches to Bioethics.
  • Houston, Brady, Margaret Thompson, Andrew Ko, and Howard Chizeck. 2019. “A machine-learning approach to volitional control of a closed-loop deep brain stimulation system.” Journal of Neural Engineering 16 (1): 016004.

Disclosures: None


Teaching Neuroethics in a Time of Crisis

Ann E. Fink (1), Gillian Hue (2)
1. Lehigh University; 2. Emory University

Teaching neuroethics provides special value in times of crisis. The COVID-19 pandemic presents neuroscientists with responsibilities requiring explicit consideration of individual and social values. In December 2019, COVID-19 caused severe illness and death in China before spreading across the world. The WHO urged countries to "take urgent and aggressive action"[1] and many entered quarantine arrangements to reduce disease transmission. In the USA, public buildings shuttered; employers and educational institutions made dramatic shifts to remote work and learning. Reasonable accommodations and policies, previously considered impossible or unfair when sought by disabled people[2], were immediately and seemingly effortlessly implemented. This pandemic has starkly illuminated a capacity, yet unwillingness, to respond nimbly to accessibility demands, in addition to the failure of eviscerated American public health systems and the nation's patchwork for-profit, predatory health care structure. Long-standing economic and health inequities, layered onto structural racism, resulted in disproportionate illness and death in marginalized communities[3,4]. This health impact of structural racism, accompanied by the visibility of racially motivated murders and police brutality against Black Americans, sparked a revitalized movement for racial justice. This movement demands the restructuring of systems of living, working, and care, not only to repudiate white supremacy but also to promote the flourishing of historically oppressed populations. Neuroethics education at this time is important. First, the historical misuse of scientific authority has paved the way for disproportionate illness in marginalized communities. A neuroethics perspective recognizes the deeply social nature of scientific inquiry and encourages critical analysis[5]. 
Students ask questions about power and participation in the sciences, who sets research priorities, and how distinct cultural priorities may allow different lenses on neuroscience and health. Students learn to examine multiple perspectives on the interpretation of scientific data, and question how neuroscientific knowledge and technologies may be used to help or harm. Such crucial skills allow students and instructors to explore the chains of causality and responsibility leading to a public health crisis and the politicized and nuanced responses to said crises. They may consider how a pandemic inevitably occurs via the intentional strengths, weaknesses and inequities built into a society. Finally, an ethics view allows for novel conceptualizations of accountability and justice. In this presentation, we discuss how neuroethics relates to the multiple concurrent crises unfolding in the USA, how classroom conversations may go in useful directions, and how critical pedagogical approaches yield insight and a basis for transformative action.


  • 1. World Health Organization. (2020). Timeline of WHO's response to COVID-19. World Health Organization. https://www.who.int/news-room/detail/29-06-2020-covidtimeline.
  • 2. Pak, C. (2020, April 30). Disability, Visibility and the COVID-19 Crisis. Medical Humanities. https://blogs.bmj.com/medical-humanities/2020/04/30/disability-visibility-and-the-covid-19-crisis/.
  • 3. Creary, Melissa and Fleming, Paul J. (2020, May 2). Opinion: What's behind the racial disparity in COVID-19 cases. Opinion | What's behind the racial disparity in Michigan coronavirus cases | Bridge Magazine. https://www.bridgemi.com/guest-commentary/opinion-whats-behind-racial-disparity-covid-19-cases.
  • 4. Chowkwanyun, M., & Reed, A. L. (2020). Racial Health Disparities and Covid-19 — Caution and Context. New England Journal of Medicine. doi:10.1056/NEJMp2012910
  • 5. Fink, A.E. (2020). Graphic Neuroethics: A Comics-Making Curriculum (Part I of II). The Neuroethics Blog. Retrieved on July 10, 2020, from http://www.theneuroethicsblog.com/2020/04/graphic-neuroethics-comics-making.html

Disclosures: None


What Do We Owe Participants of Neural Research? Tracing the Discourse Around Post Trial Obligations

Ishan Dasgupta (1)
1. University of Washington

Attached separately (special formatting)


  • Fins JJ. Deep brain stimulation, deontology and duty: the moral obligation of non-abandonment at the neural interface. Journal of neural engineering. 2009 Oct;6(5):050201.
  • Grady C. The challenge of assuring continued post-trial access to beneficial treatment. Yale J. Health Pol’y L. & Ethics. 2005;5:425.
  • Lázaro-Muñoz G, Yoshor D, Beauchamp MS, Goodman WK, McGuire AL. Continued access to investigational brain implants. Nature Reviews Neuroscience. 2018 Jun;19(6):317.
  • LeGreco M, Tracy SJ. Discourse tracing as qualitative practice. Qualitative Inquiry. 2009 Nov;15(9):1516-43.
  • Sofaer N, Strech D. Reasons why post-trial access to trial drugs should, or need not be ensured to research participants: a systematic review. Public Health Ethics. 2011 Jul 1;4(2):160-84.

Disclosures: None


Medical Artificial Intelligence: Futuristic Prospects and Ethical Responsibilities

Vishruth Nagam (1)
1. Vista del Lago High School

With the advent of artificial intelligence (AI), its potential applications in medicine and healthcare are increasingly a reality. Several studies have suggested that, in diagnosing neurological conditions, AI is highly sensitive to small pattern changes and can achieve sensitivity, specificity, and positive and negative predictive values upwards of 70% [1,2]. AI systems also do not need to take breaks and can greatly boost the efficiency of diagnosis; furthermore, if utilized in combination with clinicians, as a previous study has shown, AI systems are capable of improving behavioral-health patient outcomes by more than 30% [3]. However, the feasibility of AI systems may be compromised by their need for extensive validation and access to large volumes of well-selected data [1]. AI manufacturers must therefore be held responsible for developing and programming automated data-collection methods into AI networks, which can eliminate the use of human resources to constantly update AI databases and ensure that AI systems can quickly adapt to changing medical environments. In addition, the manufacturing costs of AI machines can present significant financial burdens for hospitals and medical practices. AI developers must be wary of this and work to reduce manufacturing costs as much as possible; if necessary, governments at the local, state, and federal levels must also provide monetary assistance for medical facilities seeking to implement AI systems. For medical AI to be implemented at a large scale, however, AI systems and products must also be patentable. Within the United States, patent law demands adherence to subject-matter eligibility, a test of whether the product targets a patent-ineligible concept (abstract thought, laws of nature, etc.) and is different enough from existing patented products to be considered original [4]. 
Medical AI does not pass the second step because its diagnostic method of relating known patterns to new cases resembles that of physicians [4]. AI manufacturers must therefore differentiate AI products from conventional clinical diagnosis methods by, for example, detailing the computing techniques used in AI processing. Federal patent offices also have a moral responsibility to patent AI products for medical implementation, as AI can help solve many public health issues that involve governments and the legal system. AI's scale and reach can be greatly expanded in the future if both AI companies and governments adopt the proposed solutions surrounding feasibility and patentability.
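The diagnostic-performance figures cited above (sensitivity, specificity, and positive and negative predictive values upwards of 70%) are all derived from a classifier's confusion matrix. A minimal sketch of these standard formulas, using purely hypothetical counts rather than any data from the cited studies:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary diagnostic classifier."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate among the diseased
        "specificity": tn / (tn + fp),  # true-negative rate among the healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical screening results: 100 diseased patients (80 detected,
# 20 missed) and 100 healthy patients (90 cleared, 10 false alarms).
m = diagnostic_metrics(tp=80, fp=10, tn=90, fn=20)
# m["sensitivity"] == 0.8 and m["specificity"] == 0.9; all four values
# exceed the 0.70 range described in the abstract.
```

Note that the predictive values, unlike sensitivity and specificity, depend on disease prevalence in the tested population, which is one reason extensive validation on well-selected data matters.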


  • [1] Bruffaerts, Rose. "Machine learning in neurology: what neurologists can learn from machines and vice versa." Journal of Neurology, vol. 265, no. 11, 2018, p. 2745+. Academic OneFile.
  • [2] Zhang, Chenzi, et al. "Tu1217 The Use of Convolutional Neural Artificial Intelligence Network to Aid the Diagnosis and Classification of Early Esophageal Neoplasia. A Feasibility Study." Gastrointestinal Endoscopy, vol. 85, no. 5, 2017, pp. AB581-AB582. Academic OneFile.
  • [3] Reddy, Krishna. “Advantages and Disadvantages of Artificial Intelligence.” WiseStep, WiseStep.
  • [4] Tull, Susan Y. "PATENTING THE FUTURE OF MEDICINE: The Intersection of Patent Law and Artificial Intelligence in Medicine." Landslide, Jan.-Feb. 2018. Academic OneFile.

Disclosures: None


Potential End-User Perspectives of a Brain-Machine Interface for Epilepsy

Patrick Haylock (1), Narelle Warren (1), John Gardner (1), Adrian Carter (1)
1. Monash University

Epilepsy remains a challenging condition to treat, and the last 30 years of innovations in antiepileptic drugs have so far failed to improve outcomes for newly diagnosed epilepsy (Chen, Brodie, Liew & Kwan, 2018). Brain-machine interfaces (BMIs) offer a new avenue for treating drug-resistant epilepsy (Kassiri et al., 2017). BMIs raise challenging questions around important ethical considerations such as informed consent, autonomy, and identity (Klein & Nam, 2016); however, there is no research to date on the perspectives of end-users with epilepsy regarding these devices and their ethical concerns.

We conducted focus group interviews with seven women with epilepsy (mean age = 36 years; mean time with condition = 14 years) to identify their expectations, views, and concerns regarding a new BMI developed for the treatment of epilepsy. Interviews were transcribed and analysed using thematic analysis (Braun & Clarke, 2006). We found that participants desired to ‘trust' themselves, having lost trust in their ability to perform specific tasks, such as memorising information, living independently, or driving. As such, recipients' attitudes towards themselves are likely to be mediated by any effective BMI, and these attitudes should be considered when planning device implantation and removal. Participants were concerned with the aesthetics of the device and were often unwilling to wear a device that might attract stigma, force disclosure of their condition, or affect the ways in which their appearance (such as hairstyles) could be managed. This demonstrates that potential end-users may forgo otherwise desirable benefits (e.g. seizure control) when their ability to freely choose how they appear to the world is restricted. Participants were also interested in the opinions of their loved ones in deciding whether to use the device. While some felt that the eventual decision was theirs alone, participants acknowledged that epilepsy affected others in their lives (‘It's not just about me, it's about them') and that loved ones would need to be involved in any decision-making process.

This pilot study shows that potential recipients of a BMI designed to treat epilepsy have hopes and concerns that are directly relevant to ethical concerns around identity, autonomy, and informed consent. The study further demonstrates that including participants at the early stages of the development of BMIs is important for ensuring that the devices and implantation processes are suitable for the eventual end-users. Such measures may reduce the need for recipients to compromise core concerns (e.g. aesthetics) to obtain highly valued outcomes.



  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research in psychology, 3(2), 77-101. doi: 10.1191/1478088706qp063oa
  • Chen, Z., Brodie, M. J., Liew, D., & Kwan, P. (2018). Treatment outcomes in patients with newly diagnosed epilepsy treated with established and new antiepileptic drugs: a 30-year longitudinal cohort study. JAMA neurology, 75(3), 279-286. doi: 10.1001/jamaneurol.2017.3949
  • Kassiri, H., Tonekaboni, S., Salam, M. T., Soltani, N., Abdelhalim, K., Velazquez, J. L. P., & Genov, R. (2017). Closed-loop neurostimulators: A survey and a seizure-predicting design example for intractable epilepsy treatment. IEEE transactions on biomedical circuits and systems, 11(5), 1026-1040. doi: 10.1109/TBCAS.2017.2694638
  • Klein, E., & Nam, C. S. (2016). Neuroethics and brain-computer interfaces (BCIs). Brain-Computer Interfaces, 3(3), 123-125. doi: 10.1080/2326263X.2016.1210989

Disclosures: None


Our Digital Future: The Criteria of Immersion for Virtual Reality Experiences in Neural Technologies

Andreas Schönau (1,2), Tony Franzky (3), Philipp Kellmeyer (4,5)
1. Department of Philosophy, University of Washington; 2. Center for Neurotechnology, University of Washington; 3. Institute of Media Cultural Studies, University of Freiburg; 4. Neuroethics and AI Ethics Lab, Department of Neurosurgery, University Medical Center Freiburg; 5. Responsible AI Research Group, Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg

The development of virtual reality (VR) has grown vastly within the last 10 years and offers promising prospects for increasing the efficacy of testing, training, and rehabilitating agentive capacities for people experiencing conditions that compromise their agency. Early studies prototyped virtual experiences that allow end users to non-invasively exert control over a movement actuator depicted in the virtual environment, e.g. by using an EEG-based BCI-VR system to facilitate upper-limb rehabilitation for post-stroke patients (Achanccaray et al. 2017, Vourvopoulos et al. 2019). Despite such promising usage scenarios, it is still debated what criteria are necessary to create a virtual environment that is immersive enough to generate an optimal user experience (Kellmeyer 2018).

Building upon the work of Slater (2009), we offer conceptual input to this open question by presenting three criteria that must be met to generate a truly immersive experience in VR: plausibility illusion, presence illusion, and agency illusion. 1. Plausibility illusion refers to the persuasiveness of the virtual environment, e.g. embedding the user in a scenario that makes sense contextually. 2. Presence illusion denotes the end user's embodiment in a virtual character that generates the impression of actually being there, e.g. allowing the user to visually perceive a virtual body with arms and legs. 3. Agency illusion allows the user to control a movement actuator in the virtual environment through an input device, e.g. giving the user control over their virtual arm to manipulate objects.

After establishing these criteria as conceptual markers, we reviewed recent VR publications and found that most studies meet two of the three criteria but rarely all three (see, among others, Achanccaray et al. 2017, Abbasi-Asl et al. 2019). While, in most cases, those virtual experiences were sufficient for meeting the individual study goal, our conceptual distinction helps to identify neglected criteria in study design and offers a theoretical framework for assessing the expected immersiveness of an experimental setup. We believe that meeting all three criteria within a single virtual experience can significantly improve the immersiveness experienced by end users, which might mitigate unintended side effects of certain VR paradigms and help make VR-based interventions in neurology and psychiatry ethically more viable. We hope that this preliminary conceptual work grounds the digital path for future studies that aim to facilitate VR systems within the neurotechnological space.



  • Abbasi-Asl, Reza/Keshavarzi, Mohammad/Chan, Dorian Yao (2019): Brain-Computer Interface in Virtual Reality. In: 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER). San Francisco, CA, USA: IEEE. 1220–1224.
  • Achanccaray, David/Acuna, Kevin/Carranza, Erick/Andreu-Perez, Javier (2017): A virtual reality and brain computer interface system for upper limb rehabilitation of post stroke patients. In: 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). Naples: IEEE. 1–5.
  • Kellmeyer, Philipp (2018): Neurophilosophical and Ethical Aspects of Virtual Reality Therapy in Neurology and Psychiatry. In: Cambridge Quarterly of Healthcare Ethics 27, 610–627.
  • Slater, Mel (2009): Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. In: Philosophical Transactions of the Royal Society B: Biological Sciences 364, 3549–3557.
  • Tham, Jason/Duin, Ann Hill/Gee, Laura/Ernst, Nathan/Abdelqader, Bilal/McGrath, Megan (2018): Understanding Virtual Reality: Presence, Embodiment, and Professional Practice. In: IEEE Transactions on Professional Communication 61, 178–195.
  • Vourvopoulos, Athanasios/Jorge, Carolina/Abreu, Rodolfo/Figueiredo, Patrícia/Fernandes, Jean-Claude/Bermúdez i Badia, Sergi (2019): Efficacy and Brain Imaging Correlates of an Immersive Motor Imagery BCI-Driven VR System for Upper Limb Motor Rehabilitation: A Clinical Case Report. In: Frontiers in Human Neuroscience 13, 244.

Disclosures: None


Public Perceptions of Potential Actors and Adverse Effects from Brain Data Collection

Allison Kunstler (1), Karan Desai (1), Shikhar S. Gupta (1), Emma Ritter (2), William L. D. Krenzer (1), Nita A. Farahany (1,2)
1. Duke University; 2. Duke University School of Law

Scholars have long discussed the public's concerns regarding the collection and release of personal (e.g., social security number; SSN) and brain information (e.g., thoughts in one's mind; Farah, 2011; Hallinan et al., 2014; Ienca, 2015; Moore, 2017). They generally agree that personal information is vital to privacy. However, neurotechnology increasingly has the ability to convert brain data into interpretable information, introducing a new threat to personal privacy (Alpert, 2007; Bonaci, Calo, & Chizeck, 2014). Because of these ever-evolving capabilities of neurotechnology, it is vital to gain a better understanding of people's beliefs about the harms surrounding the accessibility of their brain information. We previously conducted surveys to understand how participants compared the privacy of their brain information to other types of sensitive data (PEW, 2014). Surprisingly, participants ranked their SSN as more sensitive than their neurological information and activity. Participants may have rated brain information lower due to an inability to identify adverse events that could occur from mishandled brain data. To test this hypothesis, we developed a new survey examining how participants evaluated privacy threats to different types of information. This survey asked participants (N = 109; 50 female, 1 other; Mage = 35.60, SD = 10.17) to brainstorm malicious actors and possible adverse consequences related to a breach of different types of sensitive information. We found that while participants consistently cited their employer and employment trouble as the leading malicious actor and potential adverse consequence from a breach of their brain information (each accounting for approximately 25% of responses), the next leading response in both categories was that there was "no one" who would misuse their brain information and "nothing" that could happen from access to it.
These findings suggest that while people are concerned about the collection of their brain information, they are inconsistent in identifying what the potential harms could be. Thus, these results, while promising, demonstrate a need for greater understanding of the neuroethical implications of brain information collection. This understanding could better guide research, consumer education, and policymaking to mitigate any potential harms that could come from obtaining and decoding shared brain data.


  • Alpert, S. (2007) Brain Privacy: How Can We Protect It? The American Journal of Bioethics, 7:9, 70-73, DOI: 10.1080/15265160701518862
  • Bonaci, T., Calo, R., & Chizeck, H. J. (2014). App stores for the brain: Privacy & security in brain-computer interfaces. IEEE.
  • Farah, M. J. (2011). Neuroscience and neuroethics in the 21st century.
  • PEW Research Center (2014). Public perceptions of privacy and security in the post-Snowden era. Retrieved from http://www.pewinternet.org/2014/11/12/public-privacy-perceptions/
  • Yuste, R., Goering, S., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., ... & Kellmeyer, P. (2017). Four ethical priorities for neurotechnologies and AI. Nature News, 551(7679), 159.

Disclosures: None


Emerging Neuralink Brain Machine Interface Technology: An Oversight Proposal to Address the Ethical, Legal, and Social Implications

Ali Said (1), Luis Sanchez (1), Michael Olek (1)
1. Touro University Nevada College of Osteopathic Medicine

Humanity is heading towards a new era with the advancements being made in artificial intelligence (AI) and neuroscience. Among these advances is the development of brain machine interface (BMI) technology, which holds the potential to form a bridge between AI and the human brain, forever altering the distinction between man and machine. A forerunner in the endeavor to create BMI technology is Neuralink. Its innovations include an invasive brain implant that functions as a BMI, consisting of an array of polymer probes with 3,072 electrode channels, along with a neurosurgical robot that can insert the polymer probes (Musk & Neuralink, 2019). Neuralink's BMI technology has been tested on various animal models with some success, and the company has applied for Food and Drug Administration (FDA) approval with the intent to begin testing the technology on human volunteers in 2020 (Rahman, 2019; Novet, 2019). If successful in the anticipated clinical trials, Neuralink's BMI has the potential to treat brain lesions and disorders (Musk & Neuralink, 2019). However, it is speculated that such technology would also provide a direct avenue for the enhancement of a wide array of a user's cognitive abilities. A number of ethical, legal, and social implications (ELSI) arise regarding Neuralink's BMI technology, particularly due to its anticipated human trials and access to brain data. Although our focus is primarily on Neuralink's BMI, the implications raised extend to other BMI technologies.

If the Neuralink BMI has the potential of transitioning into a medical and cognitive enhancing device (Dadia & Greenbaum, 2019), then the technology must be overseen with diligence and care by the society it may be impacting. It is therefore of utmost importance that policy is established in preparation for this. Since the ELSI of medical devices fall outside of the FDA's oversight, we call for a BMI technology oversight committee to be added as an extension of the Presidential Commission for the Study of Bioethical Issues (Executive Order No. 13521, 2009). This committee would prepare for, and address through policy, the ELSI that may arise as BMI technologies advance.


  • Dadia, T., & Greenbaum, D. (2019). Neuralink: The Ethical ‘Rithmatic of Reading and Writing to the Brain. AJOB Neuroscience, 10(4), 187-189. doi:10.1080/21507740.2019.1665129
  • Exec. Order No. 13521, 3 C.F.R. 62671-62673 (2009).
  • Musk, E., & Neuralink. (2019). An Integrated Brain-Machine Interface Platform With Thousands of Channels. Journal of Medical Internet Research, 21(10). doi:10.2196/16194
  • Novet, J. (2019, July 18). Elon Musk's brain-machine start-up plans human trials in 2020. Retrieved August 31, 2020, from https://www.cnbc.com/2019/07/17/elon-musk-brain-machine-startup-neuralink-plans-human-trials-in-2020.html
  • Rahman, A. (2019, July 19). Neuralink is awaiting FDA approval for its brain technology, says Musk. Retrieved August 31, 2020, from https://www.nsmedicaldevices.com/news/neuralink-brain-controlling-elon-musk/

Disclosures: None


The Need for Diversity in iPSC Repositories of Rare Genetic Neurological Disorders

Meera E. Modi (1,2), Brianna R. Golewski (1), Elizabeth D. Buttermore (1,2), Pin-Fang Chen (1,2), Maya Chopra (1), Kira A. Dies (1), Mustafa Sahin (1,2)
1. Translational Neuroscience Center, Boston Children’s Hospital; 2. Department of Neurology, Harvard Medical School

In light of the call to anti-racist action, we have taken up the question of race (as socially constructed) in our induced pluripotent stem cell (iPSC) repository featuring a collection of cell lines derived from patients with neurological disorders. iPSCs are a critical resource for the development of novel therapeutic strategies with translational validity. As such, the need for genetic diversity in cell lines for therapeutic development is critical to ensure that mechanism-based strategies and transplant-based regenerative strategies will work across the target population (1,2). Retrospective analysis of our samples indicates that without active recruitment, there is incomplete representation of patients across ethnic backgrounds. However, the philosophy of post-racial medicine reminds us that the genetic variation between individuals across ethnic backgrounds is less than the variation within local populations (3). Consequently, the value in actively working to increase racial and ethnic diversity in our iPSC repository for therapeutic development purposes is unclear. There is an equal likelihood of occurrence across ethnic groups of de novo mutations that give rise to many of the disorders in our repository, including tuberous sclerosis complex (TSC; 4). The samples in the repository, though, do not recapitulate the ethnic diversity of the broader community. Of the 11 samples that have been collected from children with TSC, only 1 is from a patient of color (POC), while in a similar collection of 39 samples with neuropsychiatric pathologies, 10 are from POCs. The difference in representation of samples between a genetically defined neurological disorder and a symptomatically defined psychiatric disorder may reflect differences in clinical practice, as patients are recruited to the repository through medical clinics.
As the goal of the repository is to curate a collection of lines that represent the diversity of mutations within single genes that give rise to different phenotypic manifestations, recruitment based on ethnic diversity is secondary to recruitment based on pathogenic diversity. However, as the pathogenic mutations arise uniformly across ethnic groups, recruitment based on pathogenic variation should de facto result in representation that reflects the ethnic diversity of the community. The failure of our samples to reflect this diversity could be explained by a number of factors including the failure of diagnosis of rare genetic disorders in POCs, failures in clinical follow-up or a failure of inclusion in biomedical research. In any case, the limited diversity of the iPSC repository is an indicator of broader disparities in the medical care of POCs.


  • 1. Chang, EA, Tomov, ML, Suhr, ST, Luo, J, Olmsted, ZT, Paluh, JL & Cibelli, J. (2015) Derivation of Ethnically Diverse Human Induced Pluripotent Stem Cell Lines. Scientific Reports 5:15234. DOI: 10.1038/srep15234.
  • 2. Ghaffari LT, Starr A, Nelson AT and Sattler R (2018) Representing Diversity in the Dish: Using Patient-Derived in Vitro Models to Recreate the Heterogeneity of Neurological Disease. Front. Neurosci. 12:56. doi: 10.3389/fnins.2018.00056
  • 3. Perez-Rodriguez, J & de la Fuente, A (2017) Now is the Time for a Postracial Medicine: Biomedical Research, the National Institutes of Health, and the Perpetuation of Scientific Racism, The American Journal of Bioethics, 17:9, 36-47.
  • 4. Northrup, H, Krueger, DA (2013) Tuberous Sclerosis Complex Diagnostic Criteria Update: Recommendations of the 2012 International Tuberous Sclerosis Complex Consensus Conference. Pediatr Neurol. 49(4): 243–254. doi:10.1016/j.pediatrneurol.2013.08.001.

Disclosures: None


Researchers’ Views on Perceived Risks of Device Removal Following Brain Implant Research

Demetrio Sierra-Mercado (1,2), Peter Zuk (1), Kristin Kostick (1), Laura Torgerson (1), Rebecca Hsu (1), Jill Oliver Robinson (1), Katrina A. Muñoz (1), Clarissa Sanchez (1), Lavina Kalwani (3), Simon Outram (4), Barbara Koenig (4), Stacey Pereira (1), Amy McGuire (1), Gabriel Lázaro-Muñoz (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 2. Department of Anatomy & Neurobiology, University of Puerto Rico School of Medicine; 3. Department of Biosciences, Rice University; 4. Program in Bioethics, University of California, San Francisco.

Background: Clinical research with brain implant devices such as adaptive deep brain stimulation (aDBS) has led to ethical discussions about device removal, including who ought to cover the financial costs of removal upon study conclusion (Sierra-Mercado & Zuk et al., 2019). A related question about device removal is how to assess the risks involved, including physical risks such as hemorrhage and infection (Patel et al., 2015, Chen et al., 2017).

Approach: Using in-depth, semi-structured interviews, we examined aDBS researcher (n=23) perspectives regarding post-trial device removal.

Results: Researchers largely believed that the physical risks of removing the neural components of the device outweigh the potential benefits of removal, citing the broad risks associated with neurosurgery. For participants who do not receive the desired level of benefit, leaving the neural components of the device implanted but deactivated is typically recommended over removal.

However, some researchers said that this strategy carries opportunity costs, such as potentially preventing participants from undergoing MRI, and that leaving a device implanted but deactivated may itself involve physical risks of device erosion, infection, or other unknown long-term risks.

Other researchers pointed to the potential risk of continued device presence influencing psychological distress (described as the device "bothering" the participant). Thus, in cases of psychological distress, a few researchers suggested that removal count as medically indicated rather than as elective.

Finally, researchers highlighted the financial risks posed by medically necessary or elective removal of aDBS and similar devices. Some research grants cover the cost of removal, at times even in the case of elective removal. But researchers said that this is often only during the grant period, after which researchers have fewer resources with which to facilitate device removal. Combined with uncertainty about whether insurance will pay for maintenance or removal, financial issues were thus seen as a non-medical risk participants faced.

Conclusion: There was broad agreement that removing neural components of the device carried risks associated with neurosurgery. However, researchers expressed various views on the acceptability of taking these risks. Researchers also appealed to several distinct categories of risk relevant to device removal: 1) physical, 2) financial, 3) opportunity costs, 4) psychological, and 5) risks of a long-term dormant device. Overall, researchers expressed a commitment to honoring participants' informed preferences for device removal. A more exhaustive analysis of both medical and non-medical risks associated with device removal will help to ensure that participants' preferences are fully informed.


  • Chen T, Mirzadeh Z, Lambert M, González O, Moran A, Shetter AG, and Ponce FA. (2017). Cost of deep brain stimulation infection resulting in explantation. Stereotactic and Functional Neurosurgery. 95:117-124.
  • Patel DM, Walker HC, Brooks R, Ditty B, and Guthrie BL. (2015). Adverse events associated with deep brain stimulation for movement disorders: analysis of 510 consecutive cases. Neurosurgery. 11:190-199.
  • Sierra-Mercado D, Zuk P, Beauchamp MS, Sheth SA, Yoshor D, Goodman WK, McGuire AL, Lázaro-Muñoz G. (2019) Device Removal Following Brain Implant Research. Neuron. 103:759-761.

Disclosures: None


Criteria Clinicians Consider Before Offering Pediatric Deep Brain Stimulation

Laura Torgerson (1), Jennifer Blumenthal-Barby (1), Kristin Kostick (1), Eric A. Storch (2), Katrina Munoz (1), Gabriel Lázaro-Muñoz (1)
1. Center for Medical Ethics and Health Policy, Baylor College of Medicine; 2. Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine

Background: Pediatric deep brain stimulation (pDBS) is offered under an FDA humanitarian device exemption for children ages seven and up with refractory dystonia.1 Despite increasing use of pDBS for dystonia in the U.S. and other countries, it is unclear how clinicians determine which pediatric dystonia patients are good candidates. Recent studies suggest that some pediatric dystonia patients benefit more than others based on disease etiology, duration of disease, and severity of symptoms;1-4 however, little consideration has been given to social and psychological factors.

Study Aims: To identify and understand the criteria pediatric movement disorders clinicians use when determining whether to offer pDBS for dystonia to a patient.

Methods: We conducted 28 in-depth, semi-structured interviews with dystonia clinicians who treat children and work in active DBS programs and asked about: 1) key factors taken into consideration when deciding whether to offer pediatric DBS; 2) clinical characteristics that make a good candidate for pDBS; and 3) social and psychological factors that make a good candidate for pDBS. Thematic content analysis was used to identify themes.

Results: The most frequently mentioned key factors considered by clinicians before offering DBS were etiology of the dystonia (39%), exhaustion of other treatments (39%), weighing risks and benefits of an invasive brain procedure (29%), and the current quality of life of the child (21%). The clinical characteristics most commonly raised by clinicians to describe what makes a good candidate for pDBS included how severely dystonia impacted their quality of life (46%), genetic status (39%), and shorter disease duration and/or younger age (25%). When discussing which social and psychological factors make a good candidate, every clinician mentioned the importance of family support (100%), while a good understanding of what DBS surgery and treatment entails (39%), and realistic expectations of outcomes (25%) were also noted.

Conclusion: These findings suggest that there are key medical factors that clinicians consider when determining who is a good candidate for pDBS (e.g., etiology, failed treatments, genetic status). However, clinicians also consider the ways dystonia impacts the patient's quality of life, along with non-clinical factors such as their perception of family support and their perception of the patient's and family's understanding and expectations of DBS. Non-clinical considerations are especially relevant from an ethical standpoint. In particular, perception of family support is a highly subjective determination that could potentially limit access to pDBS if a patient's family structure is perceived as "unsupportive."


  • 1. Elkaim, L., Alotaibi, N., Sigal, A., Alotaibi, H., Lipsman, N., Kalia, S., Fehlings, D., Lozano, A., Ibrahim, G., Weil, A., Fallah, A., Wang, A., Tu, A., & Obaid, S. (2018). Deep brain stimulation for pediatric dystonia: a meta‐analysis with individual participant data. Developmental Medicine and Child Neurology, 61(1), 49–56.
  • 2. Cif, L. & Coubes, P. (2017). Historical developments in children’s deep brain stimulation. European Journal of Paediatric Neurology, 21(1), 109–117.
  • 3. DiFrancesco, M., Halpern, C., Hurtig, H., Baltuch, G., & Heuer, G. (2012). Pediatric indications for deep brain stimulation. Child’s Nervous System, 28(10), 1701–1714.
  • 4. Muñoz KA, Blumenthal-Barby J, Storch EA, Torgerson L, Lázaro-Muñoz G (2020). Pediatric Deep Brain Stimulation for Dystonia: Current State and Ethical Considerations. Camb Q Healthc Ethics.

Disclosures: None