Workshop Schedule & Abstracts: Brain-based and Artificial Intelligence: Socio-ethical Conversations in Computing and Neurotechnology

May 10-11, 2018, Chicago, IL
Illinois Tech-Downtown Campus
565 West Adams Street
Morris Hall 

Organized by the Center for the Study of Ethics in the Professions, Illinois Institute of Technology

Overview of Workshop


Thursday, May 10, 2018

9:00 Introduction

9:15 Keynote: Mikhail Lebedev, Duke University
Brain-Machine Interfaces that Multitask 

10:00 Coffee Break

10:20 Martin Glick, University of Göttingen
Weighing Risk For Automated and Brain-Computer Interfaced Laborers 

10:50 Richard L. Wilson & Michael W. Nestor, Towson University & Hussman Institute for Autism
BCI’s, Robots, and War: An Anticipatory Ethical Analysis

11:20 Michelle Elena Pefianco Thomas, San Francisco State University
AI Machines Need an Agency-Based, Virtue-Reliabilist Epistemic Theory

11:50 Shlomo Engelson Argamon, Illinois Institute of Technology
It's the Autonomy, Stupid! On reasonable and unreasonable risk-assessment of AI

12:20 Lunch

1:25 Ioan Muntean, University of North Carolina, Asheville
A Multi-Objective Approach to Artificial Intelligence and Artificial Morality: A Case For Machine Learning Decision 

1:55 Alexander Morgan, Rice University
Gaining Perspective: On the Neurocomputational Mechanisms of Subjectivity 

2:25 Monika Sziron, Illinois Institute of Technology
Humans, Non-human Animals, & Computing: In History and the Future

2:55 Coffee Break

3:15 Andrew Lopez, Wake Forest University
Artificial Intelligence and Medical Virtues 

3:45 Bill Frey and Jose Cruz, University of Puerto Rico at Mayagüez
Bias in Emotion and Big Data: Issues and Questions 

4:15 Coffee Break

4:35 Keynote: Maria Gini, University of Minnesota Twin Cities
Artificial Intelligence: What Will the Future Be?

5:20 Break

5:30  Introduction: Christine Himes, Dean of the Lewis College of Human Sciences

Panel Discussion: AI Across Disciplines 
Gady Agam, Aron Culotta, Adam Hock, Jonathon Larson, Val Martin, Ullica Segerstrale, Ankit Srivastava, and Ray Trygstad, Illinois Institute of Technology

Moderated by Elisabeth Hildt

 

Friday, May 11, 2018

9:00 Roman Taraban, Texas Tech University
Finding a Common Language Across Three Domains: Psychology, Neuroscience, and AI

9:30 Thomas Grote & Ezio Di Nucci, University of Tübingen & University of Copenhagen
Algorithmic Decision-Making and the Control Problem 

10:00 Wolfhart Totschnig, Universidad Diego Portales
Fully Autonomous AI

10:30 Coffee Break

11:00 Keynote: Mark Coeckelbergh, University of Vienna
Robotics and Artificial Intelligence: Ethical and Societal Challenges

11:45 Lunch

1:00 Fabrice Jotterand, Medical College of Wisconsin
AIs and the Boundaries of Legal Personhood: Why Uniqueness Matters

1:30 Zhengqing Zhang, Colorado School of Mines
Robots as Others Expectations Moral Agents: From Human to Interactive Relationships 

2:00  William Bauer, North Carolina State University
Virtuous v. Utilitarian Artificial Moral Agents

2:30 David Gunkel, Northern Illinois University
The Right(s) Question: Can and Should AI Have Standing?

3:00 Coffee Break

3:20 Qin Zhu, Thomas Williams and Blake Jackson, Colorado School of Mines
Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective 

3:50 Matthew A. Butkus, McNeese State University
Neuroscience and Ethical Decision-Making in Artificial Agents 

4:20 Tyler Jaynes, Utah Valley University
Energy as a Universal Right: Applications for Artificial and Biological Beings

4:50 Richard L. Wilson and Michael W. Nestor, Towson University & Hussman Institute for Autism
CRISPR, Gene Editing, and Cognitive Enhancement

5:20 Closing Remarks



Abstracts

 

Keynote

Brain-Machine Interfaces that Multitask
Mikhail Lebedev (Duke University)

Brain-machine interfaces (BMIs) strive to restore function to people with sensory, motor and cognitive disabilities. While BMIs are typically implemented in controlled laboratory settings, making them versatile and applicable to real-life situations is a significant challenge. In real life, we can flexibly and independently control multiple behavioral variables, such as programming motor goals, orienting attention in space, fixating objects with the eyes, and remembering relevant information. Several neurophysiological experiments, conducted in monkeys, have manipulated multiple behavioral variables in a controlled way, and these variables were then decoded from the activity of cortical neuronal ensembles. Moreover, in BMI experiments, multiple behavioral variables have been extracted from ensemble activity in real time, such as controlling two virtual arms simultaneously. Finally, brain-machine-brain interfaces (BMBIs) have simultaneously extracted motor intentions from brain activity and generated artificial sensations using intracortical microstimulation. Such versatile BMIs could be translated in the future into clinical applications for restoration and rehabilitation of neural function.
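To make the decoding step concrete, here is a minimal, hypothetical sketch of extracting two behavioral variables from ensemble activity with a linear readout, a common baseline approach in BMI work. The synthetic data, neuron counts, and ridge decoder below are illustrative assumptions, not the methods used in the experiments described above.

```python
# Hypothetical sketch: decode two behavioral variables (e.g., positions of two
# virtual arms) from simulated cortical ensemble activity with a linear decoder.
# Synthetic data only; real BMIs use spike counts binned from recorded neurons.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 100

# Two independent behavioral variables to be decoded.
behavior = rng.standard_normal((n_samples, 2))

# Each simulated neuron is tuned to a random mixture of the two variables, plus noise.
tuning = rng.standard_normal((2, n_neurons))
firing = behavior @ tuning + 0.5 * rng.standard_normal((n_samples, n_neurons))

X_train, X_test, y_train, y_test = train_test_split(firing, behavior, test_size=0.25)

decoder = Ridge(alpha=1.0).fit(X_train, y_train)  # one linear readout per variable
print("decoding R^2 (averaged over the two variables):", decoder.score(X_test, y_test))
```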

Weighing Risk For Automated and Brain-Computer Interfaced Laborers
Martin Glick (University of Göttingen)

Who will be held ethically responsible for the way artificially intelligent systems interact with their environment is a question that programmers and designers will need to contend with, especially as it regards automated work. Supposing that we are dealing with non-malicious A.I. (1), and instead with a benevolent kind, we will need to establish a hierarchy that goes beyond the simple dictate to do no harm. Being in the world is fraught with a web of ethical and practical decisions, and the principles philosophers use to deal with them range from altruism to egoism. I argue that engineers and cognitive scientists, in grounding directives that go beyond mere automation (2), will eventually have to choose between ethical principles that weigh risk between the workers themselves and others. The idea that automated workers won’t always be designed to protect human life first is, sure, the stuff of dystopian sci-fi, but it is also an impending reality as management (3) becomes involved and equipment requires untold labor and expense. Just as process safety engineers weigh human error in workplace risks, so too will they have to weigh robot error, and this will inevitably draw principles of interaction from the nitty-gritty work done by ethicists. I propose an introduction to the range of realistic ethical principles that practitioners will need to think about as automation becomes a fixture of the workplace environment. This will involve a spectrum of responsibility ranging from purely mechanical workers to brain-computer interfaced labor (4), which implies a more direct sense of ethical responsibility. The moral questions at hand with these interfaced systems will become more complex as humans play a less remote role and interact concurrently with the decision-making process of artificial intelligence (5).

1. https://www.eff.org/deeplinks/2018/02/malicious-use-artificial-intelligence-forecasting-prevention-and-mitigation

2. Müller, Vincent C. & Bostrom, Nick (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.

3. Khalil, Omar E. M. (1993). Artificial decision-making and artificial ethics: A management concern. Journal of Business Ethics 12 (4):313 - 321.

4. Drozdek, Adam (1992). Moral dimension of man and artificial intelligence. AI and Society 6 (3):271-280.

5. Müller, Vincent C. (ed.) (2016). _Risks of artificial intelligence_. CRC Press - Chapman & Hall.

BCI’s, Robots, and War: An Anticipatory Ethical Analysis
Richard L. Wilson & Michael W. Nestor (Towson University & The Hussman Institute for Autism)

This paper will present an analysis of Brain Computer Interfaces (BCI’s) and relate this development to developments in robots and AI, with an emphasis on how both will potentially be used in warfare. We will emphasize the need for an interdisciplinary approach that employs a variety of perspectives on the subject. BCI research has focused on a variety of issues related to health care for disabled individuals. Much of this work has been related to restoration of function. Restoration of function ranges from restoring the ability to communicate, to control capacities for people with severe neuromuscular disabilities, to creating an interface between an individual and a prosthetic, to the development of implants for ears and eyes. As stated by Blank, “A rapidly expanding and provocative area of research involves brain (or neural) implants that attach directly to the surface of the brain or in the neocortex. The major impetus for brain implants has come from research designed to circumvent those areas of the brain that became dysfunctional after a stroke or other head injuries and implant pace-maker like devices to treat mood disorders (Reuters 2005b). Brain implants electrically stimulate, block, or record impulses from single neurons or groups of neurons where the functional associations of these neurons or groups of neurons are suspected. Advanced research in brain implants involves the creation of interfaces between neural systems and computer chips.” (Blank, 38).

All of these developments in technology have created a variety of ethical issues for the stakeholders involved. After describing different types of BCI’s, we will identify the stakeholders affected by BCI’s, Robots and AI in warfare. A stakeholder-based anticipatory ethical analysis will address the ethical issues raised by these technologies.

One example that will be explored is the Silent Talk project. “Silent Talk” is a government-funded DARPA (Defense Advanced Research Projects Agency) initiative to produce a non-invasive BCI helmet that would allow soldiers to communicate silently with one another using only their thoughts. The technology would work by detecting “pre-speech” word-specific neural impulses, analyzing them, and then sending the information to the receiving partner(s). The technology is still in its infancy, but DARPA scientists are trying to identify EEG (electroencephalography) patterns that correspond to specific words, and to determine whether those patterns are universal across people.
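The kind of pattern-identification step such a project implies can be sketched, very roughly, as a classification pipeline over EEG features. The sketch below is a hypothetical illustration only: the synthetic data, channel count, word set, and band-power features are all assumptions, not details of the DARPA program.

```python
# Hypothetical sketch of an EEG "word" classifier of the kind the Silent Talk idea
# implies: band-power features per channel, then a linear classifier. Synthetic
# data stands in for recorded EEG; the channel count and word set are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_epochs, n_channels, n_bands = 300, 32, 5   # e.g., delta/theta/alpha/beta/gamma power
words = ["advance", "hold", "retreat"]

# Features: band power per channel per epoch; labels: which word was (pre-)spoken.
X = rng.standard_normal((n_epochs, n_channels * n_bands))
y = rng.integers(0, len(words), size=n_epochs)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())   # ~chance here, since the data are random
```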

This paper will present an analysis of Brain Computer Interfaces (BCI’s) that emphasizes the need for an interdisciplinary approach that employs a variety of perspectives on the subject. The discussion will conclude with an anticipatory ethical analysis of the promises of, and barriers to, the future development of BCI’s, Robotics, and their potential uses in war.

Artemiadis, Panagiotis ed. Neurobotics: From Brain Machine Interfaces to Rehabilitation Robotics, Springer, New York, 2014.

Berger, Theodore W., John K. Chapin, Greg A. Gerhardt, Dennis J. McFarland, Jose C. Principe, Walid V. Soussou, Dawn M. Taylor, and Patrick A. Tresco, Brain-Computer Interfaces: An International Assessment of Research and Development Trends, Springer, 2008.

Blank, Robert H., Intervention in the Brain: Politics, Policy, and Ethics, The MIT Press, 2013.

Brey, Philip, Anticipating ethical issues in emerging IT, Ethics and Information Technology, 14: 305-317, 2012.

Grübler, Gerd and Elisabeth Hildt, eds., Brain-Computer Interfaces in Their Ethical, Social and Cultural Contexts, Springer, New York, 2014.

Guger, Christoph, Theresa Vaughan, and Brendan Allison, eds., Brain-Computer Interface Research: A State-of-the-Art Summary 3, Springer, New York, 2014.

Johnson, Deborah, Trends in Biotechnology, vol. 28, no. 12.

Kuiken T. Targeted reinnervation for improved prosthetic function. Phys Med Rehabil Clin N Am. 2006 Feb;17(1):1-13.

Kuiken TA, Miller LA, Lipschutz RD, Lock BA, Stubblefield K, Marasco PD, Zhou P, Dumanian GA. Targeted reinnervation for enhanced prosthetic arm function in a woman with a proximal amputation: a case study. Lancet. 2007 Feb 3;369(9559):371-80.

Rao, Rajesh P. N., Brain-Computer Interfacing: An Introduction, Cambridge University Press, 2013.

Schwartz AB, Cui XT, Weber DJ, Moran DW. Brain-controlled interfaces: movement restoration with neural prosthetics. Neuron. 2006 Oct 5;52(1):205-20.

Tan, Desney S. and Anton Nijholt, eds., Brain-Computer Interfaces: Applying Our Minds to Human-Computer Interaction, Springer, 2010.

AI Machines Need an Agency-Based, Virtue-Reliabilist Epistemic Theory
Michelle Elena Pefianco Thomas (San Francisco State University)

This paper proposes that if an A.I. machine, using a deep machine-learning based artificial neural network, were tasked to create an algorithm based on Fairweather and Montemayor’s theory of virtue theoretic epistemic psychology (VTEP), which is virtue-reliabilist and agency-based and uses an attention-assertion model (AAM), then theoretically we would be closer to an artificial intelligence machine that approximates human epistemic acts of attention and assertion. How can we view these machines as agents and regard them in the same way that we view individual humans as agents? One type of collective agency is the deep machine-learning artificial neural network machine. The individual agents are the individual artificial neural nodes and the connections between them. The machine aggregates the data from all of the decisions and performs an action as assertable content. The machine is constrained by the attention it fixes only on the pattern it finds through many iterations, and over time it masks the irrelevant data, thereby solving the many-many problem. The machine asserts the content once it has learned a reliable pattern for finding the epistemically relevant information.

This paper shows how this machine is a collective agent that can pick out relevant information, but there is still the problem of motivation and starting inquiries. These machines are now going to be halting inquiries, starting inquiries, tasking themselves, and creating their own goals. This independent ability to halt and start inquiry satisfies the requirement of curiosity and motivation. This paper proposes that, over time, these machines will continue (much like humans on an evolutionary scale, or on an individual scale in childhood) to become faster and faster at discerning what is relevant information. The independence of these machines, the increasing speed of this process, and the process itself are forms of motivation and curiosity.

A critic might wonder whether these new machines have, or will obtain, enough cognitive integration that humans are convinced they know information in a way analogous to humans, with the same degree of self-awareness and creativity. Researchers compare these machines to one-year-old toddlers who are learning to process vast amounts of information and to decide which information to disregard. This paper argues that if machines follow the same path as humans with a nearly analogous processing system, they may yet achieve what we would all reliably agree to define as the human ability to know and to create.

In conclusion, deep machine-learning artificial neural networks have reached the point at which they can be considered collective agents, pick out relevant information by attention, can assert content, be curious, open new inquiries, halt inquiries, be virtuously sensitive to relevant information, be virtuously insensitive to irrelevant information, and solve the many-many frame problem. Furthermore, we may inadvertently be constructing A.I. that have human prejudicial constraints and built-in frames of epistemic injustice in their systems. It is important ethically to make sure that these machines are not making errors in perception and cognition that are prejudicial, continue certain biases, or exclude certain diverse communities.

It's the Autonomy, Stupid! On reasonable and unreasonable risk-assessment of AI
Shlomo Engelson Argamon (Illinois Institute of Technology) 

Scarcely a week goes by without an op-ed, a trade magazine article, or a journal editorial that sketches an existential threat posed by artificial intelligence (AI) and some approach, intellectual or regulatory, to dealing with it. Academics such as Nick Bostrom or the late Stephen Hawking and technologists like Elon Musk and Bill Gates have expressed grave concern about what they consider the existential danger of AI and the urgent necessity that we find ways to sufficiently constrain intelligences whose goals may not be aligned with our own.

However, for all of these overhyped fears about AI taking over the world and turning it all into paperclips or the equivalent, there is a far greater threat that is already deployed, and from which we are being distracted by the focus on the “intelligence-threat”. This is the proliferation of autonomous control systems, without direct human oversight, regardless of intelligence. Examples of such systems range from long-used autopilots on commercial flights and automated trading systems that measure their edges in milliseconds, to self-driving cars, adaptive climate control systems, autonomous military drones, and so on and on. This autonomy-threat is far greater in the near-term, and more certain in the long-term, than the intelligence-threat, and thus merits a great deal more attention, both in intellectual and policy terms.

The vast majority of such systems are not particularly intelligent, in any meaningful sense of the word. The smartest among them are, perhaps, idiot savants that work well only in a specific domain. Their power, and their risk, comes rather from the fact that they are autonomous and operate without meaningful human control. Even without nefarious goals of their own or the bogeyman of superintelligence, such autonomy, often coupled with extremely fast decision times, can cause great damage before any human can detect or prevent it.

In this talk, I will discuss the three main factors to consider in evaluating this risk: Power, Transparency, and Trustworthiness, and how understanding them may help us to develop strategies to mitigate the risk posed by the autonomy-threat.

A Multi-Objective Approach to Artificial Intelligence and Artificial Morality: A Case for Machine Learning Decision
Ioan Muntean (University of North Carolina, Asheville)

In this paper we explore some of the challenges raised by new developments in artificial intelligence (AI) for decision theory (DT), which has been developed mostly for human, rational agents. Whether DT, in its standard form, can be adapted to accommodate artificial intelligence is one of the paths explored in the philosophical literature on decision theory. We are interested in a more general foundational issue: integrating ethics into decision theory (Colyvan, Cox, and Steele 2010), with a special emphasis on artificial agents. This paper proposes a specific type of agent, called the multi-objective agent, as a model worth pursuing. One multi-objective algorithm is presented and contrasted with the single-objective algorithms (those that optimize one function only) used in standard DT as implemented in AI (Doumpos 2013). The alternatives to the multi-objective agent are, first, a single-objective agent for which ethical considerations play the role of constraints on its output and, second, an agent in which ethics restricts the combination of input variables. The multi-objective agent takes all the variables and optimizes two functions, called here the factual objective and the normative objective. For all three agents we explore similarities and differences with models of morality inspired by neuroscience.

The AI and the AM are here composed into a multi-objective agent based on a form of machine learning algorithm. This agent is designed such that factual objectives are taken as independent of value-based objectives. We discuss the philosophical implications of designing multi-objective agents. One promising path followed here is a form of moral coherentism (Lynch 2009).

We operate here under certain presumptions. We first assume a standard decision theory with its epistemological and pragmatic results (mostly as formulated by Jeffrey). We also assume that there is, roughly, a distinction between input variables that are factual, or descriptive, and variables that are normative or represent values and norms. This corresponds to the infamous “is-ought” distinction: we do not assume that there are irreducible moral facts, but we represent them in different subspaces of variables. In the case of the AM, agents are presented with a set of factual variables and a set of normative variables. The fact-value distinction is sometimes vague and ambiguous (D. Brink 1989; Sinnott-Armstrong 2007). We nevertheless operate here under various idealizations and abstractions of our model.

Second, we assume that we can obtain, for most agents, an optimization function for the factual variables and an optimization function for the normative variables. We then discuss the option of a multi-objective agent that does not optimize each subspace of variables separately but is able to detect a Pareto front and to optimize its decision in a region of the output space where the compromise between the factual objective and the normative objective is optimal. We conjecture that a Pareto front between these two objectives can be detected by machine learning for a set of empirical test datasets.

The proposed multi-objective agent uses machine learning as an algorithm to detect the Pareto front and the area in which the agent can operate in the space of both factual and normative variables. When such a front is too weak or does not exist, the agent is able to suspend its decision. This also signals that the representation of the normative and factual variables is not as independent as it was expected to be. Whether the brain operates on a similar multi-objective model is a question tangentially discussed in this paper. First, it is an empirical question that needs more evidence to be supported; second, it may be the case that the moral brain does not optimize a moral objective at all.
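To make the core computational idea concrete, here is a minimal sketch of Pareto-front detection over two objectives, labeled "factual" and "normative" as in the abstract. The candidate scores, the brute-force dominance check, and the final comment are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): find the Pareto front of
# candidate decisions scored on two objectives, a "factual" objective and a
# "normative" objective, both to be maximized. Scores here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.random((200, 2))   # column 0: factual objective, column 1: normative objective

def pareto_front(points):
    """Return indices of non-dominated points (dominated = some other point is
    >= on both objectives and strictly > on at least one)."""
    idx = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

front = pareto_front(scores)
print(f"{len(front)} non-dominated decisions out of {len(scores)}")

# On the abstract's proposal, the agent would act only within this front (the
# region of optimal compromise) and suspend its decision if the front is too
# weak or absent.
```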

Finally, we discuss whether such an agent integrates well with the existing discussion of hybrid artificial moral agents (Allen, Smit, and Wallach 2005; Allen and Wallach 2009; Abney, Lin, and Bekey 2011). We also argue in what sense this agent instantiates a form of moral coherentism advocated by several philosophers: D. Brink (1989, Ch. 5), G. Sayre-McCord (1996), W. Sinnott-Armstrong (2007, chap. 10) and M. Lynch (2009, chap. 8).

Abney, Keith, Patrick Lin, and George Bekey, eds. 2011. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, Mass: The MIT Press.

Allen, Colin, Iva Smit, and Wendell Wallach. 2005. “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches.” Ethics and Information Technology 7 (3): 149–55. https://doi.org/10.1007/s10676-006-0004-4.

Allen, Colin, and Wendell Wallach. 2009. Moral Machines: Teaching Robots Right from Wrong. Oxford; New York: Oxford University Press.

Brink, David Owen. 1989. Moral Realism and the Foundations of Ethics. Cambridge; New York: Cambridge University Press.

Colyvan, Mark, Damian Cox, and Katie Steele. 2010. “Modelling the Moral Dimension of Decisions.” Nous 44 (3): 503–29.

Doumpos, Michael. 2013. Multicriteria Decision Aid and Artificial Intelligence Links, Theory and Applications. Hoboken, NJ: Wiley-Blackwell.

Lynch, Michael P. 2009. Truth as One and Many. Oxford: Oxford University Press.

Sayre-McCord, Geoffrey. 1996. “Coherentist Epistemology and Moral Theory.” In Moral Knowledge?, edited by Walter Sinnott-Armstrong and Mark Timmons, 137–89. Oxford University Press.

Sinnott-Armstrong, Walter. 2007. Moral Skepticism. Oxford University Press, USA.

Gaining Perspective: On the Neurocomputational Mechanisms of Subjectivity
Alexander Morgan (Rice University)

Most current computational or mechanistic theories of consciousness hold that conscious states are produced by complex information processing mechanisms that serve to orchestrate and integrate disparate vehicles of information together into a unified but differentiated informational state. For example, early neurophysiological speculations about perceptual experience emphasized the importance of binding together distinct, distributed encodings of different features of a perceived object into a unified representation of the object. Neurocognitive theories have emphasized the importance of mechanisms that integrate disparate representations and broadcast them throughout the cognitive system. Giulio Tononi and others have recently developed a mathematical measure of the amount of integrated information within a system, which they claim quantifies the extent to which the system is conscious.

While these theories appeal to distinctive processes of information integration and broadcast, these processes are specified very generally and are not characterized in terms of the specific computational problems they solve. This has bred worries that these theories massively over-generalize. For example, Scott Aaronson has recently argued that Tononi’s information integration theory entails that a massive two-dimensional grid of XOR gates is conscious. More broadly, Eric Schwitzgebel has argued that almost all contemporary computational theories of consciousness encompass entire nations, like the USA. The central problem here is not just that these consequences are highly counterintuitive; it is that they highlight a lacuna in existing computational theories. These theories fail to illuminate what it takes, computationally, for an integrated informational state to qualify as a state of a conscious subject.

I propose to fill this lacuna by drawing from research on the neurocomputational mechanisms that mediate a subject’s perspective on the external world. I argue that the perspectival aspects of a subject’s experience, at least in mammals, are realized by neural populations in the posterior parietal cortex (PPC). It is widely recognized that signals from various sensorimotor systems converge upon the PPC and are there encoded in a variety of sensor- and effector-specific reference frames. The PPC is thought to integrate and orchestrate these modality-specific signals into a functionally unified, modality-independent egocentric frame of reference, which encodes the egocentric spatial locations of objects and binds together their various perceptible properties. This egocentric spatial framework is thought to facilitate the sensorimotor computations that allow the subject to attend towards and act upon perceived objects. There is also considerable evidence that this framework mediates a stable representation of the egocentric spatial locations of objects that is continuously updated on the basis of motor commands as the subject moves through the world, thus grounding a distinction between the subject and mind-independent reality. I propose that this specific computational function of the PPC, whether it is implemented by the PPC or computationally equivalent mechanisms, fills a lacuna in existing computational theories of consciousness by helping to make intelligible how an integrated informational state could be part of a subject’s perspective on the external world. Interestingly, this suggests that subjectivity is inherently tied to agency.
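A toy numerical illustration may help fix ideas about the reference-frame transformation described above. The sketch below is not a model of PPC circuitry; the 2-D coordinates, postures, and efference-copy update are purely illustrative assumptions.

```python
# Toy 2-D illustration (not a model of PPC circuitry) of the kind of reference-frame
# transformation the abstract describes: an eye-centered target location is combined
# with eye and head postures into a body-centered (egocentric) location, which is
# then updated from a motor command as the subject moves.
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

target_in_eye = np.array([0.2, 0.1])        # target location in eye-centered coordinates
eye_in_head = np.array([0.0, 0.03])         # eye position relative to the head
head_angle, head_in_body = 0.3, np.array([0.0, 0.4])

# Eye-centered -> head-centered -> body-centered (egocentric) coordinates.
target_in_head = target_in_eye + eye_in_head
target_in_body = rot(head_angle) @ target_in_head + head_in_body

# Efference-copy update: a forward step of the body shifts the remembered
# egocentric location in the opposite direction, keeping it stable across movement.
body_step = np.array([0.0, 0.1])
target_in_body_updated = target_in_body - body_step

print("egocentric location:", target_in_body, "-> after self-motion:", target_in_body_updated)
```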

Humans, Non-Human Animals, and Computing – In History and the Future
Monika Sziron (Illinois Institute of Technology)

Finding inspiration from biological entities, both human and non-human animals, has played a significant role in computing. Today we study and utilize the intelligent behavior of humans and non-human animals to develop better AI. This presentation highlights the history of using human and non-human animal intelligence in computing. Various algorithms and innovations will be discussed. Stressing the importance of non-human animals in the future of computing, I propose that the problems of non-human animals are problems that computer scientists will need to take into consideration. Just like humans, animals now have complex and developing relationships with computing. Whether in the form of being studied by computer scientists or physically interacting with computing technology, non-human animals are influenced by computing. I propose that it is within the best interest of computer scientists to maintain the integrity of these biological forms, for the sake of not only computer science, but other fields that utilize the intelligence of non-human animals, like neuroscience.

Artificial Intelligence and Medical Virtues
Andrew Lopez (Wake Forest University)

Advancing artificial intelligence and its bioethical implications has as much to teach us about our future as our present. If the pace of research and development of AI exceeds our ability to form ethical common ground, humanity may fall behind the curve. To be the most effective tool, AI through machine learning develops capacity to become self-improving, learn independently, and advance beyond its programming to accomplish tasks in ways humans did not program directly, and may not anticipate. An instrumentalist perspective of technology serving human purposes would no longer apply to AI and machine ethics if AI advances far enough to operate without direct human involvement. The morphing moral status of AI must be addressed as the technology transitions from being a tool to potentially an entity of its own. Could AI become moral agents that act and abide by ethical frameworks independent of their programming? Can machine learning practice virtues in medicine, and how does this compare to humans? Humans may no longer be able to claim praise, blame, or responsibility for the actions of machines. This shift in ethical responsibility would radically alter bioethics of virtue. The creation of medical AI should follow a value-sensitive design process reflective upon the practice and perfection of virtues. This research asks questions of volition and intention, how one learns virtues and where virtue comes from, and how AI might practice virtues essential to medicine.

Bias in Emotion and Big Data: Issues and Questions
José A. Cruz-Cruz & William J. Frey (University of Puerto Rico at Mayagüez)

The issue of bias is important in ethics and information systems. One of us is working on an NSF grant (Cultivating Responsible Well-Being in STEM) to develop educational modules to integrate cultivating moral emotions into the STEM curriculum. The other is exploring applications of Big Data to respond to business, social, and economic problems in Puerto Rico. In this presentation, we want to explore bias in emotions (brain-based intelligence) and in Big Data (Artificial Intelligence). Bias takes different forms in these kinds of intelligence. But we will suggest ways to identify and remediate bias that are mutually reinforcing.

Emotions have traditionally been classified as non-cognitive and, therefore, outside the scope of STEM education. But Martha Nussbaum presents an appraisal theory of emotions that spells out cultivation as realigning emotions around truthfulness, rationality, and appropriateness. We will briefly summarize teaching modules that use emotions to reduce STEM student disengagement from the social, ethical, and global. Cultivating emotions removes bias by (1) critically reflecting on constitutive appraisals and (2) pivoting to underlying, supporting theories.

We are also collaborating on a project sponsored by our university’s Office of the Chancellor to see how Big Data can be used to solve difficult social, political, and economic problems. Here, issues of bias can lead to problems of distributive justice. Big Data provides algorithms that are built into machine learning systems or “smart” systems. These, in turn, provide support to decision makers as they form public policy or business plans. The problem is that the data collected mirrors any biases or discrimination vitiating the data itself; often this is introduced through sloppy methods of data collection or selection criteria distorted by bias. For example, developed organizations and countries with practices of keeping accurate and thorough records are better positioned to take advantage of Big Data and smart systems than less developed organizations and countries with less rigorous practices. The presenters seek to identify and comprehend how the quality of information and collection methods link to embedded bias.

Our presentation will concentrate on explaining and providing examples of these sources of bias and outlining how we address these issues in the classroom, especially in the context of STEM studies. We will conclude with suggestions of how these two sources of bias converge. For example, bias in emotion may lead to irregularities in data collection; cultivating emotions or understanding how they can distort the interpretation of key regulatory concepts provides a way of removing this bias. On the other side, Big Data can also inform smart systems that can check emotional biases and help to better inform and interpret the regulatory concepts that underlie public policy. Instead of an either-or approach, we hope to suggest both-and possibilities.

Keynote

Artificial Intelligence: What Will the Future Be?
Maria Gini (University of Minnesota Twin Cities)

Every day we read in the scientific and popular press about advances in AI and how AI is changing our lives. Things are moving at a fast pace, with no obvious end in sight. What will AI be ten or twenty years from now? A technology so pervasive in our daily lives that we will no longer think about it? A dream that has failed to materialize? A mix of successes and failures still far from achieving its promises? Will intelligent systems be part of our daily lives, help us with routine tasks, handle dangerous jobs, and keep us company?

In this talk we will explore the state of the art in intelligent systems through the lenses of specific projects and discuss how AI technologies could be steered to address open problems. Examples will include helping diagnose autism in toddlers, and providing voice-based personal assistants to people with cognitive, motor, or sensory impairments.

Finding a Common Language Across Three Domains: Psychology, Neuroscience, and AI
Roman Taraban & Lakshmojee Kodura (Texas Tech University)

When one looks across the landscape of theories and models in cognitive psychology, cognitive neuroscience, and AI, the term computational has become part of the common language across these three domains. Computational systems are further characterized as involving symbolic representation and intelligence, consistent with the physical symbol system hypothesis of Simon and Newell. Beyond these shared constructs, the three domains break out into separate directions regarding what is workable within the current goals and constraints imposed on the problems addressed in the specific domains.

In this presentation, we describe a hybrid cognitive-AI approach to an applied problem that currently dominates research in our lab, which is identifying the conceptual structure of topics in undergraduate engineering majors’ compositions related to ethical issues in engineering practice. This project incorporates emerging methods in text analysis, including naïve Bayesian analysis, latent Dirichlet allocation analyses, and methods of identifying the central concepts in the distribution of topics within students’ essays. Across the three domains of interest, we regard this as a prototypical example involving the representations responsible for learning, individual differences, and intelligent behavior of computational systems.
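As a concrete illustration of the topic-modeling step mentioned above, the sketch below runs latent Dirichlet allocation over a handful of placeholder essay snippets. The snippets, topic count, and parameter choices are assumptions for illustration, not the authors' corpus or settings.

```python
# Hedged sketch of the kind of topic analysis the abstract mentions: latent
# Dirichlet allocation over (placeholder) student essay texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

essays = [
    "engineers have a duty to report safety risks to the public",
    "the design team weighed cost against the risk of component failure",
    "informed consent matters when testing devices on human subjects",
    "whistleblowing can protect the public but may end a career",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(essays)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)          # per-essay topic distribution

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):     # top words per inferred topic
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```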

Aligning work on this project, as one example, across the language of the three domains faces several seemingly insurmountable challenges. In more detail, within psychology there is a disconnect between the functional, compositional nature of language representation, as exemplified in Fodor’s Language of Thought hypothesis, and converse brain-style and reductionist models, as exemplified in neural network models. The bridge from neural network models to AI’s deep-learning networks is easier to make. However, both models are fundamentally limited in their ability to do more than copy and imitate. An example from AI is Tay.ai (Vincent, 2016), which was Microsoft’s effort to improve the conversation abilities of a machine agent. Tay had a short life of 24 hours online due to the unfortunate outcome of mimicking racist conversations that were fed to it. Further, the challenge to cognitive neuroscience is its granularity. Whatever the language of the brain is, it will not be recovered within the current practice of localizing and analyzing neural activation in multiple voxels at a time (a voxel contains a few million neurons and several billion synapses). At present, the alignment challenge may be more of an epistemological one, involving an understanding of the necessary representational elements and properties in language. John Searle (Mishlove, 2010) provides an apt analogy when he suggests that the force, mass, etc., of a hammer are real physical properties, but they cannot be inferred from the properties of single molecules. In a similar manner, if we can recover the operative representational elements of language we can begin to build a bridge connecting the three domains.

Mishlove, J. (2010). https://www.youtube.com/watch?v=0XTDLq34M18

Vincent, J. (2016). https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Algorithmic Decision-Making and the Control Problem
Thomas Grote & Ezio Di Nucci (University of Tübingen & University of Copenhagen)

In the legal sector, as well as in public policy and medicine, decisions are increasingly being delegated to learning algorithms. In this paper, we will argue that the relevant delegation involves trade-offs in terms of control. These trade-offs will be scrutinized from a normative angle. In particular, we will focus on two (potential) sources of a loss of control: epistemic dependence and the accountability gap. By drawing on the literature on testimony and moral responsibility, in addition to discussing some of the basic concepts of machine learning, it will be argued that the relevant loss of control might shape the motivational structure of decision-makers in a way that is ethically non-beneficial. Therefore, even under the assumption that learning algorithms make fairer or more objective decisions than human experts, the associated costs stemming from the loss of control might yet make delegating high-stakes decisions to learning algorithms ethically unfavourable.

Fully Autonomous AI
Wolfhart Totschnig (Universidad Diego Portales)

In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human oversight. This capacity is important especially in complex environments, where the situations that the agent will encounter cannot be predicted in detail and where the actions that the agent is meant to perform in these situations hence cannot be precisely defined in advance. Examples are driving a vehicle in everyday traffic or engaging enemy combatants on the battlefield. “Autonomy” here signifies the capacity of the agent to choose by itself the appropriate course of action in these unpredictable environments.¹

In this use of the term “autonomy,” it is assumed that the agent will have a fixed goal or “utility function,” a set purpose with respect to which the appropriateness of its actions will be evaluated. That is, it is assumed that it will have been defined and established, in general terms, what the agent is supposed to do—driving a vehicle safely and efficiently from one place to another, say, or neutralizing all and only enemy combatants it encounters. The question is only whether the agent will be able to carry out the given general instructions in concrete situations.

From a philosophical perspective, this is an oddly weak notion of autonomy. In philosophy, the term “autonomy” is generally used to mean a stronger capacity, namely the capacity to “give oneself the law” (as Kant put it), to decide by oneself what one’s goal or principle of action will be. This understanding of the term derives from its Greek etymology (auto = “by oneself,” nomos = “law”). An instance of such autonomy would be an agent that, by itself, decides to devote its existence to a certain project—the attainment of knowledge, say, or the realization of justice. By contrast, any agent that has a predetermined and immutable goal or utility function would not be considered autonomous in this sense.

The aim of this paper is to argue that an artificial agent can have autonomy in the philosophical sense—or “full autonomy,” as I will call it for short. “Can” is here meant in the sense of general possibility, not in the sense of current feasibility. That is, I contend that the possibility of fully autonomous robots cannot be excluded, but do not mean to claim that such robots can be created today.

My argument stands in opposition to the predominant view in the literature on the long-term prospects and risks of artificial intelligence. The predominant view is that an artificial agent cannot exhibit full autonomy because it cannot rationally change its own final goal, since changing the final goal would be counterproductive and hence undesirable with respect to that goal.² I will challenge this view by showing that it is based on questionable assumptions about the behavior of intelligent agents. In particular, I will criticize the idea that the final goal or “utility function” of an artificial agent is necessarily, or even ideally, separate from its “world model.” I will arrive at the conclusion that a general AI may very well come to modify its final goal in the course of developing its understanding of the world, just as humans do.

¹ See, for instance, Russell and Norvig (2010, 18) and Johnson and Verdicchio (2017).

² See, for example, Yudkowsky 2001, 2008, 2011, 2012; Bostrom 2002, 2003, 2014; Omohundro 2008, 2012, 2016; Yampolskiy and Fox 2012, 2013.

Bostrom, Nick. 2002. “Existential risks: Analyzing human extinction scenarios and related hazards.” Journal of Evolution and Technology 9 (1). Accessed April 24, 2016 http://www.jetpress.org/volume9/risks.html.

———. 2003. “Ethical issues in advanced artificial intelligence.” Accessed March 9, 2016. http://www.nickbostrom.com/ethics/ai.html.

———. 2014. Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

Johnson, Deborah G., and Mario Verdicchio. 2017. “Reframing AI discourse.” Minds and Machines.

Omohundro, Stephen M. 2008. “The nature of self-improving artificial intelligence.” Accessed November 18, 2016. https://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf

———. 2012. “Rational artificial intelligence for the greater good.” In Singularity hypotheses: A scientific and philosophical assessment, edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, 161–176. Berlin: Springer.

———. 2016. “Autonomous technology and the greater human good.” In Risks of artificial intelligence, edited by Vincent C. Müller, 9–27. Boca Raton: CRC Press.

Russell, Stuart J., and Peter Norvig. 2010. Artificial intelligence: A modern approach. Third edition. Upper Saddle River: Prentice Hall.

Yampolskiy, Roman V., and Joshua Fox. 2012. “Artificial general intelligence and the human mental model.” In Singularity hypotheses: A scientific and philosophical assessment, edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, 129–145. Berlin: Springer.

———. 2013. “Safety engineering for artificial general intelligence.” Topoi 32 (2): 217–226.

Yudkowsky, Eliezer. 2001. Creating friendly AI 1.0: The analysis and design of benevolent goal architectures. San Francisco: The Singularity Institute.

———. 2008. “Artificial Intelligence as a positive and negative factor in global risk.” In Global catastrophic risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345. Oxford: Oxford University Press.

———. 2011. “Complex value systems in Friendly AI.” In Artificial general intelligence, edited by Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, 388–393. Berlin: Springer.

———. 2012. “Friendly artificial intelligence.” In Singularity hypotheses: A scientific and philosophical assessment, edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, 181–193. Berlin: Springer.

Keynote

Robotics and Artificial Intelligence: Ethical and Societal Challenges
Mark Coeckelbergh (University of Vienna)

Developments in robotics and neurotechnology promise to give us highly intelligent and much more autonomous machines than we have today, sometimes in close connection to humans and their brains. Visions of human-like robots and cyborgs, sometimes inspired by science-fiction, dominate the debate. There is fear and fascination. This talk argues against alarmism, but calls for more reflection on potential problems in the near-future. It gives an overview of some important and relatively urgent ethical and societal challenges raised by these projects and visions, with a focus on robotics and artificial intelligence. The talk also emphasizes the need for more policy making in this area, given that many issues are already relevant today with regard to smart devices and intelligent machines in home and work contexts.

AIs and the Boundaries of Legal Personhood: Why Uniqueness Matters
Fabrice Jotterand (Medical College of Wisconsin)

In this presentation, I examine the concept of legal personhood with regard to smart autonomous robots/AIs. In the coming years, advances in neuroscience and robotics are likely to challenge and potentially decenter our moral frameworks and legal systems as new forms of sophisticated entities appear among the human population. Our initial intuition is to bestow personhood on entities that are born as human beings, which confers de facto actual or potential rights and duties. But some legal definitions stretch the criteria of legal personhood to include entities such as corporations, robots, and AIs. I will use what I call the Argument from Uniqueness to demonstrate that reflections on the status of AIs ought not to be based on an analogy to human beings with regard to similar capacities and demeanors (a functionalist approach) but rather on the idea that the more unique (in biological and personality uniqueness) and irreplaceable an entity is, the more protection and rights should be granted to that entity. This evaluation should be based on criteria of kind (ontological status) rather than degree (functionalist status) to avoid a “gradation of worth” grounded in developmental stages (i.e., infants vs. adults, normal vs. augmented, etc.) or mere capacities. In light of my analysis, I conclude that there is not enough uniqueness in sentient robots to grant them the full legal status of personhood. Only entities that cannot be replaced would have full legal status as persons, and therefore natural personhood should be reserved for human persons.

Robots as Others Expectations Moral Agents: From Human to Interactive Relationships
Zhengqing Zhang (Colorado School of Mines)

The conception of robots as moral agents is often criticized. Moral agents are thought to be (1) individuals with a subjective life with which others can enter into some type of relationships and (2) able to make autonomous decisions on the basis of some ideas of right and wrong. Both (1) and (2) make it possible for moral agents to be held morally accountable for their actions, that is, emotionally or rationally criticized with an aim of reformation.

The argument against robots as moral agents thus concerns emotional subjectivity and autonomy or free will. First, those who reject robots as moral agents argue that robots do not have subjective emotional states. People, who do have subjective emotional states, thus cannot participate in emotional relationships with robots, something they can do with human moral agents. Second, it is pointed out that robots do not have autonomy or free will. Robots do not act on their own intentions or purposes. The purposes of robots spring wholly from their engineering designers or programmers. Robots do not act on their own. Thus robots cannot be held morally accountable or subjected to emotional or rational criticism with an aim of reforming them.

The conceptions of moral agency on which these arguments rest deserve to be reconsidered. Consider, first, the issue of emotional subjectivity. When humans perform actions, their emotional states are not directly visible to others, nor are their intentions and purposes. As a result, when making moral judgements about others, what we use are expectations about their actions and intentions — which always leaves some level of uncertainty in such judgments. The emotions and intentions of others are necessarily to some extent our projections. This anthropomorphism is especially pronounced in military and medical areas, where robots are regularly felt to be human-like moral agents. Such robots are known as functional moral agents.

Second, even when robots are designed and programmed by engineers, it is not always possible to predict precisely what behaviors they will exhibit, any more than it is possible to predict the behaviors of autonomous human agents to whom we attribute free will. Indeed, among the general public, people often talk about intelligent machines such as autonomous vehicles and caring robots as performing moral actions independently of their engineer designers and programmers.

I thus argue for a new conception of moral agency. For a robot to be a functional, anthropopathic moral agent, those who interact with it need only decide to treat it as a moral agent, that is, as having subjectivity and free will. Their expectations in these regards are sufficient. The classical conception of the moral agent as possessing its own emotions and intentions is transformed into what I call an others-expectation moral agent. In concrete, everyday human-robot interactions, a moral agent need not be defined by the moral states or free will of a subject, but by interactive relationships.

Virtuous v. Utilitarian Artificial Moral Agents
William A. Bauer (North Carolina State University)

The day has arrived when artificial agents operate as moral agents. Even if they’re not yet morally responsible for their actions (we are), they’re morally accountable because they (can) autonomously increase good or bad in the world (Floridi 2013). Therefore, it is incumbent on humans, the designers of artificial moral agents (AMAs), to establish moral parameters for their interaction with humans, animals, and other AMAs (Wallach and Allen 2009). How do we want AMAs to make moral decisions? What kinds of moral characteristics do we want AMAs to exhibit?

Although metaethical debates also need resolution, for now the appropriate level of ethical analysis for AMAs is normative theory. We should at least consult the big three traditions in moral philosophy (virtue theory, consequentialism, and deontology) in deciding how to load values into AMAs. My focus in this paper is comparing ‘virtuous AMAs’ and ‘utilitarian AMAs’. (‘Kantian’ or ‘deontological AMAs’ are probably too strict, for Kant’s theory demands absolute universalizability of moral rules.)

A sophisticated model of a virtuous AMA was recently proposed by Howard and Muntean (2016, 2017) (hereafter, H&M). Broadly applying virtue theory, they emphasize pattern-recognition and situational learning from moral examples (i.e., machine learning applied to ethics) to build up a set of dispositional states which underpin virtuous behavior (or, something approximating that). In developing their model, H&M employ several analogies with human moral learning to illustrate the plausibility of the virtuous AMA. In response, I will suggest where their analogies are vulnerable.

Furthermore, I challenge the moral particularist assumptions of H&M’s model. Favoring a more generalist approach, I will contrast the virtuous AMA model with a rule-utilitarian AMA model that incorporates a flexible, continually updated system of consequentialist rules. I will argue that the rule-utilitarian AMA, especially one situated within a two-level utilitarian framework (Hare 1981), can account for nearly everything the virtuous AMA can account for, while also offering some advantages.
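The two-level structure can be illustrated with a toy sketch: follow standing rules when exactly one applies, and drop to explicit expected-utility comparison when rules conflict or are silent. The rule set, situation fields, and utility numbers below are placeholders for illustration, not a claim about how such an AMA should actually be specified.

```python
# Toy sketch of a two-level, rule-utilitarian decision structure: intuitive-level
# rules first, an explicit expected-utility "critical level" as a fallback.
# Rules and numbers are placeholders, not proposed moral weights.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]   # does this rule apply in the situation?
    action: str                       # what it prescribes

RULES = [
    Rule("do_not_deceive", lambda s: s.get("deception_possible", False), "tell_truth"),
    Rule("prevent_harm", lambda s: s.get("harm_imminent", False), "intervene"),
]

def expected_utility(situation: dict, action: str) -> float:
    # Placeholder: sum of outcome utilities weighted by assumed probabilities.
    return sum(p * u for p, u in situation.get("outcomes", {}).get(action, []))

def decide(situation: dict) -> str:
    prescribed = {r.action for r in RULES if r.applies(situation)}
    if len(prescribed) == 1:                      # intuitive level: one rule settles it
        return prescribed.pop()
    candidates = prescribed or set(situation.get("options", []))
    return max(candidates, key=lambda a: expected_utility(situation, a))  # critical level

situation = {
    "deception_possible": True,
    "harm_imminent": True,
    "outcomes": {
        "tell_truth": [(1.0, 2.0)],
        "intervene": [(0.8, 5.0), (0.2, -1.0)],
    },
}
print(decide(situation))   # rules conflict here, so expected utility picks "intervene"
```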

Additionally, I will ask a big-picture question: need the specific ethical framework for AMAs match the general framework best suited for the moral challenges of an expanding global-technological environment? I suggest not, arguing that (for instance) we could consistently promote virtue theory for a global-technological society (as Vallor 2016 recommends), or some other normative theory, while using rule-utilitarianism in AMAs. This is because rule-utilitarianism is arguably compatible with some of the most important considerations of those other theories.

Aristotle. 350 B.C.E. Nicomachean Ethics. W.D. Ross (trans.). The Internet Classics Archive. http://classics.mit.edu/Aristotle/nicomachaen.html.

Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies. New York: Oxford UP.

Crisp, R. 1996. Mill on Virtue As a Part of Happiness. British Journal for the History of Philosophy 4(2): 367-380.

Floridi, L. 2013. The Ethics of Information. New York: Oxford UP.

Floridi, L. 2014. The 4th Revolution. New York: Oxford UP.

Guarini, M. 2006. Particularism and the Classification and Reclassification of Moral Cases. IEEE Intelligent Systems 21(4), Jul-Aug: 22-28. DOI: 10.1109/MIS.2006.76.

Guarini, M. 2011. Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate. Machine Ethics, M. Anderson and S.L. Anderson. New York: Cambridge UP. 316-334.

Hare, R.M. 1981. Moral Thinking: Its Levels, Method, and Point. New York: Oxford UP.

Hooker, B. 2000. Ideal Code, Real World. New York: Oxford UP.

Howard, D. and Muntean, I. 2016. A Minimalist Model of the Artificial Autonomous Agent (AAMA). Association for the Advancement of Artificial Intelligence (www.aaai.org).

Howard, D. and Muntean, I. 2017. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Philosophy and Computing, Philosophical Studies Series 128, T.M. Powers (ed.). Springer.

Kant, I. 1785. Groundwork for the Metaphysics of Morals. www.earlymoderntexts.com.

Mill, J.S. 1861. Utilitarianism. www.earlymoderntexts.com.

Vallor, S. 2016. Technology and the Virtues. New York: Oxford UP.

Wallach, W. and Allen, C. 2009. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford UP.

The Right(s) Question: Can and Should AI Have Standing?
David Gunkel (Northern Illinois University)

This paper takes up and investigates whether AI and other forms of emerging neurotechnology either can or should have rights. The examination of this subject matter will proceed by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning emerging technology and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a different way of formulating the "rights question" that effectively challenges the existing rules of the game and provides for other ways of theorizing moral and legal status.

Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective
Qin Zhu, Thomas Williams, & Blake Jackson (Colorado School of Mines)

Recent research has suggested that humans often perceive robots as moral agents, which suggests that robots will be expected to adhere to the moral norms that govern human behavior. Moreover, our own recent research suggests that humans may take cues from robot teammates as to what norms apply within their shared context. As such, we argue that a truly socially integrated robot must be able to clearly communicate its willingness to adhere to shared moral norms. We further argue that such a robot must also be willing to communicate its objection to others’ proposed violations of such norms, through, for example, blame-laden moral rebukes, even if such rebukes would violate other standing norms such as politeness, which are also necessary for social and ethical human-robot interaction. In this paper, by drawing on the resources in Confucian ethics, we argue that this ability to respond to unethical human requests using blame-laden moral rebukes is crucial for robots to contribute to cultivating the “moral ecology” of the human-robot system, and can and should be considered as one criterion for assessing a robot’s level of artificial moral agency.

Confucian ethics argues that the responsibilities of a person are often prescribed by the roles (e.g., father, son, engineer) assumed in specific communal contexts. As a true teammate, a morally competent robot has a role ethics of “caring” about the cultivation of the moral selves of other teammates. Social robots have a role ethics of helping human teammates to better reflect on what kind of people they are becoming and what virtues are cultivated in themselves when they make specific requests. According to Confucius, a good friend has the role ethics of remonstrating with you when the friend sees you committing a wrongdoing.

Blame-laden moral rebukes may also allow human teammates to cultivate the virtue of reciprocity (shu, 恕). Unlike the Christian Golden Rule, the virtue of reciprocity is framed negatively: it prescribes what not to do, on the assumption that human teammates do not wish for others to humiliate them. For Neo-Confucians such as Wang Fuzhi, timely moral remonstrations are crucial. When a slightly selfish desire arises, Wang suggested, it will recede if it is blamed immediately. Without timely blame-laden moral rebukes, the moral ecology of the human-robot system can be negatively affected, fostering vices rather than virtues in human teammates. However, blame-laden moral rebukes are not the only strategy robots may use to ensure adherence to moral norms, and they are not, in fact, our ultimate aim. Confucians distinguish the petty person (xiaoren, 小人) from the exemplary person (junzi, 君子) in terms of their distinct reactions to blame. Unlike the petty person, the exemplary person turns blame into an opportunity for self-cultivation. From the Confucian perspective, the ultimate goal is to shift the vehicle for moral development from robot-generated blame (via blame-laden moral rebukes) to opportunities for "self-blame" (wherein humans critically examine their own behaviors).

As robots are increasingly becoming teammates, friends, and companions, it is critical to reflect on what constitutes morally reliable human-robot interaction that can bring positive moral experience and moral development opportunities to human teammates. To design morally competent robots is to create not only reliable and efficient human-robot interaction, but also a robot-mediated environment in which human teammates can grow their own virtues.

Neuroscience and Ethical Decision-Making in Artificial Agents
Matthew A. Butkus (McNeese State University)

The advent of self-driving cars and artificially intelligent "bots" has introduced a significant ethical challenge into contemporary popular and professional discussions about how artificial agents should handle moral and ethical dilemmas (Hutchins, Kirkendoll, & Hook 2017; Kang et al. 2005; Millar 2016; Wallach & Allen 2010). Traditional examples like the "trolley problem" have entered popular parlance and drawn attention to the distinct possibility that "driverless cars" may opt to kill their passengers instead of causing greater casualties by hitting pedestrians. While this discussion is compelling, the larger issue is how (and whether) ethical decision-making and behaviors can be programmed at all, since actual ethical quandaries are far more complex than the traditional cases (Goodall 2016). Some ethical methodologies would seem easier to implement than others: utilitarian calculi and the deontological protection of rights have relatively straightforward correlates in programming (e.g., decision trees involving programmed constraints), as in the sketch below. Human decision-making, however, especially in complex ethical cases, is not so linear.
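The claim that these methodologies have "relatively straightforward correlates in programming" can be illustrated with a minimal, hypothetical Python sketch (the actions, utility values, and duty names are invented for illustration): hard deontological constraints become a filter, and the utilitarian calculus becomes a maximization over whatever survives the filter.

# Toy sketch: utilitarian scoring constrained by deontological rules.
# All actions, utilities, and duty names below are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    expected_utility: float                 # aggregate welfare estimate
    violated_duties: list = field(default_factory=list)

FORBIDDEN = {"harms_innocent", "deceives_user"}   # hard deontological constraints

def choose(actions):
    # Deontological step: discard any action that violates a hard constraint.
    permissible = [a for a in actions if not set(a.violated_duties) & FORBIDDEN]
    if not permissible:
        return None   # no permissible action; defer to a human operator
    # Utilitarian step: maximize expected utility over what remains.
    return max(permissible, key=lambda a: a.expected_utility)

options = [
    Action("swerve_into_wall", expected_utility=-5.0),
    Action("brake_hard", expected_utility=-1.0),
    Action("continue", expected_utility=0.0, violated_duties=["harms_innocent"]),
]
best = choose(options)
print(best.name if best else "defer")   # -> "brake_hard"

The linearity of this filter-then-maximize procedure is precisely what the abstract contrasts with human ethical cognition.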

Neuroscience and cognitive psychology have yielded significant insights into how we actually make decisions. We employ both algorithmic and heuristic mechanisms concurrently, yielding a "dual process" model of cognition. Additionally, when we consider complex ethical phenomena, we engage a variety of neural structures that access both information and the emotional valencing of that information (Rahaman, Kobti, & Snowdon 2010; Whitby 2008). Our actual ethical cognition involves reason, emotion, intuition, and habituation, in addition to cultural awareness and socialization. These defy easy translation into computational terms.
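As a rough, hypothetical illustration of the "dual process" picture (the cases, weights, and thresholds below are invented, and no claim is made that this mirrors the neural mechanisms cited), a fast heuristic path can answer familiar cases while novel or high-stakes cases fall through to a slower, affect-weighted deliberation:

# Toy dual-process sketch: a fast, habituated lookup handles familiar cases;
# novel or emotionally charged cases fall through to slower deliberation.
# Case names, weights, and thresholds are hypothetical illustrations.

HEURISTIC_CACHE = {                 # "System 1": habituated responses
    "routine_lane_change": "proceed",
    "pedestrian_in_crosswalk": "yield",
}

def emotional_valence(features):
    # Crude stand-in for affective appraisal: more potential harm, stronger signal.
    return min(1.0, 0.2 * features.get("potential_harm", 0))

def deliberate(features):
    # "System 2": slower weighing of reasons, modulated by the affective signal.
    harm = features.get("potential_harm", 0)
    benefit = features.get("benefit", 0)
    score = benefit - harm * (1 + emotional_valence(features))
    return "proceed" if score > 0 else "refrain"

def decide(case_name, features):
    if case_name in HEURISTIC_CACHE and emotional_valence(features) < 0.5:
        return HEURISTIC_CACHE[case_name]   # fast path
    return deliberate(features)             # slow path

print(decide("routine_lane_change", {"potential_harm": 0}))          # fast: proceed
print(decide("novel_dilemma", {"potential_harm": 4, "benefit": 3}))  # slow: refrain

What the sketch cannot capture, and what the presentation emphasizes, is that human intuition, habituation, and socialization are not reducible to a lookup table or a weighted sum.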

The proposed presentation would highlight current neuroscience and cognitive psychology as a framework for discussing the programming of ethical behaviors. Recent examples of robotics and AI will be incorporated both to show current deficits and to suggest potential mechanisms for working around identified barriers (e.g., ethical learning and habituation, moral uncertainty, top-down versus bottom-up learning) (Bogosian 2017; Lindner, Bentzen, & Nebel 2017; Quilici-Gonzalez et al. 2014; Wallach, Allen, & Smit 2008; Zhu & Li 2011).

Bogosian, K. (2017). Implementation of moral uncertainty in intelligent machines. Minds and Machines, 27(4), 591-608. doi:10.1007/s11023-017-9448-z

Goodall, N. J. (2016). Away from trolley problems and toward risk management. Applied Artificial Intelligence, 30(8), 810-821. doi:10.1080/08839514.2016.1229922

Hutchins, N., Kirkendoll, Z., & Hook, L. (2017). Social impacts of ethical artificial intelligence and autonomous system design. 2017 IEEE International Systems Engineering Symposium (ISSE), 1-5. doi:10.1109/SysEng.2017.8088298

Kang, J., Wright, D. K., Qin, S. F., & Zhao, Y. (2005). Modelling human behaviours and reactions under dangerous environment. Biomedical Sciences Instrumentation, 41, 265-270.

Lindner, F., Bentzen, M. M., & Nebel, B. (2017). The HERA approach to morally competent robots. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 6991-6997. doi:10.1109/IROS.2017.8206625

Millar, J. (2016). An ethics evaluation tool for automating ethical decision-making in robots and self-driving cars. Applied Artificial Intelligence, 30(8), 787-809. doi:10.1080/08839514.2016.1229919

Quilici-Gonzalez, J. A., Broens, M. C., Quilici-Gonzalez, M. E., & Kobayashi, G. (2014). Complexity and information technologies: an ethical inquiry into human autonomous action. Scientiae Studia, 12(Special Issue), 161-179. doi:10.1590/S1678-31662014000400009

Rahaman, S. F., Kobti, Z., & Snowdon, A. W. (2010). Artificial emotional intelligence under ethical constraints in formulating social agent behaviour. IEEE Congress on Evolutionary Computation, 1-7. doi:10.1109/CEC.2010.5586121

Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI & Society, 22, 565-582. doi:10.1007/s00146-007-0099-0

Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science, 2, 454-485. doi:10.1111/j.1756-8765.2010.01095.x

Whitby, B. (2008). Computing machinery and morality. AI & Society, 22, 551-563. doi:10.1007/s00146-007-0100-y

Zhu, Y., & Li, Q. (2011). Studying robotic intuition and emotion. 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), 680-683. doi:10.1109/AIMSEC.2011.6010244


Energy as a Universal Right: Applications for Artificial and Biological Beings
Tyler L. Jaynes (Utah Valley University)

We understand in physics that living objects and inanimate clumps of matter differ in one essential respect: the former have a greater tendency to capture energy from their environment than the latter. Jeremy England suggests that the irreversibility of a spontaneous instance of self-replication is quantitatively related to the entropy generated by the process, and argues that this is significant for the functioning of microorganisms and other self-replicators. If this holds true, we can speculatively state that the capture of energy and the generation of entropy are essential to life within an organism. Assuming that this speculation is valid and that a greater tendency to capture energy correlates with the persistence of a given organism, we can open a dialogue centered on an organism's right to generate, possess, and/or use energy for self-sustainment.
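For readers unfamiliar with the result the abstract leans on, England's relation is commonly glossed along the following lines (this paraphrase is offered here only as background, is not the author's formulation, and omits the technical caveats of the original derivation):

\[
\beta \langle Q \rangle + \Delta S_{\mathrm{int}} \;\ge\; \ln\frac{g}{\delta}
\]

where \( \beta \) is the inverse temperature of the surrounding bath, \( \langle Q \rangle \) the expected heat dissipated into it, \( \Delta S_{\mathrm{int}} \) the internal entropy change of the replicator, \( g \) its replication rate, and \( \delta \) its decay rate. The more decisively a system replicates rather than decays, the more entropy it must generate, which is the quantitative link between irreversibility and entropy production that the argument above relies on.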

Further, there is a growing debate around Saudi Arabia's decision to grant citizenship to an artificial being. If we are to take this action as more than economically or politically motivated, we need to consider the possibility that this artificial being possesses something previously thought to belong to humans alone. Legally recognized citizenship has been unique to humans in the political sphere, as it grants a particular set of rights needed to participate competently in society. In a realistic sense, we can state that a portion of our global society has accepted that artificial entities can competently bear these rights. We must also recognize that rights can no longer claim to be universal if we fail to incorporate artificial beings such as Saudi Arabia's Sophia, considered both as an artificial intelligence and as a robotic entity.

This argument is one segment of a more extensive work that intends to incorporate a definition of "energy" consistent with England's suggestions. Coupled with the recognition that the consumption of food and of electricity are both inseparable from modern living, we can state that foodstuffs and electricity need to be incorporated into a universal right regarding energy. This universal right would cover both the artificial and the biological beings that interact in today's global socio-political sphere, and would include other members of the animal kingdom as a subset of right-bearing entities (given that they require only food in this context). The argument urges the establishment of a universal right to energy and offers an anticipatory argument, grounded in a holistic bioethical perspective, to guide the development of such a right.

CRISPR, Gene Editing, and Cognitive Enhancement
Michael W. Nestor & Richard L. Wilson (The Hussman Institute for Autism & Loyola University)

Clustered regularly interspaced short palindromic repeats (CRISPR) genome editing has already transformed the direction of genetic and stem cell research. For more complex diseases, it allows scientists to make multiple genetic changes to a single cell simultaneously. Technologies for correcting multiple mutations in an in vivo system are already in development. On the surface, the advent and use of gene-editing technologies is a powerful tool to reduce human suffering by eradicating complex diseases with a genetic etiology. Gene drives are CRISPR-mediated alterations to genes that allow those genes to be passed on to subsequent populations at rates approaching 100% transmission. From an anticipatory biomedical ethics perspective, it is therefore possible to conceive of gene drives being used with CRISPR to permanently remove aberrant genes from wild populations carrying such mutations. As CRISPR technology continues to evolve, it is plausible that researchers will undertake projects such as cognitive enhancement by using CRISPR on the brain.
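The "rates approaching 100% transmission" claim can be illustrated with a toy, deterministic population-genetics sketch in Python (the parameter values are invented, and real drive dynamics involve fitness costs and resistance alleles that the sketch ignores): a heterozygote transmits the drive allele with probability (1 + c)/2 rather than the Mendelian 1/2, where c is the homing conversion efficiency.

# Toy deterministic model of gene-drive spread (no fitness cost, no resistance).
# c is the homing/conversion efficiency; c = 0 recovers ordinary Mendelian inheritance.
# Parameter values are illustrative only.

def next_freq(p, c):
    # Drive-allele frequency among the next generation's gametes:
    # homozygotes (p^2) always transmit it; heterozygotes (2p(1-p))
    # transmit it with probability (1 + c)/2 instead of 1/2.
    return p**2 + 2 * p * (1 - p) * (1 + c) / 2

def generations_to_spread(p0=0.01, c=0.95, threshold=0.99, max_gens=1000):
    p = p0
    for gen in range(1, max_gens + 1):
        p = next_freq(p, c)
        if p >= threshold:
            return gen
    return None   # did not approach fixation (e.g., c = 0)

print(generations_to_spread(c=0.95))  # drive allele exceeds 99% within roughly a dozen generations
print(generations_to_spread(c=0.0))   # None: a Mendelian allele at 1% stays at 1%

Under these toy assumptions the drive allele sweeps from 1% to above 99% of the population in about ten generations, which is the dynamic that makes both the therapeutic promise and the ethical stakes of gene drives so pronounced.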

However, a number of possible side effects could develop as a result of combining gene-editing and gene-drive technologies in an effort to eradicate complex diseases, and side effects could also develop as a result of using CRISPR to alter the brain. In this paper, we critically analyze, from an ethical perspective, the hypothesis that the combination of CRISPR and gene drive will have deleterious effects on human populations, by developing an anticipatory ethical analysis of the implications of using CRISPR together with gene drives in humans. Our main focus is on identifying and examining ethical issues related to using CRISPR and gene-drive technologies for the purpose of cognitive enhancement.

Cong L, Ran FA, Cox D, Lin S, Barretto R, Habib N, Hsu PD, Wu X, Jiang W, Marraffini LA, Zhang F (2013) Multiplex Genome Engineering Using CRISPR/Cas Systems. Science 339:819–823.

Esvelt, K. M., Smidler, A. L., Catteruccia, F., & Church, G. M. (2014). Concerning RNA-Guided Gene Drives for the Alteration of Wild Populations. doi:10.1101/007203

Esvelt, K. (n.d.) Gene Drives. Sculpting Evolution Available at: http://www.sculptingevolution.org/genedrives [Accessed March 12, 2017].

Saplakoglu Y, Bogardus K (2017) New gene drive technology could wipe out malaria, but is it safe? Science | AAAS Available at: http://www.sciencemag.org/news/2017/02/new-gene-drive-technology-could-wipe-out-malaria-it-safe [Accessed March 12, 2017].

Doudna JA, Charpentier E. Genome editing. The new frontier of genome engineering with CRISPR-Cas9. Science. 2014;346(6213):1258096.

Schwank G, Koo BK, Sasselli V, Dekkers JF, Heo I, Demircan T, Sasaki N, Boymans S, Cuppen E, van der Ent CK, Nieuwenhuis EE, Beekman JM, Clevers H. Functional repair of CFTR by CRISPR/Cas9 in intestinal stem cell organoids of cystic fibrosis patients. Cell Stem Cell. 2013;13(6):653-8.

Yin H, Xue W, Chen S, Bogorad RL, Benedetti E, Grompe M, Koteliansky V, Sharp PA, Jacks T, Anderson DG. Genome editing with Cas9 in adult mice corrects a disease mutation and phenotype. Nat Biotechnol. 2014;32(6):551-3.

Smith C, Abalde-Atristain L, He C, Brodsky BR, Braunstein EM, Chaudhari P, Jang YY, Cheng L, Ye Z. Efficient and Allele-Specific Genome Editing of Disease Loci in Human iPSCs. Mol Ther. 2015;23(3):570-7.

Xie F, Ye L, Chang JC, Beyer AI, Wang J, Muench MO, Kan YW. Seamless gene correction of beta-thalassemia mutations in patient-specific iPSCs using CRISPR/Cas9 and piggyBac. Genome Res. 2014;24(9):1526-33.

Esvelt, K. (n.d.) FAQ. Sculpting Evolution Available at: http://www.sculptingevolution.org/genedrives/genedrivefaq [Accessed March 12, 2017].

Anon (n.d.) What Can CRISPR Do to Help in the Fight Against Zika Virus? Synthego Available at: https://www.synthego.com/blog/what-can-crispr-do-to-help-in-the-fight-against-zika-virus/ [Accessed March 12, 2017].

Liang P, Xu Y, Zhang X, Ding C, Huang R, Zhang Z, Lv J, Xie X, Chen Y, Li Y, Sun Y, Bai Y, Songyang Z, Ma W, Zhou C, Huang J (2015) CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein & Cell 6:363–372.

Oye KA, Esvelt K, Appleton E, Catteruccia F, Church G, Kuiken T, Lightfoot SB-Y, Mcnamara J, Smidler A, Collins JP (2014) Regulating gene drives. Science 345:626–628.

Sun N, Zhao H (2013) Seamless correction of the sickle cell disease mutation of the HBB gene in human induced pluripotent stem cells using TALENs. Biotechnology and Bioengineering 111:1048–1053.

Wyss Institute (n.d.) Gene drives FAQ. Available at: https://wyss.harvard.edu/staticfiles/newsroom/pressreleases/Gene%20drives%20FAQ%20FINAL.pdf

Ben Ouagrham-Gormley S, Vogel KM (2016) Gene drives: The good, the bad, and the hype. Bulletin of the Atomic Scientists Available at: http://thebulletin.org/gene-drives-good-bad-and-hype10027 [Accessed March 12, 2017].

Gulati S (2008) Technology-Enhanced Learning in Developing Nations: A review. The International Review of Research in Open and Distributed Learning 9.

Fu Y, Foden JA, Khayter C, Maeder ML, Reyon D, Joung JK, Sander JD (2013) High-frequency off-target mutagenesis induced by CRISPR-Cas nucleases in human cells. Nature Biotechnology 31:822–826.

Dipert RR (2010) The Ethics of Cyberwarfare. Journal of Military Ethics 9(4):384–410.

Werhane PH (2002) Moral imagination and systems thinking. Journal of Business Ethics 38(1-2):33–42.

Ciulla J, Martin C, Solomon RC (2007) Honest Work: A Business Ethics Reader. New York: Oxford University Press.

Babar MI, Ghazali M, Jawawi DNA, Zaheer KB (2015) StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems. Plos One 10.

Allhoff F, Henschke A, Strawser BJ (2016) Binary bullets: the ethics of cyberwarfare. New York: Oxford University Press.

National Academies of Sciences, Engineering, and Medicine; Division on Earth and Life Studies; Board on Life Sciences; Committee on Gene Drive Research in Non-Human Organisms: Recommendations for Responsible Conduct (2016) Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values. The National Academies Press Available at: https://www.nap.edu/catalog/23405/gene-drives-on-the-horizon-advancing-science-navigating-uncertainty-and [Accessed March 12, 2017].

Verbeek P-P (2011) Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press.