Emory Profile
Undergraduate research explores the ethics of artificial intelligence

In his philosophy honors thesis, Ben Goldfein argues that “we need to shift the current ethos of technological progress and start realizing that it is not just the end results which matter, but also the means to get there.” Emory Photo/Video

Artificial intelligence, the ability of computers to spot patterns in large volumes of data, has advanced so rapidly that researchers promise self-driving cars and medical diagnoses from facial scans in the near future.

The ascent of AI into the domain of human exceptionality has prompted as much anxiety as hope. Science fiction is littered with stories of AI, programmed to protect human life, becoming sentient and murderous.

Ben Goldfein, a senior in Emory College of Arts and Sciences, argues in his philosophy honors thesis that the answer to this modern angst lies in Aristotle’s ancient Nicomachean Ethics, which describes moral action as doing the right thing, to the right person, at the right time, toward the right end and in the right way.

Aristotle’s virtue ethics eliminates any constant right answer, demanding instead the best possible answer in a given situation. Such consideration would make AI far more like us than something like Amazon’s Alexa – and expand what it means to be human in the ever-shifting technological landscape.

“There will be a point when we no longer have control over AI beyond the set of rules we give or the outcomes we want,” Goldfein says. “We need to shift the current ethos of technological progress and start realizing that it is not just the end results which matter, but also the means to get there.”

He will continue to explore these questions after graduation as he pursues a master of letters in moral, political and legal philosophy through the Robert T. Jones Jr. Fellowship. The fellowship is part of the Bobby Jones Scholars program that honors the most outstanding representatives from Emory and the University of St Andrews in Scotland, where Goldfein will study.

Cutting-edge technology, lagging ethics

As AI has broken free from abstract theory into everyday reality in recent years, consideration of its ethical implications has lagged. The European Union is at the forefront, having pledged just this spring to issue guidelines for the use of AI by year’s end.

So far, many of the alarming AI headlines – such as the facial recognition that helped police spot and arrest a fugitive in a crowd of 50,000 concertgoers in China, and a patent pending to allow Alexa to listen to all conversations – highlight the actions of the people behind the machines.

At the current pace of technology, though, it’s not unreasonable to begin questioning the moral responsibility of the computers themselves, says Thomas Flynn, Samuel Candler Dobbs Professor of Philosophy, who oversaw Goldfein’s thesis.

“The question is just starting to arise, whether as these machines develop their problem-solving strength, does it bring any moral conscience with it,” Flynn says. “Can a computer beat us in chess? Yes. But does it care it can beat us, and what are the limits of the machine’s responsibility for what is done?”

The questions are not simply academic. One of Goldfein’s neighbors was in a crash involving an early example of an autonomous vehicle. A driver was test-driving a Mercedes that the salesperson said had the ability to stop itself at a stop sign. It didn’t.

Goldfein’s neighbor was injured. The legal system, though, has yet to decide who, or what, was responsible. The driver blamed the salesperson. The salesperson blamed the manufacturer. The manufacturer blamed the computer.

“It sounds fake, but you’re asking if you can punish a computer and if justice, which is a feeling, can apply to a developed machine,” Goldfein says. “It’s so fascinating because this isn’t science fiction. This is happening.”

“Just for fun”

At first blush, Goldfein seems an unlikely candidate for considering the moral behavior of AI and the people who design such machines.

The Atlanta native and son of two Emory alumni arrived on campus prepared to major in business and theater studies – the fields most likely to be helpful in his goal of becoming a career magician.

“I wanted to be the next David Copperfield,” says Goldfein, who was named a “Rising Star” by the Society of American Magicians when he was 16.  

He ended up a philosophy major, with an ethics minor, because he promised himself to take one course every semester “just for fun.”

The spring of his first year at Emory College, his pick for fun was an interdisciplinary philosophy course, the ethics of violence. The course, which included reading “Crime and Punishment” and debating questions such as whether a more brutal death would come from the dull or the sharp end of an ax, was revelatory.

Other fields taught him what to think about a certain topic, Goldfein says. Philosophy was about how to think.

“He has challenged the ideas of ethics and emotions just as philosophy is reconsidering its old idea that ethical thinking requires only reason,” says Cynthia Willett, Samuel Candler Dobbs Professor of Philosophy, who served on Goldfein’s thesis committee and has taught him in graduate seminars.

“Now we realize people can’t set feelings aside because we think through our emotions,” adds Willett, who is working with Goldfein on a follow-up paper to his thesis. “With all of the science and technology, you have to be there with the humanity, too, which is why Ben’s work is so timely.”

Considering the human question

Goldfein’s work focuses on the high-level steps AI programmers and architects should take, or refrain from taking, rather than on the programming itself.

His research included a year spent as a visiting student at the University of Oxford in England. He studied philosophy, politics and economics, with a focus on analytic and moral philosophy in the St. Peter’s College Visiting Student Programme. Goldfein was also an Undergraduate Honors Fellow at the Fox Center for Humanistic Inquiry.

His coursework and honors were not confined to the humanities. Goldfein was Emory’s first Mt. Vernon Fellow, part of a competitive leadership program in Washington, D.C.

He also chose to take neuroscience classes instead of computer science to understand the next level of AI, the “deep learning” that mimics the human brain.
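For readers unfamiliar with the term, deep learning earns its brain-inspired reputation from layers of simple computational units whose weighted connections loosely resemble neurons and synapses. The toy Python sketch below is illustrative only, and not drawn from Goldfein’s thesis or coursework; it shows that layered structure in miniature.

    # Illustrative sketch only: a minimal feedforward network showing
    # the brain-inspired idea behind deep learning -- layers of simple
    # units whose weighted connections loosely mimic neurons and synapses.
    import numpy as np

    def sigmoid(x):
        # Squashing function standing in for a neuron's firing response
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    # Two layers of random connection weights ("synapses")
    w1 = rng.normal(size=(3, 4))  # 3 inputs -> 4 hidden "neurons"
    w2 = rng.normal(size=(4, 1))  # 4 hidden -> 1 output

    x = np.array([0.5, -1.2, 0.3])   # an input pattern
    hidden = sigmoid(x @ w1)         # hidden-layer activations
    output = sigmoid(hidden @ w2)    # the network's "decision"
    print(output)                    # a value between 0 and 1

In a real deep-learning system, many such layers are stacked and the connection weights are tuned automatically from data rather than set at random, which is what allows the network to "learn."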

Some of the questions of moral sentience – not only whether machines could achieve it but also the array of factors that go into it – help lessen the burden on a programmer to code ethics or virtue into devices, says Andrew Kazama, a lecturer in the Department of Psychology.

“From a neuroscience perspective, there are so many things controlling our behavior, from stimuli in our environment to the reinforced learning which machines do now,” says Kazama, who taught Goldfein and served on his thesis committee.

“Ben’s thesis is an important piece of writing because it shows there are so many things that scientists are going to have to grapple with, not just the technology, if we want our work to be used for good,” Kazama adds.

Goldfein envisions a future working for good: he is considering a master’s degree in bioethics, followed by a career as a lawyer, researcher or academic.

“Technology research has often been kept to the hard sciences, but anything that concerns humanity must consider these questions,” Goldfein says. “I’m fascinated by the human aspect of what we can do and whether or how we should, and I am excited to continue this research in hopes of creating meaningful change.”

