AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence

World-renowned artificial intelligence scholars (from left) Matthias Scheutz, Carissa Véliz, Seth Lazar, Ifeoma Ajunwa and Mona Sloane will discuss the moral and social complexities of AI and how it may be shaped for the benefit of humanity during a lecture series in April and May.

As society increasingly relies on artificial intelligence (AI) technologies, how can ethically committed individuals and institutions articulate values to guide their development and respond to emerging problems? Join the Office of the Provost to explore the ethical implications of AI in a new AI.Humanity Ethics Lecture Series.

Over four weeks in April and May, world-renowned AI scholars will visit Emory to discuss the moral and social complexities of AI and how it may be shaped for the benefit of humanity. A reception will follow each lecture.


Matthias Scheutz: “Moral Robots? How to Make AI Agents Fit for Human Societies” 

Monday, April 11

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)

Register here.

AI is different from other technologies in that it enables and creates machines that can perceive the world and act on it autonomously. We are on the verge of creating sentient machines that could significantly improve our lives and better human societies. Yet AI also poses dangers that are ours to mitigate. In this presentation, Scheutz will argue that AI-enabled systems — in particular, autonomous robots — must have moral competence: they need to be aware of human social and moral norms, be able to follow these norms and justify their decisions in ways that humans understand. Throughout the presentation, Scheutz will give examples from his work on AI robots and human-robot interaction to demonstrate a vision for ethical autonomous robots. 

Matthias Scheutz is a professor of cognitive and computer science, director of the Human-Robot Interaction Laboratory and director of the human-robot interaction degree programs at Tufts University. His research interests span AI, artificial life, cognitive modeling, complex systems, foundations of cognitive science, human-robot interaction, multi-scale agent-based models and natural language understanding.


Seth Lazar: “The Nature and Justification of Algorithmic Power” 

Monday, April 18

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)

Register here.

Algorithms increasingly mediate and govern our social relations. In doing so, they exercise a distinct kind of intermediary power: they exercise power over us; they shape power relations between us; and they shape our overarching social structures. Sometimes, when new forms of power emerge, our task is simply to eliminate them. However, algorithmic intermediaries can enable new kinds of human flourishing and could advance social structures that are otherwise resistant to progress. Our task, then, is to understand and diagnose algorithmic power and determine whether and how it can be justified. In this lecture, Lazar will propose a framework to guide our efforts, with particular attention to the conditions under which private algorithmic power either can, or must not, be tolerated.

Seth Lazar is professor of philosophy at the Australian National University (ANU), an Australian Research Council (ARC) Future Fellow and a distinguished research fellow of the University of Oxford Institute for Ethics in AI. At ANU, he was founding lead of the Humanizing Machine Intelligence project and recently launched the Machine Intelligence and Normative Theory Lab, where he directs research projects on the moral and political philosophy of AI. He is general co-chair of the ACM Fairness, Accountability and Transparency Conference 2022, was program co-chair of the ACM/AAAI AI, Ethics and Society conference in 2021, and is one of the authors of a study by the U.S. National Academies of Sciences, Engineering and Medicine reporting to Congress on the ethics and governance of responsible computing research. He has given the Mala and Solomon Kamm Lecture in Ethics at Harvard University and will give the 2023 Tanner Lectures on AI and human values at Stanford University.


Ifeoma Ajunwa: “The Unrealized Promise of Artificial Intelligence” 

Thursday, April 28

Lecture at 4 p.m., reception at 5:30 p.m.

Oxford Road Building — Presentation Room and Living Room/Patio

Register here.

AI was forecast to revolutionize the world for the better. Yet this promise remains unrealized. Instead, there is a growing mountain of evidence that automated decision making is not revolutionary; rather, it has tended to replicate the status quo, including the biases embedded in our societal systems. The question, then, is what can be done? The answer is twofold: first, what can be done to prevent automated decision making from both enabling and obscuring human bias; second, what proactive measures could allow AI to work for the greater good.

Ifeoma Ajunwa is an associate professor of law with tenure at the University of North Carolina School of Law, where she is also the founding director of the AI Decision-Making Research Program. Prior to that, Ajunwa earned tenure at Cornell University. She is currently a Fulbright Scholar (2021-2022) and a nonresidential visiting fellow at Yale Law School, and has been a faculty associate at the Berkman Klein Center at Harvard Law School since 2017. Her research interests include race and the law, law and technology, employment and labor law, health law and more.


Carissa Véliz: “On Privacy and Self-Presentation Online” 

Thursday, May 5

Lecture at 4 p.m.

Online via Zoom

A long tradition in philosophy and sociology considers self-presentation the main reason privacy is valuable, often equating control over self-presentation with privacy. Véliz argues that, even though the two are tightly connected, they are not the same — and that overvaluing self-presentation leads us to misunderstand the threat to privacy online. To combat some of the negative trends we witness online, she contends, we need, on the one hand, to cultivate a culture of privacy, in contrast to a culture of exposure (for example, the pressure on social media to be on display at all times). On the other hand, we need to readjust how we understand self-presentation online.

Carissa Véliz is an associate professor at the Faculty of Philosophy and the Institute for Ethics in AI, as well as a tutorial fellow at Hertford College at the University of Oxford. Her work centers on digital ethics (with an emphasis on privacy and AI ethics) and practical ethics, and more generally on political philosophy and public policy.


Mona Sloane: “About the Social in AI”

Monday, May 23

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Room 208

Register here.

As artificial intelligence progresses and becomes part of our everyday lives, evidence of its potential to negatively impact society grows. But to date, we do not have a robust concept of AI as a social phenomenon. The harm that AI can cause is often treated as a technical problem requiring a technical solution — an approach that falls short of addressing the root problems of discriminatory AI. Sloane’s lecture will outline a new approach to understanding AI’s social dimensions, emphasizing the social organization of “ethical AI,” interdisciplinary AI auditing, AI accountability techniques and practice-based compliance frameworks for AI organizations. Sloane will showcase how cross-disciplinary collaborations help diagnose harmful AI and create technology that is in the public interest.

Mona Sloane, PhD, is a senior research scientist at the NYU Center for Responsible AI, an adjunct professor at NYU’s Tandon School of Engineering, a fellow with NYU’s Institute for Public Knowledge, a fellow with the GovLab and the technology editor for Public Books. Currently, she serves as inaugural director of the “This Is Not A Drill” program on technology, inequality and the climate emergency at NYU’s Tisch School of the Arts and holds an affiliation with the Tübingen AI Center at the University of Tübingen in Germany. Her research examines the intersection of design, technology and society, specifically in the context of AI design and policy.

For more information about the AI.Humanity Ethics Lecture Series, contact Melissa Daly.
