Center for Ethics appoints Anne-Elisabeth Courrier as AI ethics liaison

Anne-Elisabeth Courrier, a leader in research and teaching at the intersection of AI ethics, law and data, has assumed the role of artificial intelligence ethics liaison, a new position created in the Emory University Center for Ethics.  

Courrier, an adjunct professor at the Emory School of Law and associate professor in comparative law at Nantes University in France, joined the Center for Ethics as a visiting fellow in 2019. For several years, she has been an advocate and driving force behind the center’s engagement with AI ethics. In this new role, she will further support efforts across Emory to integrate AI ethics into programming, coursework and faculty development. 

“Anne-Elisabeth Courrier has been a mainstay in the center’s efforts to explore issues in AI ethics,” says John Lysaker, director of the Center for Ethics. “Beyond her extensive experience and international perspective, she also brings great energy and imagination to the position and an amazing track record of building and maintaining collaborative partnerships.” 

Courrier’s expertise is internationally rooted. Her PhD in law from the University of Paris I, Panthéon-Sorbonne, in collaboration with Oxford University, compared public service ethics in Great Britain and France. Through postdoctoral fellowships at the European University Institute in Florence, Italy, and Corvinus University of Budapest in Hungary, her focus evolved to ethics and law in corporate social responsibility, public governance and, more recently, big data and AI, viewed from both European and American perspectives. 

While at Emory, she created a program called “Simuvaction on AI,” which convenes university students from around the world and across fields of study to actively engage with, practice and contribute to the ethical development of AI through a simulation of the Global Partnership on AI’s summit. She also designed and launched the online course “The Ethical Path to AI: Navigating Strategies for Innovation and Integrity” and has organized multiple international conferences, seminars and workshops at Emory. 

Courrier will begin her role with a listening tour, meeting with deans to learn how the center can best serve the unique AI ethics needs of each school. She shares more about her background and objectives for the role below. 


Q: How did you become interested in AI ethics, specifically as it relates to law? 

I have always had a passion for understanding how different societies construct the normative fabric between law and ethics. Adjusting and balancing between legal norms and ethical norms is specific to each culture. When I came to the United States in 2017, I wanted to explore the intersection of law and ethics here, to understand how it operates in American society. I started by working on the law and ethics of data, especially health data, comparing the European and American perspectives. That led me into the world of AI, examining how law, governance and ethics influence one another. I’ve kept that comparative perspective, now looking at how the United States, Europe and Canada each approach AI governance.  
 
AI is transforming our world in profound ways on both a personal and collective level. We are very lucky to be living in a time when we can think about the foundations of the society we will leave for future generations. That’s a powerful opportunity, but also a responsibility. 

 

Q: What does your new role as AI ethics liaison mean to you? How do you envision shaping it? 

The role is very much about collaboration and building bridges between schools, departments and people. As AI ethics liaison, I will do a lot of listening at first to understand the needs, challenges and desires of each unit. What are the ethical questions researchers, students, educators and staff are wrestling with? 

At the Center for Ethics, our director John Lysaker tells people that we see ethics “not as compliance, but as aspiration.” That philosophy will be central to how I approach this role. The idea is to work together to articulate and align plans for ethics to create a culture of awareness and shared responsibility around AI. Not to refrain from using AI, but to emphasize equity and collective responsibility, so that we can all thrive and find delight in its use. 

 

Q: You recently returned from Quebec for the Simuvaction on AI 2025 project. Tell us about it. 

I started the Simuvaction project in 2022. The name is a combination of three keywords: simulation, innovation and action. It’s a six- to eight-week learning experience where students don’t just learn about ethical AI development and governance, they live it. They simulate the work of a real international body, the Global Partnership on AI, and take on the roles of countries or stakeholders like Meta, international organizations and NGOs.  

This year, we had 35 students from 14 universities across the world, including Emory, representing diverse disciplines and backgrounds. Their task was to collaborate, negotiate and write actionable recommendations based on a challenging scenario. This year’s theme was the universal right to work. They had to answer questions like: If AI devices are used not only to address medical issues but also to boost productivity, what does the right to work mean, and what is the economic value of a job? What policies can ensure an equitable and balanced future? 

In the AI era, especially with generative AI, we have knowledge at the tip of our fingers. So, the question becomes: Why bother learning? Simuvaction responds to that. Students engage in an experiential learning exercise based on real-world issues related to AI and ethics. They practice discernment, develop courage and refine their ways of doing and being. It’s about mobilizing knowledge in a meaningful and timely way. 

 

Q: How does your experience with international partnerships influence your work at Emory? 

I often say I’m a bridge builder. I’ve spent more time living abroad than in my own country. That shapes how I approach my work. Each time I arrive somewhere new, I start from scratch. It humbles you. It makes you ask questions and seek out common ground. 

One of my inspirations is Madame Simone Veil, the first woman president of the European Parliament and a Holocaust survivor. As she reflected, “Whether we like it or not, we are all responsible for what will unite us tomorrow.” That notion captures the spirit of my work: How can we use law and ethics to express the roots of a society and build toward the common good? 

I believe in international collaboration. In AI governance, it’s so important to have a dialogue with other countries. If we can communicate and align across borders about regulations and norms, we can ensure that AI serves all of humanity.  

 

Q: How do you hope AI ethics will evolve over the next five years at Emory and in the world? 

At Emory, I’d love to see every student walk across the Commencement stage with a strong culture of AI ethics. That would be perfectly aligned with Emory’s mission and the AI.Humanity initiative to lead AI in service to humanity.  

All our students are involved with AI on some level. They don’t need to be experts when it comes to AI ethics, but they should have a basic awareness. I hope they will have little lightbulb moments throughout life, asking questions like: When I interact with AI, what should I be asking? What am I consenting to? Am I intentional about my use of AI, and how do I want to contribute to human-machine interaction? 

On a global scale, I hope we come to see AI as a friend, a helpful assistant. But we must stay aware of potential downsides and make sure we keep humans in the loop. I believe that by leaning on humanity’s skills in communication and connection, together, we can create a bright future that uses AI for good. 

