Emory University has joined more than 200 of the nation’s leading AI authorities as a member of the new U.S. Artificial Intelligence Safety Institute Consortium (AISIC). Established by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, the consortium will promote the development and deployment of trustworthy AI.
As a member of AISIC, Emory will collaborate with Fortune 500 companies, academic teams, non-profit organizations, and other U.S. government agencies to enable safe AI systems and support future standards and policies to guide the responsible use of AI.
“Emory is honored to join the U.S. Artificial Intelligence Safety Institute Consortium and contribute our expertise to help ensure the ethical and responsible use of AI on the world stage,” said Ravi V. Bellamkonda, Emory’s provost and executive vice president for academic affairs. “The goals of the consortium perfectly align with Emory’s AI.Humanity Initiative and its focus on guiding the AI revolution to improve human health, generate economic value and promote social justice. We look forward to interdisciplinary collaboration with NIST and other member organizations as we develop human-centered AI systems and protocols that reflect our values and serve society.”
The consortium is dedicated to fulfilling the priorities defined in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It will be responsible for developing guidelines and tools across a number of focus areas, including industry standards for creating and deploying trustworthy AI; secure development practices for generative AI; authentication of digital content; red-teaming; privacy-preserving machine learning; and criteria for AI workforce skills.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said U.S. Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack—and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
Contributors to Emory’s AISIC effort comprise a cross-campus, cross-disciplinary team of experts. The group will be led by Joe Sutherland, director of the Center for AI Learning, and Anant Madabhushi, executive director of the Emory Empathetic AI for Health Institute (AI.Health).
With scholars in business, law, health care, and the liberal arts and sciences, Emory brings a wide variety of expertise suited to multiple consortium goals. For instance, in the realm of AI ethics, bias and fairness, faculty in the Center for Ethics are defining parameters and standards at the convergence of philosophy, mathematical modeling and causal inference, with the goal of studying algorithmic fairness and formulating ethical outcomes.
AI.Health scholars are examining data security and privacy concerns. Research teams are developing secure, centralized AI products to ensure privacy-preserving models and validation tools across a range of disease indications, including cancer, cardiovascular disease, diabetes and brain health.
“Emory is uniquely qualified to contribute to the important work of the consortium,” said Sutherland. “As a highly ranked R1 liberal arts institution in one of the most diverse cities in the country, we are built for interdisciplinary collaboration that explores the complexities at the intersection of society and technology. This will be an exciting opportunity for our faculty and students to work with leading-edge thinkers and apply their skills to national AI research and policy development.”
Learn more about the U.S. Artificial Intelligence Safety Institute.
Contributors
The following contributors are part of the Emory team for the U.S. AI Safety Institute Consortium:
Joe Sutherland (Lead PI)
Director, Center for AI Learning
Anant Madabhushi (Co-PI)
Executive Director, Emory Empathetic AI for Health Institute
Cliff Carrubba
Samuel Candler Dobbs Professor and chair in the Department of Quantitative Theory and Methods, Emory College of Arts and Sciences
Jinho Choi
Associate Professor, Department of Quantitative Theory and Methods, Emory College of Arts and Sciences
Gari Clifford
Professor and chair in the Department of Biomedical Informatics, Emory University School of Medicine
Joe Depa
Chief Data and AI Officer, Emory Healthcare and Emory University
Charles Elliott
Cloud Engineer Architect, Office of Information Technology
Alistair Erskine
Chief Information and Digital Officer, Emory Healthcare and Vice President of Digital Health, Emory University
Rajiv Garg
Associate Professor of Information Systems and Operations Management, Goizueta Business School
Jo Guldi
Professor, Department of Quantitative Theory and Methods, Emory College of Arts and Sciences
Vicki Hertzberg
Director, Center for Data Science, and Professor, Nell Hodgson Woodruff School of Nursing; Computer Science; Biostatistics and Bioinformatics
Thomas Ottolin
Senior Program Coordinator, Center for AI Learning
Geoffrey Parsons
Lead Cybersecurity Architect, Office of Information Technology
David Schweidel
Professor of Marketing, Goizueta Business School
Vaidy Sunderam
Samuel Candler Dobbs Professor and chair in the Department of Computer Science, Emory College of Arts and Sciences
Lance Waller
Professor, Biostatistics and Bioinformatics, Rollins School of Public Health
Danielle Williams
Events and Communications Specialist, Center for AI Learning