CANDIDATE AI:
THE IMPACT OF ARTIFICIAL INTELLIGENCE ON ELECTIONS

Emory experts weigh in on how chatbots, algorithmic targeting, deepfakes and a sea of misinformation — and the tools designed to counter them — might sway how we vote in November and beyond.

On January 21, 2024, two days before the New Hampshire presidential primaries, thousands of the state’s registered voters received a strange phone call. Robocalls on behalf of certain candidates and ballot measures are nothing unusual, especially in the days leading up to a big election. But this call was unique in that it came directly from the president of the United States.

Or so it seemed. The voice on the other end of the line sounded just like President Joe Biden. He even used his signature catchphrase: “What a bunch of malarkey!” But strangely, he was telling these would-be voters to stay away from the polls, falsely warning them that voting in the primary would somehow make them ineligible to vote in the general election in November.

The robocalls didn’t necessarily impact the voting results; Biden still handily won the New Hampshire Democratic primary. Nevertheless, the stunt sent shockwaves through the worlds of politics, media and technology because the misleading message didn’t come from the president — it came from a machine.

The call was what’s known as a deepfake, a recording generated by artificial intelligence (AI), made by a political consultant to sound exactly like Biden and, in this case, apparently intended to suppress voter turnout. It was one of the most high-profile examples of how generative AI is being used in the realm of politics.

These deepfakes are affecting both sides of the political aisle. In summer 2023, the early days of the Republican race for the presidency, would-be candidate and Florida Gov. Ron DeSantis shared deepfakes of former President Donald Trump hugging Anthony Fauci, one of the leaders and lightning rods of the U.S.’s COVID-19 response. And, despite being a victim of deepfake tactics like this, Trump has not been afraid to turn around and use them himself. Famously, this included his recent Truth Social post of AI-manipulated photos that showed pop star Taylor Swift, decked out as Uncle Sam, endorsing him for president.

This use and misuse of AI as a high-tech, 21st-century iteration of age-old American political shenanigans is alarming those who fear that campaigns can now essentially manufacture their own realities, further polarizing the electorate and chipping away at already fragile public trust in our institutions. But the fact is that AI, machine learning and the almighty algorithm have been shaping the world of politics behind the scenes for years — and not just in negative ways.

The good uses of AI in political campaigns aren’t as easy to spot, yet they’re almost equally influential. Today, AI tools are being used to increase voter engagement through personalized, microtargeted messaging. Additionally, they’re improving campaign efficiencies by helping candidates digest and sort through vast amounts of information on vital issues.

The technology can empower campaigns to make more informed decisions when allocating resources based on predictive models of voter behavior. Finally, generative AI like ChatGPT and other sophisticated chatbots can even be used by candidates to craft and refine stump speeches, provide a sparring partner for debate and quickly digest dense and complex legislation.

“AI can be used to make everything faster, and not necessarily in a malicious way,” says Anthony J. DeMattee, a data scientist who heads up the Digital Threats to Democracy Project for The Carter Center. “We know political campaigns typically use AI not to create misleading information or opposition, but to quickly generate arguments for their policy positions. For example, ‘Give me 50 tweets on why reduced taxes are good for the economy and use emojis.’ The tech is great for ideation and first drafts. Campaigns aren’t replacing managers; they’re using AI for analyzing data. It’s just another tool.”

The Carter Center, founded in 1982 by former President Jimmy Carter through a partnership with Emory University, is also using AI to ensure election integrity. For instance, the Atlanta-based nongovernmental, nonprofit organization developed ELMO, an open-source data collection and reporting system that enables poll watchers both in the U.S. and around the globe to share and analyze data from the field in real time.

With AI becoming so pervasive in every aspect of our elections and politics, Emory is uniquely positioned to influence the future — and not just through its link to The Carter Center.

In 2022, the university launched the groundbreaking AI.Humanity Initiative, an institution-wide effort to build on Emory’s existing technological, analytical and ethics-based expertise to guide and shape the way AI technologies impact society. This has included the hiring of nearly 40 new faculty with expertise in AI and its applications in business, law, health, humanities and beyond, developing robust AI educational programs, advancing the ethical use of the technology and engendering a collaborative community of AI-centered ideas, teaching and research.

Emory has already been recognized as a leader in this space. In February 2024, for example, the university became one of more than 200 leading authorities to join the U.S. Artificial Intelligence Safety Institute Consortium (AISIC). Emory now collaborates with Fortune 500 companies, academic teams, nonprofit organizations and other U.S. government agencies to enable safe AI systems and support future standards and policies to guide its responsible use.

“Emory’s distinctive focus on guiding the AI revolution has been to focus on its intersections with the human experience — whether it’s to improve human health, generate economic value or promote social justice,” says Ravi V. Bellamkonda, Emory’s provost and executive vice president for academic affairs. “As a leading liberal arts and research university, we have a calling and a responsibility to understand how this revolutionary technology is transforming our world.”

“Data science can influence politics in any direction,” says Rajiv Garg, associate professor of Information Systems and Operations Management at Goizueta Business School and a member of Emory’s AISIC working group. Garg’s research focuses partly on the diffusion of digital content across social networks. “I’m very optimistic about AI. But it’s all about who has the team to move the waters. Whichever candidate uses AI more effectively will likely be the leader.”

FROM BIG DATA TO MICROTARGETING

As long as there have been American elections, candidates and their campaigns have tried to predict how people will vote. More specifically, they’ve worked to identify which data points — from demographic information to hot-button issues to past voting behaviors — will drive voters to show up at the polls and cast their ballot for a candidate or cause. Linear regression, which uses known factors to predict unknown ones, has informed campaign forecasting for centuries.

In 1948, academics launched a formal study of voter behavior, research that evolved into the American National Election Studies, the longest-running continuous series of public-opinion survey data in the U.S. The 21st-century explosion in AI has simply increased the volume and detail of information we can collect and exponentially expanded our ability to process that data.

“What is different is the ease, convenience, velocity and volume of political propaganda that can be created by AI,” says Ifeoma Ajunwa, Asa Griggs Candler Professor of Law and founding director of the AI and the Future of Work Program at Emory Law School. “It will surpass anything we’ve seen.”

AI has also made it easier to parse and analyze that data and use it to create predictive models. “We’ve used linear regression, an early form of machine learning that lets computers learn how the world works from data, for centuries,” says Joe Sutherland, director of Emory’s Center for AI Learning — the nexus for AI literacy and collaboration for the Emory community — and assistant teaching professor in the Department of Quantitative Theory and Methods. Sutherland brings tremendous tech experience to Emory as a former AI company CEO, an executive for companies like Amazon and Cisco and a public servant who supported the administration of President Barack Obama. He serves as Emory’s principal investigator in its membership with AISIC.
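To make that concrete, here is a minimal sketch (with invented features and data, not any campaign’s actual model) of the kind of regression a campaign might fit to estimate turnout:

```python
# A toy illustration of regression-based turnout prediction. The features
# and data are invented for this sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [age, years_at_address, past_primaries_voted]
X = np.array([
    [22, 1, 0],
    [35, 4, 1],
    [51, 12, 3],
    [64, 20, 4],
    [29, 2, 1],
])
# Whether each voter turned out in the last election (1 = voted)
y = np.array([0, 1, 1, 1, 0])

model = LinearRegression().fit(X, y)

# Score a new voter; a campaign would treat this raw score as a rough
# turnout propensity and prioritize outreach accordingly.
new_voter = np.array([[44, 8, 2]])
print(model.predict(new_voter))
```

Modern campaign models layer far richer features and more powerful algorithms on the same basic idea: learn from past voters, then score future ones.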

“Algorithms let machines learn from the world. Complex neural networks are the algorithms that power today’s mimicry of human behavior,” he says. “Campaigns are getting smarter, using those same techniques to convince voters their candidate is the best, get out the vote for their candidate and, now, to ensure the people they don’t want to vote don’t show up.”

Campaigns have always pursued at least one of these three goals, but previously they lacked the staff and resources to collect, compute and efficiently apply the data. Instead, they took a more indiscriminate approach, picking a larger target issue or broader demographic and aiming their messaging accordingly in hopes of reaching the maximum number of voters.

The advent of so-called “Big Data” in the 2000s changed that. Starting with George W. Bush in 2004, but really coming into its own with Barack Obama in 2008, a presidential campaign could use the internet to ask each voter hundreds of questions (“Do you want to vote? Do you care about immigration? Do you own a boat?”), use analytics to process this vast amount of data and link it to an electoral map — all at the stroke of a computer key.

The wildfire spread of social media only amplified campaigns’ ability to gather targeted information and tailor their messaging to the individual. “Obama started using this data science to target his audience with personalized messages,” says Garg. “This is a message you care about. This is the information you need to make your decision. All messages were hyper-personalized to target voters. Thus, it is going to influence a lot of people who believe that the candidate is thinking about them as individual voters.”

Then, Sutherland points out, a third innovation hit in the 2012 and 2016 elections, when campaigns could tap into the colossal online commercial databases that had been tracking consumers for years. Suddenly, candidates didn’t need to bother would-be voters with a survey: they had instant access to demographic data on behavior and preferences that users willingly — if sometimes unknowingly — provided through their own online activities.

“They could use the consumer database and even harvest everyone’s social media data,” Sutherland says. “They now knew a surprising amount about you, like if you owned a Camry or a pickup truck, how much you just spent on Etsy or whether you had recently suffered a knee injury and searched for remedies on Google.”

While this data harvesting may edge a little too close to privacy lines for some, there is nothing inherently malicious about this sort of microtargeting. Campaigns simply want to know what matters to each voter so they can personalize their messaging and motivate that person to show up and cast their ballot accordingly. Just a few years ago, that still meant reaching out to smaller and smaller segments of the electorate. Now, however, the technology virtually brings the candidate face to face with each individual voter.

“We can now get to the point where it’s true one-on-one targeting,” says David Schweidel, an expert in AI and social media analytics who is Goizueta Chair in Business Technology and a professor of marketing in Emory’s Goizueta Business School. “Before, that would have been cost-prohibitive. Both the Democrats and the Republicans have spent millions of dollars building out their voter databases. But now they need more and different data. It’s no longer just the issues you care about, it’s how you communicate. What are your brand preferences? Social media information gets pulled in. It’s not just your political profile, it’s your broader consumer profile. Because, at the end of the day, I’m trying to sell you something.”

Now with generative AI, candidates can not only pinpoint the issues, topics, brands and influencers that will “activate” you as a voter and consumer, but they can also deliver that message in a voice and language that will speak to you. “Suppose I know from your profile that you care about the theater and Broadway shows; I can create a message directly for you,” Garg says. “I can tweak that message for someone dedicated to their faith, outspoken on women’s rights or who enjoys Broadway musicals, and I can create a message about a specific policy in wording that resonates with them.”
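As a rough illustration of the templating Garg describes (the profiles, frames and policy point below are invented for this sketch, not any campaign’s actual tooling), the same policy message can be wrapped in a different frame for each voter before being handed to a generative model:

```python
# Illustrative only: build per-voter prompts for a chat model.
# Every profile, frame and policy point here is invented.
POLICY_POINT = "the candidate's plan to fund local arts education"

FRAMES = {
    "theater_fan": "as someone who loves Broadway and live theater",
    "faith_community": "as a person of deep faith active in their community",
    "womens_rights": "as an outspoken advocate for women's rights",
}

def build_prompt(voter_name: str, frame_key: str) -> str:
    """Wrap one policy point in wording tuned to a voter's profile."""
    return (
        f"Write a short, friendly message to {voter_name}, "
        f"{FRAMES[frame_key]}, explaining why {POLICY_POINT} "
        "matters to them personally."
    )

# Each prompt would then be sent to a generative model for final wording.
print(build_prompt("Alex", "theater_fan"))
print(build_prompt("Jordan", "faith_community"))
```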

WEAPONIZING ECHO CHAMBERS AND DISINFORMATION

Of course, this form of one-to-one marketing has its share of drawbacks. First, it can easily become too hyper-personalized — voters are bombarded with one message on a single issue at the exclusion of all else, including dissenting opinions and perspectives. The resulting echo chamber perpetuates itself as voters continually receive and actively seek out information that reinforces their opinions and values.

Second and more problematically, this same one-on-one marketing method works just as well for microtargeting inaccurate and badly skewed information — or just outright lies. This misinformation is breathtakingly easy to create and mass-distribute over these same social networks. Campaigns, political action committees and foreign interests (like the infamous Russian troll farms) have already weaponized disinformation not only to sway voters but also to wreak havoc on the American system, encouraging divisiveness and eroding public trust in both government and media.

With that trust gone, some politicians have embraced spreading false information, or “crying wolf” by shrugging off real statements and events as “fake news,” to minimize damage to their campaigns and further sow skepticism of once-sacred institutions. Meanwhile, these memes, testimonials and fake news stories ricochet around the echo chambers, polarizing and isolating the electorate in their virtual realities.

Indeed, the arrival of deepfakes has simply reinforced existing delusions. “Our whole existence is mapped onto an algorithm,” Garg says. “If I can get a personal phone call from the president of the United States, anything can be hyper-personalized.”

The good news is that despite increasing concern about AI and its potential impact on politics, several Emory experts say they are surprised by how relatively small an impact deepfakes have had outside of the prominent examples mentioned above. At least, so far.

“This year is a major election year for several large countries,” says Natália S. Bueno, Emory assistant professor of political science, noting that about 60 countries and half of the world’s population are voting this year. She has been studying how AI is affecting those elections alongside Emory alumna Kaylyn Jackson Schiff 20G 22G and Daniel Schiff, faculty colleagues at Purdue University. They note that while there have been incidents worldwide, such as deepfakes in Slovakia and coordinated efforts in European Union elections, no major event has shaped an election, and generative AI has not yet had a noticeable impact.

HOW TO FACE THE ELECTION AI ONSLAUGHT

Of course, that doesn’t mean we aren’t still being pelted by AI-generated misinformation trying to sway our opinion, lure us to the polls, keep us away or change our vote. So, what can we do? How can we cut through the din of misinformation, penetrate the AI-reinforced barriers of confirmation bias and provide voters with the reliable information they need to make an educated decision at the polls?

For one thing, experts say we shouldn’t panic. “The technology is moving fast, but I don’t think we should be fatalistic about what direction it will take,” Ajunwa says. “What we tend to overlook is that the technology will develop the way we allow it to. The government has a role, as does the tech industry, to think about how these tools impact society.”

Ajunwa points to the need for regulation of the end products of generative AI. One such measure is watermarking, the process of embedding a unique digital signature in AI-created materials so they can be recognized and flagged by computers. She suggests that we hold accountable social media platforms like Facebook, X (formerly Twitter), YouTube and TikTok, which are disseminating disinformation at an alarming rate. She also believes our government should clamp down on any foreign actors that seek to sow discord through disinformation and propaganda, as a matter of national security.
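Production watermarking is more sophisticated than this (statistical marks are often woven into the model’s output itself), but a toy sketch of the basic sign-and-verify idea Ajunwa points to might look like the following, with an invented shared key:

```python
# Toy provenance sketch: a generator signs AI-created content with an HMAC
# so a platform holding the same key can verify or flag it. Real schemes
# (statistical text watermarks, C2PA-style content credentials) are far
# more robust; this only demonstrates the sign-and-verify idea.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-shared-key"  # invented for this example

def sign(content: bytes) -> str:
    """Signature the generator attaches to its output."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check whether content still carries a valid AI-content label."""
    return hmac.compare_digest(sign(content), signature)

clip = b"audio bytes of a synthetic voice message"
tag = sign(clip)
print(verify(clip, tag))                 # True: intact, labeled AI content
print(verify(clip + b"tampered", tag))   # False: altered or unlabeled
```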

Nevertheless, Ajunwa admits that legislation alone won’t be able to mitigate or even keep pace with tech-savvy bad actors. “Companies can voluntarily choose to abide by a code of ethics,” she says. “They have not. It behooves these companies, even absent government regulation, to think about how this impacts society. This is not a partisan issue. Once we allow this sort of behavior, it will erode public trust. And that is the thread that holds everything together.”

While we wait for the government and tech companies to act, we can also take matters into our own hands and fight the malevolent use of AI — with AI.

For instance, in 2019, The Carter Center launched the Digital Threats to Democracy Project to fact-check and flag social media content for disinformation and unchecked hate speech in places like South Africa, Tunisia, Sri Lanka, Ethiopia and the U.S.

“In Sri Lanka, social media comes in three different languages, and I only speak one,” says DeMattee. “I could use a large team of local coders to translate and decode all of that or I could use AI. Ideally, we’d have both. We have big human brains we can use to fact-check, but we can also compare human coding with AI coding and see how often we are in agreement. That way, we can make sure the bot we’re training is correct and be transparent about how often the tool and humans align.”
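The comparison DeMattee describes is a standard inter-rater reliability check. A minimal sketch, with invented labels, measures human-versus-AI agreement beyond chance using Cohen’s kappa:

```python
# Compare human coders' labels with an AI model's labels on the same posts
# and measure agreement beyond chance. Labels are invented for illustration.
from sklearn.metrics import cohen_kappa_score

human_labels = ["hate", "ok", "disinfo", "ok", "hate", "ok", "disinfo", "ok"]
ai_labels    = ["hate", "ok", "disinfo", "ok", "ok",   "ok", "disinfo", "hate"]

kappa = cohen_kappa_score(human_labels, ai_labels)
print(f"Human-AI agreement (Cohen's kappa): {kappa:.2f}")
```

A kappa near 1.0 would mean the trained bot and the human coders rarely disagree; a low score signals the model needs more training or review.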

Speaking of bots, Bueno points to new research showing that the GPT-4 chatbot has been effective in debunking conspiracy theories, providing personalized counter-evidence that dissuades people who previously believed them. In addition, she and her colleagues emphasize that factual AI-generated content can complement candidate platforms, increasing the quality and availability of information about candidates and better educating the public about topics relevant to them.

These same chatbots can also help tailor the messaging — again, hyper-personalizing the wording of the information in a manner that will resonate with your background and interests. And all of this, Bueno says, can be automated, reducing the overall cost of campaigns, potentially making it easier for more people to run for office without major financial backing.

EMORY’S LEADERSHIP IN ETHICAL AI

And while we wait for the government to regulate the industry, for that industry to police itself and for good guys with AI to combat the troublesome use of this technology, our best defense might be old-fashioned, healthy skepticism of everything we see.

“Digital literacy is going to be so critical, not just during elections, but going forward,” Ajunwa says, pointing to the AI and the Future of Work Program she heads, which trains law students in AI literacy as well as in the ethical considerations and legal issues that arise from using the technology. “In this AI revolution, we need individuals who are discerning and critical of what they are seeing online. People need the skills to separate AI from reality.”

This is where institutions like Emory can make an impact, both through the AI.Humanity Initiative and the global work of The Carter Center and its Digital Threats to Democracy Project.

AI.Humanity has created a community of scholars at Emory to study and discuss the impact of AI and machine learning and to help guide the technology’s evolution along an ethical, socially responsible and productive path. Students can take advantage of a new interdisciplinary AI minor or an AI concentration specifically for those focusing on computer science.

Off campus, Sutherland and the Center for AI Learning have partnered with the Rowen Foundation and the Georgia Chamber of Commerce to provide “AI + You: Save Time, Earn More and Thrive with AI,” a statewide education tour offering free courses on AI skills and knowledge that instill confidence and empower Georgians in their daily lives — including at the polls this November.

“I see AI as being able to provide a huge benefit in supplying enhanced human services, streamlining government operations and supporting policymaking by helping people get the information they need quicker and making it more digestible,” says Sutherland. “We can create informational trust models that harken back to the time when people would seek information from credible, well-known sources. AI can help us automate, accelerate and get the clearest signal about the issues and policies we care about. It will help people help themselves.”

In other words, AI is created by the people, for the people and therefore — in the right hands, with the right policies — can be shaped to serve the common good.

Story by Tony Rehagen. Illustrations by Charlie Layton. Design by Elizabeth Hautau.

Want to know more?

Please visit Emory Magazine, Emory News Center, and Emory University.