Artificial intelligence is everywhere lately — on the news, in podcasts and around every water cooler. A new, buzzy term, artificial general intelligence (AGI), is dominating conversations and raising more questions than it answers. So, what is AGI? Will it replace jobs or unlock solutions to the world’s biggest challenges? Will it align with societal priorities or redefine them? Is our future more “The Jetsons” or “The Terminator”? The answers aren’t simple. They depend on how we define AI, and what we expect it to do.
As the inaugural director of the Center for AI Learning, Emory’s community hub for AI literacy and a core component of the AI.Humanity initiative, Joe Sutherland is no stranger to tough questions about AI. Last summer, he embarked on a statewide tour seeking to demystify AI and empower Georgians with skills to thrive in a technology-focused future. Diverse audiences made up of professionals, business owners, lawmakers and students shared many of the same core questions.
In this Q&A, Sutherland answers our most pressing concerns about AGI, helping us cut through the hype to find the hope.
What is the difference between artificial intelligence, artificial general intelligence and artificial super intelligence?
The definitions have changed over time, and that has caused some confusion. Artificial intelligence is not a monolithic technology. It’s a set of technologies that automate tasks or mimic decisions humans normally make. We’re delegating the authority to make those decisions to a machine.
Traditionally, when people talked about artificial general intelligence (AGI), they meant Skynet from “The Terminator” or HAL from “2001: A Space Odyssey,” machines that supposedly had free will and approximated human abilities.
Today, some major research labs have redefined AGI to mean a computer program that can perform as well as, or better than, expert humans at specific tasks.
Artificial super intelligence (ASI) is the modern term for what we used to call superintelligence or the singularity. It is what we once pictured as AGI: humanoid robots that surpass human intelligence.
Do we already have AGI?
It depends on what definition you use. If you’re using the task-oriented definition from the labs, yes. AI is great at retrieving information and summarizing it in a way that any human evaluating it would say, “Oh, this is pretty good.”
Large language models (LLMs) like ChatGPT can outperform human test-takers on the MCAT, the medical school admissions exam. But that’s not real intelligence. It’s like giving a student Google during an exam. True AGI should show reasoning, not just information retrieval and pattern matching.
What’s the difference between reasoning and what today’s AI does?
Today’s models give the impression they are reasoning, but they’re just sequentially researching information and then summarizing it. They don’t understand the world — they just predict what word comes next based on patterns. When tested on real reasoning tasks, like the Tower of Hanoi or logic puzzles, LLMs often fail unless they’ve memorized the answers.
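For readers who want to see what a reasoning task like the Tower of Hanoi actually demands, here is a minimal Python sketch of the classic recursive solution (the function name and disk count are illustrative, not tied to any specific benchmark). The move sequence follows from one general rule a reasoner can derive on the spot, rather than from recalling the answer to a puzzle it has already seen.

```python
def hanoi(n, source, target, spare, moves=None):
    """Return the list of moves that solves an n-disk Tower of Hanoi."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # move n-1 disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks back on top
    return moves

# Solving n disks takes 2**n - 1 moves; for 10 disks that is 1,023 moves,
# far too many distinct move sequences to get right by memorization alone.
print(len(hanoi(10, "A", "C", "B")))  # -> 1023
```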
Humor is another example where AI falls short. From a humanities perspective, humor lies at the intersection of comfort and discomfort. That boundary shifts all the time. Chatbots only regurgitate things they’ve seen in the past; they don’t understand that boundary. True humor is something they can’t do.
Similarly, businesses often don’t have data indicative of broader trends taking place “outside” of the company. Their AI models, which are trained on internal data, can’t synthesize where we’ve been with where we’re going. That would require reasoning, intuition and values alignment — things we struggle to articulate even for ourselves.
So how close are we to AGI, really?
If we’re using the old definition of AGI — like “The Terminator” — I think we’re far away. LLMs won’t bring us anywhere close because they don’t have genuine reasoning or creative intuition. We haven’t given them a framework to efficiently discover new information. We’re going to have to develop totally new architectures if we want to get closer.
One step in the right direction is joint embedding predictive architecture, or JEPA. Instead of stringing words together like an LLM, it infers deeper relationships between concepts and activates those inferences to achieve a higher-level objective.
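As a rough illustration of what “predicting in concept space” means, here is a heavily simplified, hypothetical PyTorch-style sketch of the JEPA idea. The encoder and predictor modules, dimensions and masking setup are assumptions made for readability; the point is only that the training loss compares embeddings of concepts, rather than raw words or pixels the way an LLM’s next-token loss does.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 64  # illustrative embedding size

# Encode the visible "context" and the held-out "target" separately,
# then train a predictor to guess the target's embedding from the context.
context_encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
target_encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor = nn.Linear(dim, dim)

context = torch.randn(8, dim)  # the part of the input the model gets to see
target = torch.randn(8, dim)   # the masked or future part it must anticipate

with torch.no_grad():  # the target embedding serves as a fixed prediction target
    target_embedding = target_encoder(target)

predicted_embedding = predictor(context_encoder(context))

# The loss lives in embedding space: the model is judged on whether it captured
# the right higher-level relationship, not on reproducing surface detail.
loss = F.mse_loss(predicted_embedding, target_embedding)
loss.backward()
```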
It’s refreshing to learn that throwing all of society’s encyclopedia entries into an LLM doesn’t produce human-level intelligence. I’m boiling it down, of course. There’s more to humanity than meets the eye.
What are the biggest promises and perils of AGI?
The current technologies are fantastic. They enable people to do tasks in hours that they previously had to spend days on. The promise is efficiency — tools that can summarize research, assist in medical diagnosis, help you plan your shopping list for the week. That will help people earn more money, spend more time with their families or on hobbies and live longer, healthier lives.
The peril isn’t the tech; it’s the lack of public understanding. We need AI literacy, so people understand when AI is being used the right way and when it is not.
What kind of oversight or guardrails are needed?
The key is providing a framework that balances the value of what is being built against the intellectual property fueling these models: the datasets.
Some companies have argued in court that scraping people’s data without consent or compensation is justified because it advances society. That’s a manipulative and troubling argument.
If the needs of the people contributing to these technologies are represented and they are adequately rewarded, it would incentivize greater innovation and usage. We also need thorough testing to identify where these models break or produce biased outcomes.
Can we align AI with human values?
That’s more of a social question than a technical one. Last summer I gave talks around Georgia for Emory’s statewide AI workforce development tour with the Rowen Foundation and the Georgia Chamber of Commerce, which we called “AI + You.” One audience member asked, “How can we ensure that the AI models we are building have American values?”
What makes America special is that we value the opportunity for dissent. We’re never going to agree on everything, so the bigger question is: How do we create a system with responsive guardrails that adapt to our evolving societal values? What's the framework that allows us to still have a robust debate and ultimately come to good decisions?
These questions aren’t new to the advent of AI. We’ve been asking them for centuries. But the rapid rise of technology, and its increasingly centralized power, forces us to revisit how we approach collective action.
If we get AGI right, what does the best version of the future look like?
I think the best version of the future with all these technologies is one where they free us to do more meaningful work and spend less time on things we don’t enjoy. This means more time for innovation and creative problem-solving.
Emory’s AI research is already transforming health care, leading to improved diagnosis and treatment of diseases like cancer, heart disease and diabetes. We are using text analysis to uncover patterns in public policies that will make governance more efficient and equitable. Our scholars are also looking at how AI can protect people’s rights and grow businesses.
AI offers more benefits than drawbacks, if we empower people through education and include them in the conversation so they can advocate for themselves and those they care about.
If deployed thoughtfully, these technologies can amplify human potential, not replace it.
Joe Sutherland directs Emory's Center for AI Learning, a community hub for AI integration and training.