Decoding Canine Cognition

Machine learning gives a glimpse of how a dog's brain represents what it sees

Scientists have decoded visual images from a dog’s brain, offering a first look at how the canine mind reconstructs what it sees. The research, conducted at Emory University, was published in the Journal of Visualized Experiments.

The results suggest that dogs are more attuned to actions in their environment than to who or what is performing the action.

The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine-learning algorithm to analyze the patterns in the neural data.

“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” says Gregory Berns, Emory professor of psychology and corresponding author of the paper. “The fact that we are able to do that is remarkable.”

"My experiences at Emory opened up the world to me," says Erin Phillips, first author of the new paper, who came to the university as a Bobby Jones Scholar with an undergraduate degree in zoology. She took a range of liberal arts and computer sciences courses at Emory and worked in the Canine Cognitive Neuroscience Lab.

"My experiences at Emory opened up the world to me," says Erin Phillips, first author of the new paper, who came to the university as a Bobby Jones Scholar with an undergraduate degree in zoology. She took a range of liberal arts and computer sciences courses at Emory and worked in the Canine Cognitive Neuroscience Lab.

The project was inspired by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, which have provided new insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.

“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” says Erin Phillips, first author of the paper, who did the work as a research specialist in Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”

Phillips, a native of Scotland, came to Emory as a Bobby Jones Scholar, an exchange program between Emory and the University of St Andrews. She is currently a graduate student in ecology and evolutionary biology at Princeton University.

The half-hour video created by the researchers aimed to recreate scenes typical of most dogs' lives, as seen from their point of view. (Emory Canine Cognitive Neuroscience Lab)

Berns and colleagues pioneered training techniques for getting dogs to walk into an fMRI scanner and hold completely still and unrestrained while their neural activity is measured. A decade ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That opened the door to what Berns calls The Dog Project — a series of experiments exploring the mind of the oldest domesticated species.

Over the years, his lab has published research into how the canine brain processes vision, words, smells and rewards such as receiving praise or food.

Berns with Callie, the first dog to have its brain activity scanned while fully awake and unrestrained.

Meanwhile, the technology behind machine-learning algorithms kept improving, allowing scientists to decode some human brain-activity patterns. These algorithms “read minds” by detecting, within patterns of brain data, the different objects or actions an individual is seeing while watching a video.

“I began to wonder, ‘Can we apply similar techniques to dogs?’” Berns recalls.

To create the video for the experiments, a video recorder was attached to a gimbal and selfie stick that allowed researchers to shoot steady footage from a dog's perspective. (Emory Canine Cognitive Neuroscience Lab)

The first challenge was to come up with video content that a dog might find interesting enough to watch for an extended period. The Emory research team affixed a video recorder to a gimbal and selfie stick that allowed them to shoot steady footage from a dog’s perspective, at about waist high to a human or a little bit lower.

They used the device to create a half-hour video of scenes relating to the lives of most dogs. Activities included dogs being petted by people and receiving treats from people. Scenes with dogs also showed them sniffing, playing, eating or walking on a leash. Activity scenes showed cars, bikes or a scooter going by on a road; a cat walking in a house; a deer crossing a path; people sitting; people hugging or kissing; people offering a rubber bone or a ball to the camera; and people eating.

The video data was segmented by time stamps into various classifiers, including object-based classifiers (such as dog, car, human, cat) and action-based classifiers (such as sniffing, playing or eating).
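
To make that labeling step concrete, the annotation can be pictured as a table of time-stamped segments, each tagged with the objects and actions on screen. Below is a minimal illustrative sketch in Python; the segment times and label names are hypothetical, not taken from the study.

```python
import pandas as pd

# Hypothetical time-stamped annotation of the half-hour video.
# Each row is a segment (in seconds) tagged with object- and action-based labels.
video_labels = pd.DataFrame([
    {"start_s": 0,  "end_s": 15, "objects": ["dog", "human"], "actions": ["petting"]},
    {"start_s": 15, "end_s": 30, "objects": ["dog"],          "actions": ["sniffing"]},
    {"start_s": 30, "end_s": 45, "objects": ["car"],          "actions": ["driving"]},
    {"start_s": 45, "end_s": 60, "objects": ["cat"],          "actions": ["walking"]},
])

print(video_labels)
```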

Kirsten Gillette, a co-author of the new paper who worked on the project as an Emory undergraduate majoring in neuroscience and behavioral biology, offers a ball in a scene from the video. (Emory Canine Cognitive Neuroscience Lab)

Only two of the dogs that had been trained for fMRI experiments had the focus and temperament to lie perfectly still and watch the 30-minute video without a break, across three separate sessions for a total of 90 minutes. These two “superstar” canines were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.

“They didn’t even need treats,” says Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking on the video. “It was amusing because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly.”

Two humans also underwent the same experiment, watching the same 30-minute video in three separate sessions, while lying in an fMRI.

The brain data could be mapped onto the video classifiers using the time stamps.
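
One way to picture that alignment: each fMRI volume gets the labels of whatever video segment was on screen at its acquisition time. A rough sketch, reusing the hypothetical video_labels table above and assuming an arbitrary repetition time (TR); the study's actual timing parameters are not given here.

```python
TR = 2.0          # assumed repetition time in seconds (illustrative, not from the paper)
n_volumes = 900   # e.g. 30 minutes of scanning at TR = 2 s

def labels_at(t, table):
    """Return the (objects, actions) labels for the video segment covering time t."""
    row = table[(table.start_s <= t) & (t < table.end_s)]
    if row.empty:
        return [], []
    return row.iloc[0]["objects"], row.iloc[0]["actions"]

# Pair each volume's acquisition time with the labels that were on screen.
volume_labels = [labels_at(i * TR, video_labels) for i in range(n_volumes)]
```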

Daisy takes her place in the scanner. Her ears are taped to hold in ear plugs that muffle the noise. (Emory Canine Cognitive Neuroscience Lab)

Bhubo prepares for a scan with his owner Ashwin Sakhardande. The dog's ears are taped to help muffle the noise of the fMRI scanner. (Emory Canine Cognitive Neuroscience Lab)

A machine-learning algorithm, a neural net known as Ivis, was applied to the data. A neural net is a machine-learning method in which a computer learns by analyzing training examples. In this case, the neural net was trained to classify the content of the brain data.
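
The study's decoder was built on the Ivis embedding, but the general shape of the analysis, training a classifier to predict on-screen labels from brain features and checking its accuracy, can be sketched with a simpler stand-in. The array shapes, class count and the logistic-regression model below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Made-up data: one feature vector per fMRI volume, one action label per volume.
rng = np.random.default_rng(0)
X = rng.normal(size=(900, 500))    # 900 volumes x 500 voxel/component features
y = rng.integers(0, 3, size=900)   # 3 hypothetical action classes

# Train a simple decoder and estimate its accuracy with cross-validation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean())
```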

For the two human subjects, the model developed using the neural net mapped the brain data onto both the object- and action-based classifiers with 99% accuracy.

In the case of decoding video content from the dogs, the model did not work for the object classifiers. It was 75% to 88% accurate, however, at decoding the action classifications for the dogs.

The results suggest major differences in how the brains of humans and dogs work.

“We humans are very object oriented,” Berns says. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”

Erin Phillips, now a graduate student at Princeton, tags a sedated antelope in Mozambique. "I'm now focused on behavioral studies of animals," she says. "Getting a different perspective by working in the Canine Cognitive Neuroscience Lab made me a stronger scientist."

Dogs and humans also have major differences in their visual systems, Berns notes. Dogs see only in shades of blue and yellow but have a slightly higher density of vision receptors designed to detect motion.

“It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost,” he says. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”

For Phillips, understanding how different animals perceive the world is important to her current field research into how predator reintroduction in Mozambique may impact ecosystems.

“Historically, there hasn’t been much overlap in computer science and ecology,” she says. “But machine learning is a growing field that is starting to find broader applications, including in ecology.”

Additional authors of the paper include Daniel Dilks, Emory associate professor of psychology, and Kirsten Gillette, who worked on the project as an Emory undergraduate neuroscience and behavioral biology major. Gillette has since graduated and is now in a postbaccalaureate program at the University of North Carolina.

Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The human experiments in the study were supported by a grant from the National Eye Institute.

Story and design by Carol Clark

To learn more:

Emory's Canine Cognitive Neuroscience Lab and the Bobby Jones Scholarship

Media Contact: Carol Clark, carol.clark@emory.edu, 404-727-0501