Recognizing the bias in AI

What’s your first impulse when you have a question about an unfamiliar topic? If you’re like most people, you get out your phone or log onto your laptop and look it up online. Sometimes, you might even fall down the rabbit hole, one search prompting another and another and another as you continue to gather new information.

But what happens when our unconscious biases guide that curiosity? And what happens when our already-biased inquiry then reinforces those very biases? We’ve wrestled with these problems since the dawn of the search engine, but they’re getting more problematic as we enter the age of artificial intelligence. 

Enter Alamir Novin. An assistant professor in the College of Information and Communications, Novin recently conducted a volunteer-based study on user interaction with AI systems and how user bias evolves. Among the 200 participants, he found evidence of four basic types of cognitive bias: priming, anchoring, framing and availability.

Priming is when we gravitate toward the familiar, such as familiar thumbnails and URLs, before even reading our search results. Anchoring is when our first search result influences how we perceive subsequent results. Framing is when the way multiple sources present information together shapes the narrative we take away. Availability is when we decide we’re satisfied with the search results and end our search.

“I was interested in how people inform themselves and learn,” he says. “Misinformation and disinformation are big now — there was a time when I had to define those terms — but I wasn’t even interested in how people misinformed themselves. I knew that was part of it. The bigger thing was how people teach themselves about topics, unconscious of biases.”

The problem isn’t necessarily the information itself, says Novin, but how our own biases interact with AI and how those biases then reinforce themselves in the process.

“How can people have differing opinions about something to the point where they get biased views about things without necessarily being misinformed?” he says. “They don’t have bad information. They just have this bias against other types of information.”

A possible explanation emerged when participants were shown a series of AI-generated, health-related advertisements, some labeled as AI-generated and some not. By observing the participants’ biological reactions, as well as what each participant learned through their inquiry, Novin concluded that labeling the advertisements as AI-generated led participants to disengage from the AI.

Novin also found that people’s learning methods were dependent on how the AI was presented, and the presentation could potentially amplify cognitive biases. Participants were more likely to engage with AI bots at the top of a search engine as opposed to AI on the side of the web page.

“Slight changes in user design affect how people learn via AI,” he says. “This has future implications for how AI is used by companies like Google and Microsoft, which plan to use advertisements to support AI. However, if public education is compromised by advertisements in AI, then that is not the best path for the public.”

The best remedies against such biases, he says, are critical thinking and metacognition, or “thinking about thinking.” Simply put, intentionally reflecting on our thoughts can help us acknowledge associated biases.

“One beautiful thing that I’m starting to find with the mind is that once you point out how it’s thinking about thinking, it has the amazing ability to adjust itself,” says Novin. “It’s like, ‘That’s how I thought about this, but there’s other ways to do it.’ It doesn’t always work, but my studies found that the people who fare best are the people who articulate their metacognition better.”
