Technology is rapidly changing the world we live in—especially the way we communicate and connect. Because of her training and research in journalism, University of Rhode Island (URI) Professor Ammina Kothari values the fundamentals of communication.
Kothari’s training and experience teaching news reporting and writing are the foundation of her focus on the role journalism plays in helping society capture and understand big-picture problems. She examines conflicts, emerging issues, and how stories are told, influenced, and featured.
“I’ve always been interested as a journalist in how stories come about,” she says. “But now as a researcher, I also want to know how things happen. You see the output, but what was the process behind it?”
Kothari, the Harrington School of Communication and Media director, studies the influence of technology on communication and journalism. Her interdisciplinary research spans topics from gender and social media to artificial intelligence (AI) and science communication.
As technology continues to change our communication landscape—such as through social media—Kothari looks at factors that influence public opinion.
One of Kothari’s recent projects looks at the representation of the LGBTQI+ community in the media. In particular, she examines what shapes heterosexual young adults’ attitudes and behaviors toward LGBTQI+ people and policies.
In collaboration with URI’s Assistant Professor Joon Kim and colleagues from the Rochester Institute of Technology, Kothari conducted an online survey of 623 heterosexual-identifying people, ages 18 to 41. The survey questioned participants about their exposure to messages about LGBTQI+ people in entertainment media and news stories. Then participants were shown two different messages about an LGBTQI+ person, one positive and one negative. The researchers then asked how likely the person was to support a policy recommendation that would positively impact the LGBTQI+ community.
The study found that people who were exposed to pro-LGBTQI+ messages on social media were more likely to support pro-LGBTQI+ policy, but the effect was strong only when the messages came through social media rather than a medium like TV. The researchers also found that exposure to negative LGBTQI+ messages didn’t change people’s attitudes about the community.
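The medium-by-medium comparison the study describes can be sketched in code. This is a hypothetical illustration with made-up numbers, not Kothari’s actual data or analysis; the respondent lists and rates below are invented purely to show the kind of contrast the researchers tested:

```python
# Hypothetical sketch: comparing policy-support rates across exposure media.
# 1 = respondent supports the pro-LGBTQI+ policy recommendation, 0 = does not.
# These lists are fabricated for illustration only.

def support_rate(responses):
    """Fraction of respondents indicating policy support."""
    return sum(responses) / len(responses)

social_media_exposed = [1, 1, 1, 0, 1, 1, 0, 1]  # saw positive messages on social media
tv_exposed           = [1, 0, 0, 1, 0, 1, 0, 0]  # saw positive messages on TV

effect_gap = support_rate(social_media_exposed) - support_rate(tv_exposed)
print(f"Social media support rate: {support_rate(social_media_exposed):.2f}")
print(f"TV support rate:           {support_rate(tv_exposed):.2f}")
print(f"Gap between media:         {effect_gap:.2f}")
```

A real analysis of 623 respondents would of course use statistical tests rather than raw rate differences, but the shape of the comparison is the same: the same positive message, split by the medium that carried it.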
“We found that if we are trying to create a more inclusive society, particularly with heterosexual young adults, there should be more positive messages on social media,” Kothari says.
For the past three years, she has been working with U.S. and European researchers to examine the relationship between willingness to get the COVID-19 vaccine, risk perceptions, and trust in health authorities. The results across multiple countries showed that trust in health authorities was a particularly strong predictor of intent to get vaccinated, underscoring the need to translate science clearly for the public.
The Impact of AI on Communication
Kothari also has been looking at how AI tools, like generative AI—which creates brand new content based on data it has been given—are changing newsroom practices. In the past few years, she has interviewed and surveyed journalists and editors in the United States, United Kingdom, and Africa to understand how they’re incorporating AI tools, and if they aren’t, what their perception of the impact of generative AI is on their work.
National and international news organizations have invested in both AI tools and staff with programming and data science skills, while mid-size and smaller media outlets have to rely on free or grant-funded products to experiment in their newsrooms. While journalists are optimistic about the potential of generative AI, those using some of the tools also emphasize the need for human oversight because of biases and assumptions baked into AI models.
The impact of generative AI extends to the classroom as well.
For the first time in her 12-year teaching career, Kothari added a generative AI policy to her journalism syllabus last semester.
“The ethical use of AI is important to teach students,” Kothari says. Her students can’t use a service like ChatGPT to write their stories, because it is not their original work. However, that does not mean there is no place for it in the classroom. For instance, summarizing reports, looking up data, or generating story ideas can be great ways for students and professors to use AI to supplement learning.
“Storytelling is a powerful tool,” she says. “So how do we present the information in an engaging manner so people who aren’t part of the scientific community will be able to understand and connect with information and how it impacts them?”