
The implications of AI

USC researcher leads group studying ethics, safety and impact of AI

From dangerous wrecks involving self-driving cars to deepfakes of celebrities and politicians, news stories about artificial intelligence gone wrong grab attention across the world. Yet the use of AI in all aspects of our lives is rapidly increasing. 

 “Many thinkers believe that AI will eradicate humanity. Technology continues to grow faster than the ethics surrounding it,” says Shannon A. Bowen, a professor in the University of South Carolina’s College of Information and Communications. 

Bowen researches ethical decision making within an organization’s leadership, with an emphasis on AI. She heads the AI-Ethics Advisory Board and conducts ethics training with leaders of organizations to help them avoid and solve problems and develop strategic communications plans. 

“We try to steer the conversation to involve the ethical implications of AI earlier in the process,” Bowen says. 

On the board, researchers work together to raise questions regarding the application of the technologies and rigorously think about future use and implications. 

“Involving ethics will prevent or delay an AI attack on humanity in the future. But for now, the top three ethical issues regarding AI are its impacts on misinformation, warfare/military applications and changes to the workforce,” Bowen says. 

A quick look at social media reveals how AI is used to spread misinformation: deepfakes, politically motivated cover-ups and misdirection, and outright disinformation or fake news.

“We have all seen those videos of some famous person talking, and it sounds and looks just like them, but really, it’s an AI-generated voice. Those videos may be for fun, but there have been instances of heads of state being deep-faked with some serious results,” Bowen says. 

Bowen also calls the loss of privacy that social media users experience as a result of AI monitoring “staggering and irretrievable.”

As for the use of AI in warfare and military applications, Bowen points to both positive and negative outcomes. 

“A bomb-detecting robot protects human life, for example. However, drone missile drops have led to the deaths of civilians, even children,” Bowen says. “AI is driven toward efficiency. The ethics behind that technology is left playing catch-up, which can be at the cost of safety or even life. For example, a self-driving Uber in Arizona did not even attempt to brake or swerve to avoid killing a pedestrian.”

Much of Bowen’s research focuses on the ethical use of AI in the workforce. 

“AI continues to replace jobs in organizations, especially in the manufacturing sector as well as customer service,” Bowen says. “As AI advances, many people are worrying if their job will still be there in five to 10 years. I believe people will be needed to run AI, but that means employees must learn new skill sets.”

Bowen believes that, when making decisions on AI, companies must consider not only the bottom line, but also the people who work for them, as well as those who use their products and services. To support organizations in this effort, Bowen founded the Global Strategic Communication Consortium. The consortium, a nonprofit housed at USC, is made up of a group of scholars from six continents who study and collaborate on AI’s applications in areas from government affairs and manufacturing to social media and communications. The AI-Ethics Advisory Board includes consortium members. 

The group’s varied backgrounds help them identify possible ethical issues that could arise with new AI. Then, they collaborate with organizations to implement changes, or at least better prepare for those issues. 

 “Our members are experts in a variety of disciplines, including health care, robotics, strategic communications and military strategy,” Bowen says. “Our goal is to help organizations be less reactionary by keeping ethics in mind throughout the process.” 

Later this year, the consortium will release the “Handbook of Innovations in Strategic Communication,” which addresses ways to involve ethics early in the AI development process, before disaster or loss of life could occur. 

“We must control AI technology, not be controlled by it,” Bowen says. “Ethics is at the heart of these concerns for respecting the human condition. Exactly how we implement these protocols across AI remains to be seen.”

Bowen hopes the research taking place now leads to more equitable, safer and more ethical workplaces and communities in the future as AI continues to expand into all aspects of our lives.


Banner image created in Adobe Firefly using generative AI. 
