OpenAI Disrupts Iranian Misinformation Campaign


The company said the Iranian effort, which used ChatGPT, did not gain much traction.

OpenAI said on Friday that it had discovered and disrupted an Iranian influence campaign that used the company’s generative artificial intelligence technologies to spread misinformation online, including content related to the U.S. presidential election.

The San Francisco A.I. company said it had banned several accounts linked to the campaign from its online services. The Iranian effort, OpenAI added, did not seem to reach a sizable audience.

“The operation doesn’t appear to have benefited from meaningfully increased audience engagement because of the use of A.I.,” said Ben Nimmo, a principal investigator for OpenAI who has spent years tracking covert influence campaigns from positions at companies including OpenAI and Meta. “We did not see signs that it was getting substantial engagement from real people at all.”

The popularity of generative A.I. like OpenAI’s online chatbot, ChatGPT, has raised questions about how such technologies might contribute to online disinformation, especially in a year when there are major elections across the globe.

In May, OpenAI released a first-of-its-kind report showing that it had identified and disrupted five other online campaigns that used its technologies to deceptively manipulate public opinion and influence geopolitics. Those efforts were run by state actors and private companies in Russia, China and Israel as well as Iran.

These covert operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.

An example of the Iranian-backed Storm-2035 campaign, which used ChatGPT to generate content around topics including the U.S. presidential election. via OpenAI

This week, OpenAI identified several ChatGPT accounts that were using its chatbot to generate text and images for a covert Iranian campaign that the company called Storm-2035. The company said the campaign had used ChatGPT to generate content related to a variety of topics, including commentary on candidates in the U.S. presidential election.

In some cases, the commentary seemed progressive. In other cases, it seemed conservative. It also dealt with hot-button topics ranging from the war in Gaza to Scottish independence.

The campaign, OpenAI said, used its technologies to generate articles and shorter comments posted on websites and on social media. In some cases, the campaign used ChatGPT to rewrite comments posted by other social media users.

OpenAI added that a majority of the campaign’s social media posts had received few or no likes, shares or comments, and that it had found little evidence that web articles produced by the campaign were shared across social media.

(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems.)
