Iran and an Israeli company also exploited the tools in online influence efforts, but none gained much traction, an OpenAI report said.
OpenAI said on Thursday that it had identified and disrupted five online campaigns that used its generative artificial intelligence technologies to deceptively manipulate public opinion around the world and influence geopolitics.
The efforts were run by state actors and private companies in Russia, China, Iran and Israel, OpenAI said in a report about covert influence campaigns. The operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.
OpenAI’s report is the first time that a major A.I. company has revealed how its specific tools were used for such online deception, social media researchers said. The recent rise of generative A.I. has raised questions about how the technology might contribute to online disinformation, especially in a year when major elections are happening across the globe.
Feeling some real surprised Pikachu energy right now
> The operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.
Covert activity against other countries seems like an area where one might want to invest in one’s own automated translation tools, or at least hire a human translator.
Good steps, hope they would do the same for any other company or government, the US included.
We should know about these abuse cases and study them if we’re ever to get a leash and collar on AI before disaster occurs.