Politics

AI deepfakes in campaigns may be detectable, but will it matter?

So the industry needs to implement “active detection” measures as well, Farid said, like embedding digital watermarks into media metadata. He would extend that imperative to devices that record and capture real media — an unedited mobile phone photo would essentially come with a certification stamp, verifying when and where it was recorded.
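The certification stamp Farid describes would cryptographically bind a photo's pixels to its capture metadata, so any later edit breaks verification. A minimal sketch of that idea, using Python's standard library (the device key, function names, and HMAC scheme here are illustrative assumptions; a real provenance system such as C2PA uses hardware-backed public-key signatures and a standardized manifest format):

```python
import hashlib
import hmac
import json

# Hypothetical device secret; a real implementation would use a
# hardware-backed private key and a public-key signature instead of HMAC.
DEVICE_KEY = b"example-device-secret"

def certify_capture(media_bytes: bytes, captured_at: str, location: str) -> dict:
    """Produce a stamp binding the media's content hash to its capture metadata."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": captured_at,
        "location": location,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, serialized, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_capture(media_bytes: bytes, stamp: dict) -> bool:
    """Return True only if the media is unmodified and the stamp is authentic."""
    payload = stamp["payload"]
    if hashlib.sha256(media_bytes).hexdigest() != payload["sha256"]:
        return False  # pixels were altered after capture
    serialized = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["signature"])
```

Under this sketch, an unedited photo verifies successfully, while even a one-byte change to the image invalidates the stamp.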

Pledges from industry

That two-pronged approach seems to be the one favored by the nascent industry, with established firms like Adobe, Microsoft and the BBC leading the Coalition for Content Provenance and Authenticity (C2PA), which is developing technical standards for certifying the sources of digital content. At a White House gathering of AI corporate leaders in July, the firms pledged to implement active detection protocols.

Anna Bulakh, head of ethics and partnerships at Respeecher, likened the ongoing development of intra-industry standards for generative AI to how websites migrated to more secure encrypted protocols, which begin web addresses with “https.” Describing herself as a “pragmatic optimist,” Bulakh said she’s hopeful AI firms can work with governments to mitigate the technology’s abuse.

But, she noted, not every startup in the AI field is as ethical as her own. In the tech space, for every Reddit trying to enforce community norms, there is a 4chan, where nihilism reigns. The same goes for generative AI, where many companies take few steps, if any, to combat malicious use, saying it's up to users to behave responsibly. "They allow you to train [a] voice model and copy anyone's voice," Bulakh said. "We have to understand that our societies are really vulnerable to disinformation. Our societies are really vulnerable to fraud as well. Our societies are not that tech savvy."

And even the firms that have generated standards of conduct for how their products are used haven’t been able to prevent users from breaking those rules. A recent Washington Post investigation found that OpenAI’s ChatGPT allowed users to generate personalized arguments for manipulating an individual’s political views, despite the platform’s attempt to ban such uses.
