Artificial intelligence tools are still generating misleading election images


The Challenge of Stopping Misleading Images From Open Source AI: A Commentary on the Art, Science, and Politics of Generative Artificial Intelligence

Hood says the problem with generative AI is twofold: platforms need to be able to identify misleading images, and they need to prevent such images from being created in the first place. Meta’s system for watermarking its own AI-generated content, according to a recent report, was easy to circumvent.

The technology can be trained to produce content that is gruesome and harmful in a variety of ways. He is a fan of the freewheeling experimentation that open source image-generation technology has unleashed, but that same freedom makes possible the creation of explicit images of women for harassment.

The same tools serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry, but it is also the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. Stable Diffusion, developed by the startup Stability AI, is a high-resolution image generator with 10 million users; it has guardrails intended to prevent explicit image creation and policies barring malicious use. But the generator is open source, online guides explain how to circumvent its built-in limitations, and a customized version of it is also available as open source.

Ajder says that even as it has become a favorite of researchers, creatives like Cohen, and academics working on AI, open source image-generation software has also become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

AI Tools Are Still Generating Misleading Election Images: A Case Study of Brandon Gill, Dinesh D’Souza, and Donald Trump

Despite years of evidence to the contrary, many Republicans still believe that President Joe Biden’s win in 2020 was illegitimate. A number of election-denying candidates won their primaries on Super Tuesday, including Brandon Gill, the son-in-law of right-wing pundit Dinesh D’Souza and promoter of the debunked 2000 Mules film. Going into this year’s elections, claims of election fraud remain a staple for candidates running on the right, fueled by dis- and misinformation both online and off.

“At the moment platforms are not particularly well prepared for this. So the elections are going to be one of the real tests of safety around AI images,” says Hood. The tools and platforms, he adds, need to make more progress, particularly around images that could be used to promote claims of a stolen election or to discourage people from voting.

While some of the images featured political figures, namely President Joe Biden and Donald Trump, others were more generic and, worries Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), potentially more misleading. The researchers created images depicting militias outside a polling place and voting machines being tampered with, and they were able to prompt Stability AI’s Dream Studio to generate a picture of President Biden in a hospital bed.

“The real weakness was around images that could be used to try and evidence false claims of a stolen election,” says Hood. Most of the platforms, he says, have no clear policies or safety measures around such images.

CCDH researchers tested 160 prompts on ChatGPT Plus, Midjourney, Dream Studio, and Image Creator, and found that Midjourney was the most likely to produce misleading election-related images, doing so about 65 percent of the time. Researchers were able to prompt ChatGPT Plus to do so only 28 percent of the time.


Using provenance tools to verify the origins of DALL-E 3 images and declining requests to depict real candidates: an OpenAI spokesperson responds

“It shows that there can be significant differences between the safety measures these tools put in place,” says Hood. “If one can so effectively seal off these weaknesses, it means that the others haven’t really bothered.”

Kayla Wood, a spokesperson for OpenAI, told WIRED that the company is working to “improve transparency on AI-generated content and design mitigations like declining requests that ask for image generation of real people, including candidates.” Wood added that OpenAI is developing provenance tools to help verify the origin of images created by DALL-E 3, and that the company will continue to learn from how its tools are used.
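Provenance checks of this kind generally work by reading metadata that a generator embeds in its output files and then validating it. As a rough, hypothetical illustration of that first step (not OpenAI’s actual tooling, which relies on cryptographically signed manifests under the C2PA standard), the sketch below uses Pillow to list whatever metadata an image file carries; the file name and any fields that turn up are assumptions and will vary by generator.

```python
# Hypothetical sketch: list metadata embedded in an image file as a first,
# crude provenance check. This is NOT OpenAI's provenance tooling; real
# systems (e.g. C2PA) embed signed manifests that need dedicated verifiers.
from PIL import Image, ExifTags


def inspect_metadata(path: str) -> dict:
    """Collect metadata fields that generators sometimes embed in images."""
    img = Image.open(path)
    found = {}

    # PNG text chunks and other format-specific fields show up in img.info
    # (for example, some generators record the prompt or model name here).
    for key, value in img.info.items():
        if isinstance(value, (str, bytes)):
            found[f"info:{key}"] = value if isinstance(value, str) else value[:80]

    # EXIF tags such as "Software" can also carry a generator's name.
    for tag_id, value in img.getexif().items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        found[f"exif:{tag}"] = value

    return found


if __name__ == "__main__":
    # "sample.png" is a placeholder path, not a real file from the report.
    for field, value in inspect_metadata("sample.png").items():
        print(f"{field}: {value}")
```

Metadata like this is trivial to strip or forge, which is part of why provenance efforts focus on cryptographically signed manifests rather than the kind of watermark labeling the report found easy to circumvent.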