OpenAI: We’ll Stop GPT Misuse for Election Misinfo
2024-01-17 01:16:19 | securityboulevard.com

Sam Altman, CEO of OpenAI

Sam says: avoid AI abuse, protect the democratic process.

With elections coming up in the US and other major countries, concerns are rising that hostile nations might use AI to sow dissent. Generative AI tools such as ChatGPT and DALL-E will get extra guardrails, say their creators.

OpenAI CEO Sam Altman (pictured) wants to “make sure our AI systems are built, deployed and used safely.” In today’s SB Blogwatch, we assess the challenge.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: Dialup modem song.

Guardrails Prevent Trouble?

What’s the craic? Asa Fitch reports—“OpenAI Curbs Use Of Its Tools In Politics”:

People aren’t allowed
OpenAI outlined limits on using its tools in politics during the run-up to elections in 2024, amid mounting concern that artificial-intelligence systems could mass-produce misinformation and sway voters in high-profile races. … The growth of such tools has raised worry that [generative AI] could be used to manipulate voters with false news stories and computer-generated images and video.

OpenAI said people aren’t allowed to use its tools for political campaigning and lobbying. People also aren’t allowed to create chatbots that impersonate candidates and other real people, or chatbots that pretend to be local governments. … It also banned applications that discouraged voting—by claiming a vote was meaningless, for example.

It’s an international issue. Gintaras Radauskas reminds us—“OpenAI to introduce anti-disinformation tools”:

Scaled influence operations
Elections are taking place this year in countries that are home to half the world’s population and represent 60% of global GDP. People in the United States, the United Kingdom, the European Union, and India will all vote this year.

And just recently, the World Economic Forum’s “Global Risks Report 2024” warned that generative artificial intelligence (AI) tools could help disrupt politics via the spread of false information. … To increase vigilance ahead of the elections, OpenAI said it has brought together expertise from its safety systems, threat intelligence, legal, engineering, and policy teams. It anticipates quite a few misleading deepfakes, chatbots impersonating candidates, and scaled influence operations.

For example? Okian Warrior has one:

A recent example is Mark Ruffalo (aka “The Hulk”) reposting an image of Trump on Epstein’s plane. Someone made a deepfake image smearing Trump, Ruffalo believed it, and because Ruffalo has a wide following the fake image went far and wide on the internet.

Because, you know, everybody in the ****in’ country can edit and post videos now.

Why now? Why, ask u/MassiveWasabi:

OpenAI … have all the money they need, they have the best researchers, and they have Microsoft providing them with massive amounts of compute. Now all they need to do is make sure the public doesn’t freak out and put pressure on the government to regulate them.

You know what would make the public freak out? Massive, unending streams of disinformation created by generative AI. Images that are literally indistinguishable from reality, or AI agents all over the internet spreading propaganda for presidential candidates.

The way this upcoming election plays out will shape AI regulation for the foreseeable future, and all the big AI companies know it. There’s way too much at stake here from the corporate perspective.

Who do we need to worry about? Foreigners, finks flatline [You’re fired—Ed.]:

I’m not as worried about Joe Schmo using OpenAI services as I am about a foreign state with weaponized AI tools. Plus nobody really cares if you can show that something was AI generated after the fact; the first impression is all that counts.

Remember the backlash against snopes.com? Social media plus AI is going to make this a really spectacular election season and OpenAI can do approximately nothing to curb that.

And will OpenAI’s plan help? Nope, cries OYAHHH:

Kinda worthless when the targets of this technology will simply move to technologies not controlled by big tech. … When you put the squeeze on the free flow of information it tends to leak out from a direction you were not expecting.

Google, Microsoft, Apple honestly are stupid enough to think they are the Gatekeepers. They are minding gates where the sheep pass, not where the wolves roam free.

So, what’s the solution? Since you asked, u/ButSinceYouAsked answers:

Drowning out false information with true information is the way to go (or contextualising out-of-context stuff – see Community Notes on X or YouTube’s little, “Here’s an article on [topic]”). … Anything that gives people greater access to true information wins in my book.

We don’t need no stinkin’ AI, kobe_throwaway’s thinkin’:

I don’t think AI models are capable of producing the amount of misinformation that is coming out from some of the most popular American news outlets nor is it going to have as much impact as the said outlets will have. I do however believe [AI is] going to be the scapegoat.

Meanwhile, VeryFluffyBunny pays attention to that man behind the curtain:

Have we reached peak hype yet? What OpenAI are essentially claiming to potential investors & clients is that they can provide massively influential PR & marketing. This has got nothing to do with preserving democracy & everything to do with making money. A lot of money.

And Finally:

Ask your parents

Previously in And Finally


You have been reading SB Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi, @richij or [email protected]. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.

Image sauce: Steve Jennings/Getty Images for TechCrunch (cc:by; leveled and cropped)

Article source: https://securityboulevard.com/2024/01/openai-election-misinfo-richixbw/