Faking It – Will OpenAI Outwit AI Tricksters?

An ugly (and obvious) side of AI reared its head this past week.

OpenAI recently revealed that it had thwarted five covert influence operations attempting to misuse its AI models for deceptive activities.
These operations, originating from various nations (including Russia, China, Iran, and Israel), aimed to manipulate public opinion by generating fake comments, articles, and social media profiles.
This revelation has stirred a complex debate about the role of AI in both safeguarding and potentially undermining democratic processes.

Truth bomb: Humans were doing this long before AI… now AI gives it the speed and cheap resources to manipulate at scale.

OpenAI (in this case) was smart in proactively letting the world know that this happened.
It wasn’t that long ago that social media networks took tons of heat, not only for allowing this content across their platforms, but for not proactively notifying the public.

Baby steps…

Sure, it’s commendable that OpenAI let the world know, but it’s also a smart public relations play: get ahead of the story now, rather than be accused of burying these issues further down the road.

Are OpenAI (and other AI developers) committed to ethical AI usage?

The company’s efforts to prevent abuse could signal a robust dedication to maintaining the integrity of its technology.
Some will argue that this could also be a strategic move to preempt criticism.
By publicly addressing these issues now, OpenAI positions itself as a responsible player in the AI field, potentially softening any backlash from future incidents.

It’s getting real… and real good.

The ability of AI to produce human-like text, images, and videos is both revolutionary and frightening.
On one hand, it can enhance content creation, making our media interactions more seamless and engaging.
On the other hand, it opens the door to sophisticated misinformation campaigns that can be hard to detect and combat.
Look no further than last week’s “All Eyes on Rafah” image, which has now become the Internet’s most viral AI image (is that a good or a bad thing?).
Meta recently reported finding AI-generated content used deceptively on its platforms, blending seamlessly with legitimate posts from news organizations and lawmakers.

This makes distinguishing real from fake increasingly challenging (in a world where the context of the story is critical).

Just today, The New York Times published this story: OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance.
A band of OpenAI insiders has just pulled the fire alarm on what they describe as a reckless and secretive culture brewing at the San Francisco AI powerhouse.
They’re in a high-stakes sprint to craft the most potent AI systems the world has ever seen, but it seems the breakneck pace might be costing them their balance.
A tale of two cities.

Here’s what we know…

The AI community must prioritize the development of tools that can detect and mitigate misinformation.
OpenAI’s new deepfake detector is a step in the right direction, but as Sandhini Agarwal (an OpenAI researcher) stated, “there is no silver bullet” in the fight against deepfakes.
Continuous innovation? Yes!
Constant vigilance? Yes!

The future of AI holds incredible promise (I promise), but it also requires us to navigate its potential pitfalls with care (something we have not, historically, been all that good at).

This is what Jeremy White and I discussed on 640 Toronto. Listen in right here.

Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.