President Biden recently signed an executive order that the White House claims is the most “sweeping action ever taken” to protect Americans from artificial intelligence (AI), but a leading deepfake detection company said the measure has “dangerous limitations.”
Speaking with Fox News Digital, Reality Defender co-founder and CEO Ben Colman said he applauded the Biden administration for addressing AI now rather than kicking it down the road. He said the move was incredibly important for raising awareness about the potential dangers of the technology.
However, Colman also raised several concerns about the recent executive order, especially regarding provenance watermarking, which involves adding a message, logo or signature to digital content to identify its origin or source.
Part of the executive order, the White House has said, will attempt to protect citizens from AI-enabled fraud by establishing standards and practices to differentiate between authentic and AI-generated content.
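To make the concept concrete, here is a minimal, deliberately naive sketch of watermarking: hiding a short “source” string in the least significant bits of an image’s pixels. Everything here, including the payload and the function names, is invented for illustration; real provenance schemes are far more sophisticated than this.

```python
# Illustrative only: a naive least-significant-bit (LSB) watermark.
# The payload string and function names are hypothetical.
import numpy as np

def embed_watermark(pixels: np.ndarray, payload: str) -> np.ndarray:
    """Hide a UTF-8 payload in the lowest bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_chars: int) -> str:
    """Read n_chars of payload back out of the pixel LSBs."""
    bits = (pixels.flatten()[: n_chars * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("utf-8", errors="replace")

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
marked = embed_watermark(image, "source=GeneratorX")
print(extract_watermark(marked, 17))  # -> source=GeneratorX
```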
“The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content,” the White House said. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”
Colman said that while Reality Defender believes the guidance is presented “with the best of intentions,” the team is concerned by the administration’s belief that watermarking is an effective way of labeling AI-generated content and protecting consumers from “disinformation, deception and fraud.”
Soheil Feizi, a professor at the University of Maryland, recently told Wired that his research team determined that there are no reliable watermarking applications currently available.
The research team easily evaded current watermarking methods and found it even simpler to add fake logos or emblems to images that AI did not generate.
A similar collaboration between the University of California, Santa Barbara and Carnegie Mellon University found that a series of simulated attacks could easily remove watermarks.
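In toy form, both failure modes the researchers describe are easy to demonstrate against the naive scheme sketched above; the published attacks targeted far stronger watermarks, but the two moves, stripping a mark and forging one, are the same in kind.

```python
# Two attacks on the toy LSB scheme; re-uses embed_watermark,
# extract_watermark and `marked` from the sketch above.
import numpy as np

rng = np.random.default_rng(0)

# 1) Removal: flipping random low-order bits scrambles the payload
#    without visibly changing the image.
attacked = marked ^ rng.integers(0, 2, size=marked.shape, dtype=np.uint8)
print(extract_watermark(attacked, 17))  # -> gibberish; the mark is destroyed

# 2) Forgery: stamping the same mark onto an authentic photo makes
#    real content look machine-generated.
real_photo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
forged = embed_watermark(real_photo, "source=GeneratorX")
print(extract_watermark(forged, 17))    # -> source=GeneratorX
```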
The provenance watermarking proposed in the executive order, Colman noted, has been developed and promoted by some of the largest tech companies, including Microsoft, Adobe, Arm and Intel.
“It’s a bit of a walled garden approach where they’re trying and saying, trust us, we will govern ourselves, no need for any regulations. And our view is that it’s quite a bit self-serving for the large main players,” Colman said.
He also stressed that anyone outside a walled garden (an environment, such as the Google or Apple app stores, that controls users’ access to network-based content) will not follow the rules set out by the government and the large corporations.
These include bad actors, state-level hackers and general hackers, who are responsible for a significant portion of the nefarious uses of AI. These individuals also typically do not use the platforms where companies can apply effective oversight tools.
“It’s, in our view, a bit of a Band-Aid,” Colman said. “It presupposes that watermarking is perfect and also that everybody, including all hackers, are going to follow the rules. It’s like saying we’re building great locks for houses that only take a certain kind of key and all aspiring criminals are going to follow our protocols.”
He also suggested that any watermark that is publicized in an effort to become a standard is ultimately hackable.
“Not only can you potentially break it, but you can also fake it as well,” Colman said.
While Reality Defender believes that provenance is a helpful tool, it is not the only one. To establish provenance, a watermark is embedded within generated content as a signal that attempts to encode the source of that content.
Colman said the modern world is a field of inference, one in which we may never actually have the ground truth. The assumption is that users may never see the original asset, such as a person’s face; perhaps that face actually belongs to an actress, or perhaps the person doesn’t exist at all.
Operating under this assumption, Colman said Reality Defender uses probabilistic scoring, in which content is graded on a scale of 0-99% certainty. By analyzing the pixel layer of images, the spectrograms and waveforms of audio, and sentence construction in text, a deepfake detection company can indicate to a stakeholder whether content is legitimate based on that probabilistic view.
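Reality Defender has not published its models, so the following is only a generic sketch of what probabilistic scoring can look like: hypothetical per-modality detector outputs combined into a single 0-99% certainty figure.

```python
# Generic illustration of probabilistic scoring, not Reality Defender's
# proprietary method. Feature names and weights are hypothetical.
import math

def fake_probability(signals: dict, weights: dict) -> float:
    """Logistic combination of per-modality detector scores in [0, 1]."""
    z = sum(weights[k] * signals[k] for k in signals)
    return 1.0 / (1.0 + math.exp(-z))

signals = {                      # hypothetical detector outputs for one clip
    "pixel_artifacts": 0.8,      # image pixel-layer analysis
    "spectrogram_anomaly": 0.6,  # audio spectrogram analysis
    "waveform_anomaly": 0.4,     # raw audio waveform analysis
}
weights = {"pixel_artifacts": 2.0, "spectrogram_anomaly": 1.5, "waveform_anomaly": 1.0}

score = min(99, round(99 * fake_probability(signals, weights)))
print(f"Likely AI-generated: {score}% certainty")  # ~94% for these inputs
```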
“Companies poised to reap substantial profits from AI-generated content shouldn’t be relied upon to openly admit how susceptible their products are to manipulation and misuse,” Colman added. “As the interests of these companies and the public diverge, the government should prioritize reliable methods of deepfake detection rather than trusting implicitly in unreliable safety measures offered by the biggest players in the industry.”
Colman said he and others are pushing elected officials to implement a quick tool to provide further information before potentially manipulated content goes viral. This could be a probabilistic score, a red-green checkbox, results from a deepfake detection company, or some other kind of report card.
That way, the end user has some degree of knowledge about the authenticity of the content they are viewing before they decide to post it.
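As a rough illustration, a report card of the kind Colman describes could be as simple as mapping that probabilistic score to a traffic-light label; the thresholds below are arbitrary placeholders, not anything Reality Defender has proposed.

```python
# Hypothetical report card: map a 0-99 score to a traffic-light label.
def report_card(score: int) -> str:
    if score >= 70:
        return "RED: likely manipulated; review before sharing"
    if score >= 40:
        return "YELLOW: inconclusive; treat with caution"
    return "GREEN: no manipulation detected"

print(report_card(94))  # -> RED: likely manipulated; review before sharing
```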