Deepfake Technology Is Now a Threat to Everyone. What Do We Do?

Kartik Hosanagar (@khosanagar) is a professor of technology and digital business at the Wharton School of the University of Pennsylvania and faculty co-lead of AI for Business. He also is the author of “A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control.”

In October, MIT Prof. Sinan Aral warned his Twitter followers that he had discovered a video, which he hadn’t recorded, of himself endorsing an investment fund’s stock-trading algorithm. In reality, it wasn’t Prof. Aral in the video, but a highly persuasive artificial-intelligence creation in his likeness, known as a “deepfake.”

It is striking that scammers targeted Prof. Aral, considering he is a leading expert on the study of misinformation online. It also suggests that deepfake technology is now at an inflection point: Thanks to a number of free deepfake apps that are just a Google search away, anyone can become a victim of such a scam.

The term deepfake has its origins in pornography, but it has come to mean the use of AI to create synthetic media (images, audio, video) in which someone appears to do or say something that, in reality, they never did or said. The technology isn’t always misused. Cadbury, for example, joined with Bollywood celebrity Shahrukh Khan on a marketing campaign for small businesses in India hit by Covid-19. Business owners uploaded details of their stores, and Cadbury used deepfake technology to create the effect of Mr. Khan promoting them in tailored TV ads. (The campaign was transparent about its fakery.)

But positive use cases are likely to be overshadowed in coming years by the technology’s potential role in financial fraud, identity theft and worse—from the savaging of reputations to the stoking of civil and political unrest. 

Current laws targeting fraudulent impersonation weren’t designed for a world with deepfake technology, and efforts at the federal level to update them have faltered so far. One stumbling block is the need to also protect parody and other forms of free speech.

Another big challenge is that in an online world where people can anonymously upload content, it can be difficult to find the individuals behind deepfakes. Some researchers have proposed putting the onus on platforms such as Facebook and YouTube by making their legal protections for user-generated content conditional on their taking “reasonable steps” to police their own platforms.

Broad adoption of these kinds of laws could create meaningful deterrents—eventually. But the technology is moving so fast that lawmakers will likely always lag behind. That is why I believe we are going to have to rely on technology to protect us from a problem it helped create. 

One such solution is to detect deepfakes via machine-learning methods. For instance, while deepfakes appear highly realistic, the technology isn’t yet capable of generating natural eye blinking in the impersonated individuals. As such, machine-learning algorithms have been trained to detect deepfakes using eye-blinking patterns. While these detectors can be successful in the short term, people looking to evade such systems will likely just respond with better technology, creating a continuing and expensive cat-and-mouse game.
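
The article doesn’t spell out how such a detector works. As a rough illustration of the blink-rate idea, the Python sketch below screens a clip by counting blinks in a sequence of per-frame eye-aspect-ratio (EAR) values, which are assumed to have been extracted beforehand by a facial-landmark model; every threshold here is an illustrative assumption rather than a value from a published detector.

```python
# Minimal sketch of blink-rate screening, the heuristic behind early
# blink-based deepfake detectors. Assumes per-frame eye-aspect-ratio
# (EAR) values were already extracted (e.g., by a landmark model).
# All thresholds are illustrative assumptions.

def count_blinks(ear_series, closed_thresh=0.21, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with EAR below threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # blink still in progress at clip end
        blinks += 1
    return blinks

def looks_synthetic(ear_series, fps, min_blinks_per_min=6.0):
    """Flag a clip whose blink rate falls well below the human baseline.
    (People blink roughly 15-20 times per minute; early deepfakes blinked
    far less, which is the signal this heuristic exploits.)"""
    minutes = len(ear_series) / fps / 60.0
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# Example: a 30-second clip at 30 fps containing a single blink.
ears = [0.30] * 900
ears[100:103] = [0.15, 0.12, 0.15]
print(looks_synthetic(ears, fps=30))  # True: 2 blinks/min, flagged
```

As the paragraph above notes, this kind of detector is brittle: once generators learn to blink naturally, the feature disappears and the cat-and-mouse game resumes.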

A better approach, with a longer time horizon, is media provenance or authentication: systems that verify the origins of images and videos.

Microsoft, for instance, has developed a prototype of a system called AMP (Authentication of Media via Provenance) that enables media-content creators to create and assign a certificate of authenticity to their content. Under such a system, every time you watch a video of, say, the U.S. president, the technology would help your browser or media-viewing software verify the source of the video (for example, a news network or the White House). The process could be delivered as simply as through an icon, much like the current browser padlock icon indicating that any information you send to that website is protected from third-party tampering en route. To be effective in practice, such systems would have to be widely adopted by all content creators, which will take time.
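
The details of AMP aren’t described here, but the core provenance idea is simple: a creator signs a digest of the media, and the viewer’s software checks that signature against a trusted public key. The Python sketch below illustrates that flow with Ed25519 signatures from the third-party cryptography package; the function names and overall design are hypothetical stand-ins, not Microsoft’s API.

```python
# Illustrative provenance check: sign at publish time, verify at view time.
# Not AMP's actual design; names and flow are assumptions.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def publish(media: bytes, creator_key: Ed25519PrivateKey) -> bytes:
    """Creator signs a digest of the media; the signature travels with it
    as a 'certificate of authenticity'."""
    return creator_key.sign(hashlib.sha256(media).digest())

def verify(media: bytes, signature: bytes,
           creator_pub: Ed25519PublicKey) -> bool:
    """Viewer's player recomputes the digest and checks the signature
    against the creator's known public key (say, a news network's)."""
    try:
        creator_pub.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

# A browser-style check that could back a padlock-like authenticity icon.
key = Ed25519PrivateKey.generate()
video = b"...encoded video frames..."
certificate = publish(video, key)
print(verify(video, certificate, key.public_key()))                # True: show icon
print(verify(video + b"tampered", certificate, key.public_key()))  # False: warn
```

A real deployment would also need a way to distribute and trust creators’ public keys, which is part of the adoption problem the article flags.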

While legislation eventually may offer protection against deepfakes, I believe the market could be quicker—provided we, as consumers and citizens, care.

Write to Dr. Hosanagar at reports@wsj.com.

Computer-generated videos are getting more realistic and even harder to detect thanks to deep learning and artificial intelligence. As WSJ’s Jason Bellini finds in this episode of Moving Upstream, these so-called deepfakes can be playful, but can also have real, damaging consequences for people’s lives. (Video from 10/15/18)

Copyright ©2021 Dow Jones & Company, Inc. All Rights Reserved.