Knowing Things by Learning | Fake pornography is rampant: what can artificial intelligence do?

This article is published by NetEase Cloud.

 

"Knowing things by learning" is a brand column created by NetEase Yunyidun. The words come from Han Wang Chong's "Lun Heng · Real Knowledge". People have different abilities. Only by learning can they know the truth of things, and only afterward can they be wise. If you don't ask, you won't know. "Knowing things by learning" hopes to bring you gains through technical dry goods, trend interpretation, character thinking and precipitation, and also hopes to open your eyes and achieve a different you. Of course, if you have good knowledge or sharing, you are also welcome to contribute by email ([email protected]).

This article was written by Louise Matsakis, an editor at WIRED covering cybersecurity, internet law, and internet culture, and formerly an editor at VICE's tech site Motherboard and at Mashable.

Gfycat - an online platform for hosting animated images, dedicated to making it faster and easier to upload and share videos and GIFs.

As an online animated-image hosting platform, the company was founded to bring the GIF viewing experience into the 21st century. "Gfy" stands for "GIF Format Yoker", a name that neatly reflects the company's purpose: yoking GIFs to HTML5 video.

 

Facial recognition and machine learning have become commonplace, and people on the Internet have begun using these technologies to create fake pornographic videos. As Motherboard reported, people are making AI face-swap porn that grafts celebrities' faces onto porn performers, such as a fake video of Wonder Woman star Gal Gadot sleeping with her stepbrother. At a time when Reddit, Pornhub, and other communities are struggling to ban these "deepfakes", the GIF-hosting company Gfycat has found a promising solution.

 

Gfycat says it has found a way to use artificial intelligence to identify fake videos, and it has already started using the technology to moderate GIFs on its platform. The approach shows one way the fight against fake video content might be waged in the future, a fight that will only intensify as platforms like Snapchat make video ever more central to how news is consumed.

 

With at least 200 million active users, Gfycat hopes to filter deepfakes more comprehensively than Reddit, Pornhub, and Discord have managed. Mashable reported that Pornhub failed to remove some deepfake videos from its website, including several with millions of views (they were taken down after the article was published). In early February, Reddit banned a number of deepfake communities but left related boards such as r/DeepFakesRequests and r/deepfaux online until WIRED brought them to its attention while reporting this story.

 

These efforts shouldn't be dismissed, but they also show how difficult it is to moderate Internet platforms with human labor alone, and why it matters that computers can now find deepfakes without human help.

 

Artificial intelligence fights back

Gfycat has developed two AI tools, both named after cats: Project Angora and Project Maru. When a user uploads a low-quality GIF of Taylor Swift to Gfycat, Project Angora can search the web for a higher-resolution version to replace it. In other words, it can find the same clip of Swift singing "Shake It Off" and upload that better version.
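
To make the idea concrete, here is a minimal sketch of how near-duplicate clip lookup can be built on perceptual hashing. It is an illustration only, not Gfycat's actual implementation: the known_clips index and its URL are hypothetical, and a production system would index vast numbers of frames with a specialized similarity-search backend.

```python
# A toy illustration of near-duplicate clip lookup in the spirit of
# Project Angora. NOT Gfycat's implementation. Requires the
# third-party Pillow and imagehash packages.
from PIL import Image
import imagehash

# Hypothetical index mapping perceptual hashes of known high-quality
# clips to their source URLs.
known_clips = {
    imagehash.hex_to_hash("d1c4d1c4e0f0b8a0"): "https://example.com/hq/shake-it-off.mp4",
}

def find_higher_quality_source(frame_path, max_distance=8):
    """Perceptually hash one frame of an upload and look for a
    visually similar frame among known high-quality sources."""
    upload_hash = imagehash.phash(Image.open(frame_path))
    best_url, best_distance = None, max_distance + 1
    for known_hash, url in known_clips.items():
        distance = upload_hash - known_hash  # Hamming distance in bits
        if distance < best_distance:
            best_url, best_distance = url, distance
    return best_url  # None if nothing is close enough
```

Perceptual hashes change only slightly under re-compression and rescaling, which is why a low-quality upload can still land within a few bits of its high-quality source.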

 

Now suppose you didn't tag your clip as Taylor Swift; that's not a problem either. Project Maru can reportedly distinguish between individual faces and will automatically tag the GIF with Swift's name. This makes sense for Gfycat, which needs to index the material that millions of users upload to the platform every month.
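
Gfycat hasn't published how Maru works internally, but embedding-based face recognition is a standard way to build this kind of tagging. The sketch below uses the open-source face_recognition library; the reference gallery, the swift_embedding.npy file, and the 0.6 threshold are assumptions for illustration.

```python
# A minimal sketch of embedding-based face tagging, in the spirit of
# Project Maru. The celebrity gallery here is a stand-in assumption.
import face_recognition
import numpy as np

# Hypothetical reference gallery: one known embedding per celebrity.
gallery = {
    "Taylor Swift": np.load("swift_embedding.npy"),  # assumed file
}

def tag_frame(frame_path, threshold=0.6):
    """Return names of gallery faces that match faces in the frame."""
    image = face_recognition.load_image_file(frame_path)
    tags = []
    for encoding in face_recognition.face_encodings(image):
        for name, reference in gallery.items():
            # Euclidean distance between 128-d face embeddings;
            # smaller means more similar.
            if np.linalg.norm(encoding - reference) < threshold:
                tags.append(name)
    return tags
```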

 

Most deepfakes created by amateurs are not entirely believable. If you look closely, the frames don't quite match: in one widely shared clip, Donald Trump's face doesn't fully cover Angela Merkel's. Your brain does the work of filling in the gaps where the technology failed to turn one person's face into another's.

 

Project Maru is far less forgiving than the human brain. When Gfycat's engineers run a deepfake through the tool, it registers that a clip resembles, say, Nicolas Cage, but it won't issue a positive match, because the face isn't rendered perfectly in every frame. This is one way Gfycat spots deepfakes: a GIF that only partially resembles a celebrity is itself a red flag.
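
That "partial resemblance" signal can be pictured as simple threshold logic over per-frame similarity scores. A toy version, with invented thresholds rather than Gfycat's tuning, might look like this:

```python
# Illustrative logic for the "partial match" red flag described above:
# a clip that consistently *almost* matches a celebrity is suspect.
import numpy as np

def classify_clip(frame_distances, match=0.45, resemble=0.62):
    """frame_distances: per-frame embedding distance to the closest
    celebrity in the gallery (smaller = more similar)."""
    d = np.mean(frame_distances)
    if d < match:
        return "confident match"    # likely genuine celebrity footage
    if d < resemble:
        return "possible deepfake"  # looks like them, but never quite
    return "no match"               # probably not a celebrity at all
```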

 

Project Maru alone probably can't stop every deepfake, and as the fakes grow more sophisticated they will only get harder to catch. Sometimes a deepfake features not a celebrity's face but that of a private individual, even someone the creator knows personally. To combat that variant, Gfycat developed a masking technique that works much like Project Angora.

 

If Gfycat suspects that a video has been altered to show someone else's face, say, if Maru can't state with certainty that the face is Taylor Swift's, the company can "mask" the victim's face and search for the body and background footage elsewhere. For example, with a deepfake that puts someone else's face on Trump's body, the AI could scour the internet and turn up the original State of the Union footage it borrows. If the new GIF doesn't match the source file, the AI can conclude that the video has been modified.
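
One way to picture this masking step: hide the face region, verify that the rest of the frame matches the candidate source footage, and then check whether the faces themselves disagree. The sketch below assumes the two frames are already aligned and that a face detector has supplied the bounding box; the tolerances are invented for illustration.

```python
# A toy version of the masking comparison: same body and background,
# but a very different face, suggests the face was replaced.
import numpy as np

def looks_face_swapped(suspect, source, face_box,
                       bg_tol=10.0, face_tol=25.0):
    """suspect, source: HxWx3 uint8 frames aligned to each other.
    face_box: (top, left, bottom, right) of the detected face."""
    t, l, b, r = face_box
    mask = np.zeros(suspect.shape[:2], dtype=bool)
    mask[t:b, l:r] = True  # True inside the face region

    # Per-pixel difference, averaged over color channels.
    diff = np.abs(suspect.astype(float) - source.astype(float)).mean(axis=2)
    background_diff = diff[~mask].mean()
    face_diff = diff[mask].mean()

    return background_diff < bg_tol and face_diff > face_tol
```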

 

Gfycat plans to extend its masking technology beyond faces to detect other types of fake content, such as fraudulent weather or science videos. The company has long relied heavily on artificial intelligence to categorize, curate, and moderate content. "The accelerating pace of innovation in artificial intelligence has the potential to dramatically change our world, and we will continue to adapt our technology to these new developments," Gfycat CEO Richard Rabbat said in a statement.

 

Not foolproof

Gfycat's technology fails in at least one deepfake scenario: a face and body that exist nowhere else online. Suppose, for example, that two people film a sex video together and then swap in someone else's face. If no celebrity is involved and the footage was never posted elsewhere, neither Maru nor Angora would have any way to know the content had been altered.

 

For now this is a fairly unlikely scenario, since making a deepfake requires both source video and photos of the target. But it's not hard to imagine a vengeful ex-lover building a deepfake of a victim from phone footage that was never made public.

 

Even with deepfakes featuring porn stars or celebrities, the AI sometimes can't be sure what it's looking at, which is why Gfycat employs human moderators to help. The company also uses other metadata, such as where a clip was shared or who uploaded it, to judge whether it is a deepfake.

 

Not all deepfakes are malicious, either. As the Electronic Frontier Foundation pointed out in a blog post, examples like the Merkel/Trump mashup above amount to political commentary or satire. There are other legitimate uses for the technology as well, such as anonymizing people who need identity protection or creating consensual, altered pornography.

 

Still, it's easy to see why so many people find deepfakes distressing. They herald a future in which it is impossible to tell whether a video is real or fake, with wide-ranging implications for propaganda and beyond. Russia flooded Twitter with fake bots during the 2016 presidential election; in the 2020 election, it may do the same with fake videos of the candidates themselves.

 

A long battle

Gfycat offers one potential solution, but it may be only a matter of time before deepfake creators learn to circumvent its safeguards. The resulting arms race could play out over years.

 

As Hany Farid, a computer science professor at Dartmouth College who specializes in digital forensics, image analysis, and human perception, put it: "We're decades away from having forensic technology that you can unleash on a Pornhub or a Reddit and conclusively tell a real from a fake." And anyone who really wants to fool the system will start building ways to defeat that forensic technology into their fakes.

 

Related Reading:

Knowing Things by Learning, Issue 7 | Future security risks: AI's soft underbelly - deliberately deceiving neural networks

Knowing Things by Learning, Issue 8 | The real reason behind your network security problems

Knowing Things by Learning, Issue 9 | Anti-spoofing mechanisms in DNN-based face recognition

 

If your platform is plagued by unwanted content such as pornography or political disinformation, you can try NetEase Yidun's anti-spam service and its content security solution.

 

Learn about NetEase Cloud:
NetEase Cloud Official Website: https://www.163yun.com/
New User Gift Package: https://www.163yun.com/gift
NetEase Cloud Community: https://sq.163yun.com/
