AI cannot protect people from forgery

Britt Paris and Joan Donovan, researchers at Data & Society, say the relationship between media, technology, and reality has never been "peaceful." Since the 1950s, when photography was first permitted as evidence in US courts, many people have distrusted the technology, preferring instead to rely directly on eyewitness testimony and related documents. Today, the manipulation of information in the media is far more sophisticated: with artificial intelligence and machine-learning tools, humans can create images and videos in which the eye cannot tell fake from real. This technique is known as deepfake.

Anyone who posts their photos and personal details on social networks risks being faked. Once false content about them is released, it can spread across online platforms within seconds.

AI is being used to create deepfakes, so can AI also be used to stop them? To tackle the problem, Facebook launched the Deepfake Detection Challenge, aimed at developing tools that can distinguish photos and videos faked by algorithms. Startups are joining the effort as well, such as TruePic, which uses AI and blockchain to detect fake images. The US Defense Advanced Research Projects Agency (DARPA) has invested in a system called MediFor, which is said to identify, pixel by pixel, how a video has been altered.
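Detection systems like those sought by the Deepfake Detection Challenge typically score a video frame by frame and then aggregate the scores. The sketch below shows only that aggregation step; `fake_probability` is a hypothetical stand-in for what would, in a real detector, be a trained neural network.

```python
# Sketch of frame-level deepfake scoring. Assumption: fake_probability is a
# stub; a real system would replace it with a trained classifier's output.

def fake_probability(frame):
    """Hypothetical stand-in: score a frame in [0, 1] (1 = likely fake).

    Stub logic: treat average pixel intensity as the score.
    """
    return min(1.0, sum(frame) / (255.0 * len(frame)))

def score_video(frames, threshold=0.5):
    """Average per-frame scores and flag the video if the mean exceeds threshold."""
    scores = [fake_probability(f) for f in frames]
    mean_score = sum(scores) / len(scores)
    return mean_score, mean_score > threshold

# Usage: three tiny "frames" represented as lists of pixel intensities.
frames = [[10, 20, 30], [200, 220, 240], [120, 130, 140]]
mean_score, is_fake = score_video(frames)
```

Averaging is the simplest aggregation choice; real detectors often use more robust statistics, since a deepfake may alter only a short segment of a long video.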

However, most of these solutions stop at identifying which content has been edited, and at answering whether an image or video was manipulated after the record button was pressed. Deepfakes, meanwhile, must be addressed both technically and socially.
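Provenance approaches such as TruePic's work in the opposite direction: rather than detecting fakes after the fact, they fingerprint media at capture time so that any later edit becomes detectable. A minimal sketch of the hashing side, assuming SHA-256 as the fingerprint (the blockchain anchoring of the record is omitted, and the function names are illustrative):

```python
import hashlib

def fingerprint(image_bytes):
    """Compute a SHA-256 fingerprint of the raw image bytes at capture time."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify(image_bytes, recorded_fingerprint):
    """True only if the image is byte-for-byte unchanged since capture."""
    return fingerprint(image_bytes) == recorded_fingerprint

# Usage: illustrative image data, not a real file.
original = b"raw image data captured by the camera"
record = fingerprint(original)   # would be stored in a tamper-evident ledger
tampered = original + b"\x00"    # any single-byte change breaks verification
```

The limitation the article points to remains: a hash only proves the file is unchanged since it was recorded; it says nothing about whether the scene in front of the camera was staged or manipulated.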

Some experts believe deepfakes also have a positive side, such as satirical videos, political commentary, or anonymizing people who need to protect their identities. "Any law about deepfakes should include provisions protecting freedom of speech," said David Greene of the Electronic Frontier Foundation (EFF). He stressed, however, that if deepfakes are used for illegal activities such as extortion, they should be prosecuted.

Many worry that if the law allows deepfakes to exist, companies will be free to harvest images and build online databases for their own gain. Bobby Chesney, a law professor at the University of Texas, said the collection itself is not what is worrisome; what is needed are technical solutions alongside a legal system that can prosecute those who use deepfakes for malicious purposes.

"AI cannot protect people from the risk of forgery. We need to discuss measures to limit the harm of deepfakes rather than try to eliminate them, because they will never disappear completely," he said.