FakeCatcher to combat deepfakes

Deepfakes, pieces of digital media in which an image or video is altered using another person's voice or face, have grown in popularity online over the past several years. You may have already seen one without ever realizing it.
Deepfakes’ enormous popularity has allowed a great deal of disinformation and many scams to circulate online. In response, Intel unveiled “FakeCatcher,” a new tool that it says detects deepfake media with 96% accuracy.
Deepfake technology can produce fake photos, videos, and audio. It is built on artificial-intelligence and machine-learning algorithms. Used legitimately, it can create realistic character animations for films, for example.
However, it can also be employed maliciously, for example to fabricate videos of people saying or doing things they never would. As the technology grows more realistic and sophisticated, it is getting harder and harder to distinguish what is real from what is not. As a result, people are starting to believe things that are untrue, and that is causing real problems.
What risks do deep-fake technologies pose?
Despite being in its early stages, deepfake technology has the potential to be harmful. Because deepfakes can produce realistic, convincing audio and video of people saying and doing things they never said or did, they could be used to spread misleading information or manufacture fake news. Deepfakes could also be used to fabricate evidence in legal proceedings or to pressure someone into confessing to a crime they did not commit. In the hands of a skilled user, the technology is ripe for misuse.
Ironically, deepfakes are also a powerful demonstration of artificial intelligence and machine learning in action. The technology can produce unnervingly exact impersonations of celebrities and public figures saying or doing things they never said or did.
How does FakeCatcher from Intel function?
Intel’s newly developed technology detects deepfakes in real time by examining how a video’s pixels reflect human blood flow. This sets it apart from other detection tools, which primarily analyze raw data for signs of manipulation and try to pinpoint a video’s flaws.
As blood circulates through the body, it causes subtle color changes in the skin, and Intel’s technology can detect those variations in a video’s pixels. AI algorithms then gather these blood-flow signals and convert them into information that helps determine whether a video is authentic.
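The principle behind this blood-flow analysis is known as remote photoplethysmography (rPPG): blood circulation imprints a faint periodic color change on skin pixels, at the frequency of the heartbeat. The sketch below is not Intel’s implementation, only a minimal illustration of that idea, assuming synthetic video frames and a hand-picked skin region; it averages the green channel per frame and recovers the dominant pulse frequency with an FFT.

```python
import numpy as np

def rppg_signal(frames, region):
    """Crude rPPG signal: mean green-channel intensity of a skin
    region in each frame. `frames` is a list of HxWx3 arrays;
    `region` is (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = region
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency(signal, fps):
    """Dominant frequency (Hz) of the detrended signal via FFT."""
    s = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Demo: synthesize 10 s of 30 fps frames whose green channel pulses
# at 1.2 Hz (~72 bpm), mimicking the color change blood flow causes.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = []
for ti in t:
    frame = np.full((64, 64, 3), 128.0)
    frame[..., 1] += 2.0 * np.sin(2 * np.pi * 1.2 * ti)  # faint pulse
    frames.append(frame)

sig = rppg_signal(frames, (16, 48, 16, 48))
bpm = dominant_frequency(sig, fps) * 60
print(round(bpm))  # 72, a physiologically plausible heart rate
```

A real detector would face far harder problems: locating skin regions, compensating for motion and lighting, and then deciding whether the recovered signal is physiologically consistent, since a synthesized face typically lacks a coherent pulse.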
FakeCatcher has a variety of potential applications, according to Intel. The technology might be used by social media sites to stop individuals from posting damaging fake videos. The detector could be used by international news organizations to prevent unintentionally amplifying manipulated videos. Additionally, nonprofit organizations could use the platform to democratize deepfake detection for all users.