Efficient Deepfake Identification
Deepfakes are fabricated video or audio recordings in which the depicted people look and sound deceptively real. With freely available tools, statements or entire interviews can be forged by virtually anyone. Politically motivated deepfakes could trigger crises or political scandals. What does it mean when we can no longer trust our own perception?
The technology behind them builds on a relatively new field of deep learning called Generative Adversarial Networks (GANs), which have significantly improved the quality and efficiency of realistic-looking fake images and videos of human faces that replace the person in the original footage. Even complete body motion can be transferred onto a video of a targeted person. Attackers can thus tamper with complex media streams very effectively to spread false information. Because GAN models are trained on huge datasets, the generated content can be spliced realistically into the original video.
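The core of GAN training is an adversarial game between two networks: a discriminator that learns to tell real samples from generated ones, and a generator that learns to fool it. As a minimal sketch (pure Python, no specific framework; the function names are illustrative, not from any library), the two competing losses can be written as:

```python
import math

def bce(p, label):
    # Binary cross-entropy for a single prediction p in (0, 1).
    eps = 1e-12  # numerical guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real samples scored near 1
    # and generated samples scored near 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # The generator wants the discriminator to score its fakes as real
    # (the commonly used non-saturating form of the generator objective).
    return bce(d_fake, 1.0)
```

Training alternates between the two: the discriminator's loss drops as it separates real from fake, which in turn raises the generator's loss and drives it to produce more convincing samples.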
While these techniques already work well and will advance toward super-realistic fake media in which almost anything is possible, detecting fakes is a pressing need for security, burden of proof, and non-repudiation (of evidence) in a smart society. Fraunhofer Singapore researches the viability and desirability of possible solutions. Through our research, we explore holistic detection and protection methods against manipulated visual media and deepfakes. We are working on combined methods and tools that operate on the physical level (e.g. verifying lighting conditions, shadows, and reflections), on the semantic level (e.g. checking the consistency of metadata), and on the signal level (e.g. physical unclonable functions (PUFs), TPM-based data integrity protection), as well as checking for unintended information propagated by the counterfeit video material (side channels of the fake media, e.g. wrong or absent eye blinking, the human pulse, etc.).
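One published example of such a side channel is eye blinking: early face-swap models were trained mostly on open-eyed photos and reproduced blinking poorly. Blink detection is often built on the eye aspect ratio (EAR) over six eye landmarks. The sketch below illustrates the EAR computation only; it is a simplified illustration with hand-made landmark coordinates, not the detection pipeline used in our tools, and a real system would obtain the landmarks from a face tracker.

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D landmark points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    # p1, p4: horizontal eye corners; p2, p3: upper lid; p6, p5: lower lid.
    # The ratio collapses toward 0 when the eye closes, so a sequence of
    # EAR values dipping below a threshold indicates a blink.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative landmarks (hypothetical coordinates, not tracker output):
open_eye   = eye_aspect_ratio((0, 0), (1, 1),   (3, 1),   (4, 0), (3, -1),   (1, -1))
closed_eye = eye_aspect_ratio((0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1))
```

A video in which the EAR never dips, i.e. the subject never blinks over minutes of footage, is a cheap red flag that complements the physical- and signal-level checks described above.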