In addition, the cybersecurity experts at Fraunhofer AISEC subject AI systems, such as those used in facial recognition or speech processing, to rigorous security checks. Using penetration tests, they analyze the systems' weak points and develop "hardened" security solutions that withstand deception attempts with "deepfakes." With methods such as "robust learning" and "adversarial learning", Fraunhofer AISEC gives the AI algorithms thicker armor, so to speak, and makes them more resilient, for example through a more complex design of the underlying models.
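The core idea behind adversarial learning can be sketched in a few lines: during training, inputs are deliberately perturbed in the direction that most increases the model's loss, and the model is then trained on these perturbed examples alongside the clean ones. The following minimal sketch uses a toy logistic-regression classifier and the Fast Gradient Sign Method (FGSM); the data, parameters, and model are purely illustrative assumptions, not Fraunhofer AISEC's actual methods, which are not public.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian clusters (illustrative only).
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(+1.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """Fast Gradient Sign Method: perturb each input in the direction
    that increases the loss, within an L-infinity budget eps."""
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.2

for step in range(500):
    # Adversarial training: mix clean and perturbed examples each step.
    X_adv = fgsm(X, y, w, b, eps)
    X_train = np.vstack([X, X_adv])
    y_train = np.concatenate([y, y])
    err = sigmoid(X_train @ w + b) - y_train
    w -= lr * (X_train.T @ err) / len(y_train)
    b -= lr * err.mean()

acc_clean = ((sigmoid(X @ w + b) > 0.5) == y).mean()
acc_adv = ((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y).mean()
print(f"clean accuracy: {acc_clean:.2f}, adversarial accuracy: {acc_adv:.2f}")
```

The point of the sketch is the training loop: because the model repeatedly sees worst-case perturbed inputs, small adversarial changes to its inputs no longer flip its decisions as easily.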
Use case insurance industry: "Deepfakes" deceive voice ID system
Banks, insurance companies, and mobile network operators increasingly offer customers the option of identifying themselves by voice during a call; the voice thus takes on the role of a password. Authentication by voice recognition may be more convenient than conventional authentication methods such as a PIN or password. However, in order to serve as a trustworthy and reliable alternative, the voice ID system must be robust and secure.
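Conceptually, a voice ID system of this kind compares a fixed-length "voiceprint" embedding of the caller's speech against an enrolled reference and accepts the caller when the similarity exceeds a threshold. The sketch below illustrates only this comparison logic; the embedding function is a stand-in (a fixed random projection), since real systems use trained speaker-embedding models, and all names and values are assumptions rather than any vendor's actual implementation.

```python
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Placeholder voiceprint: a real system maps audio to an embedding
    with a trained neural network. Here a fixed random projection
    (same seed on every call) stands in for that model."""
    rng = np.random.default_rng(42)
    proj = rng.normal(size=(len(audio), 64))
    v = audio @ proj
    return v / np.linalg.norm(v)

def verify(enrolled: np.ndarray, attempt: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Accept the caller if the cosine similarity between the enrolled
    voiceprint and the attempt's voiceprint exceeds the threshold."""
    return float(enrolled @ attempt) >= threshold

rng = np.random.default_rng(0)
voice = rng.normal(size=160)                        # stand-in for the user's audio
same_voice = voice + 0.05 * rng.normal(size=160)    # noisy repeat, same speaker
other_voice = rng.normal(size=160)                  # a different speaker

enrolled = embed(voice)
print(verify(enrolled, embed(same_voice)))          # genuine attempt
print(verify(enrolled, embed(other_voice)))         # impostor attempt
```

The threshold embodies the security trade-off the article describes: set too loosely, a sufficiently good audio deepfake of the target's voice lands above it and is accepted just like the genuine speaker.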
In the latest use case, the scientists at Fraunhofer showed that there is still some catching up to do in terms of security: in a penetration test, they successfully bypassed the voice recognition system (a so-called voice ID system) of a large German insurance company. Using about ten minutes of training material in the form of a recording of a public speech by the target person, a high-quality audio "deepfake" was created at Fraunhofer that deceived the security system and allowed access to the target person's personal account.