Trustworthy AI

Artificial Intelligence aims to resolve economic and social challenges by bringing innovation and digital transformation into every industry and into all aspects of social life. It changes how businesses operate, how nations are governed, and how people live. It is transforming every sector and creating entirely new opportunities, e.g. in Industrie 4.0, smart transportation, smart health, and smart living. 

However, as machine learning techniques advance, adoption is accelerating and AI is becoming more broadly deployed, opening new attack surfaces in modern information systems and infrastructures. New risks emerge with AI transformation, and security, privacy, and trust become basic prerequisites for responsible AI. In this context, it is critical to understand the potential vulnerabilities and threats, as well as to develop security architectures and defensive mechanisms for AI solutions. 

At Fraunhofer Singapore, our mission is to strengthen the future by building safe and open AI technology. We believe that resilience-by-design and technology that protects and validates AI will enable the secure deployment of powerful new AI technology and the protection of related business models. We therefore work on the building blocks for securing and protecting AI technologies and services, helping organisations in Singapore thrive in this new cognitive era. Our goal is to provide our clients with AI solutions, consulting, and AI engineering services that deliver security, reliability, trust, protection, and transparency.

Areas of Expertise and Portfolio

AI Safety and Defensible Machine Learning

Fraunhofer develops methods, techniques, and tools for robust AI. We evaluate adversarial and perturbation attacks against real-world systems, and we research the foundations of, and practical solutions for, protecting against data poisoning and preventing adversarial misclassification.
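
As an illustration of the perturbation attacks evaluated here, the sketch below applies the fast gradient sign method (FGSM) to a toy linear classifier. The model, weights, and epsilon are invented for the example and do not reflect any actual Fraunhofer tooling.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights and bias are arbitrary illustration values.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A correctly classified input.
x = np.array([2.0, 0.5, 1.0])          # score = 1.6 -> class 1

# FGSM-style perturbation: for a linear model, the gradient of the
# score w.r.t. the input is simply w, so stepping against its sign
# with a small epsilon pushes the score across the decision boundary.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)       # score = -1.2 -> class 0
```

The perturbation stays bounded in the infinity norm yet flips the prediction, which is exactly the failure mode adversarial robustness work tries to prevent.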

Privacy-Preserving Collaborative AI

Fraunhofer conducts research, develops enabling technologies, and provides building blocks for collaborative AI, combining security, privacy and data protection, and Federated Learning for strong know-how protection.
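
A minimal sketch of the Federated Learning idea referenced above: clients fit a shared model on private data, and only parameter updates ever leave their premises. The linear-regression task, client data, and unweighted averaging below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_step(weights, X, y, lr=0.05):
    """One gradient-descent step of linear regression on one
    client's private data; the raw data never leaves the client."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three clients, each holding a private dataset drawn from the
# same ground-truth model w_true (invented for this demo).
w_true = np.array([1.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

# Federated averaging: the server only ever sees model updates.
global_w = np.zeros(3)
for _ in range(200):
    local_ws = [local_step(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)
```

After a few hundred communication rounds the aggregated model approaches the ground truth, although no party ever shared its dataset.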


Trusted Cognitive Systems and Secure AI Infrastructures

Fraunhofer establishes the means for more robust AI and Machine Learning. We investigate and develop mechanisms that provide trust and protect cognitive systems and services across the entire lifecycle.

DeepFakes and DeepFake Detection

Fraunhofer reliably exposes audio and video manipulations ("deepfakes") with AI systems, detecting counterfeits securely and automatically. We also research methods to strengthen the robustness of systems that evaluate video and audio material.
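
One family of cues such detection research examines is spectral statistics, since generative models can leave atypical high-frequency artifacts. The toy feature below, with an invented cutoff and synthetic signals, measures how much of an image's energy sits above a radial frequency cutoff; it is a sketch of the idea, not the institute's detector.

```python
import numpy as np

def high_freq_energy_ratio(image):
    """Fraction of an image's spectral energy beyond a radial
    frequency cutoff (a crude, illustrative detection feature)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    cutoff = min(h, w) / 4
    return spectrum[radius > cutoff].sum() / spectrum.sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
smooth = np.outer(np.sin(t), np.cos(t))   # single low spatial frequency
noisy = rng.normal(size=(64, 64))         # energy spread across all bands
```

A real detector would learn such features from labeled genuine and manipulated material rather than rely on a single hand-picked statistic.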

Artificial Immune Systems

Fraunhofer researches the foundations of a new generation of AI and Machine Learning techniques, big data analytics, sensor fusion, and intelligence tools to detect, discover, assess, and defend against advanced cyber attacks on cyber-physical infrastructures in smart cities, smart mobility, autonomous driving, and Industrie 4.0. 
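
The "immune" metaphor is commonly made concrete via negative selection: random detectors are generated, those that match normal ("self") behavior are discarded, and whatever the survivors later match is flagged as non-self. The 2-D data, radii, and thresholds below are invented for a minimal sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Self" set: feature vectors describing normal system behavior.
self_set = rng.normal(0.0, 1.0, size=(200, 2))

def negative_selection(self_set, n_candidates=2000, self_radius=0.5):
    """Generate random detectors and keep only those that do not
    match any self sample (the negative-selection step)."""
    candidates = rng.uniform(-6, 6, size=(n_candidates, 2))
    keep = [d for d in candidates
            if np.linalg.norm(self_set - d, axis=1).min() > self_radius]
    return np.array(keep)

detectors = negative_selection(self_set)

def is_anomalous(x, detectors, match_radius=0.5):
    """Inputs matched by any surviving detector are flagged non-self."""
    return bool(np.linalg.norm(detectors - x, axis=1).min() < match_radius)
```

The appeal of the scheme is that it needs no examples of attacks at training time, only a characterization of normal behavior.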

AI Audit and Risk Assessment

Fraunhofer develops AI assessment methods and certification processes for companies and industry. It is important for AI applications to guarantee compliance with laws and regulations and to limit the risk to existing and conventional business processes. AI assessment and certification can serve to evaluate organizational, technical, legal, and ethical risks.

AI Transformation Playbook for Industry

Fraunhofer helps to identify and match AI opportunities and to develop AI transformation blueprints, technology roadmaps, and organisational and skills requirements. We provide implementation support and execute AI pilots to help our customers develop a solid and sustainable AI strategy. 


"Deepfakes" in practice

Check Out Our Spotlight and Demos.

Our Services and Offers

Fraunhofer Singapore aims to strengthen current and future AI and cognitive systems by building safe and open AI technology and by protecting Machine Learning and its underlying infrastructure against evolving threats. Our goal is to systematically improve the security of systems and products in close cooperation with our partners and customers. In doing so, we utilize state-of-the-art AI algorithms to comprehensively evaluate system reliability and to sustainably maintain reliability and robustness throughout the entire lifecycle.


Evaluate security

  • Evaluating AI-based security products, such as facial recognition cameras or audio systems for speech synthesis, voice recognition, or voice-based user recognition
  • Explainability of AI methods (Explainable AI)
  • Hardware reverse engineering and pentesting using artificial intelligence, e.g. side-channel attacks on embedded devices
  • Assessing the correctness of datasets, both against random errors (such as incorrect annotations) and against attacks (adversarial data poisoning)
  • Evaluating machine learning (ML) training pipelines: examining the correctness of the applied preprocessing methods, algorithms, and metrics
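
The dataset-correctness assessments above can be illustrated with a simple neighborhood check: a label that disagrees with most of its nearest neighbors is suspicious, whether it stems from a sloppy annotation or from label-flipping poisoning. The synthetic clusters and flipped indices below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two well-separated classes, then two labels flipped to simulate
# annotation errors or a label-flipping poisoning attempt.
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
               rng.normal(+2.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
flipped = np.array([3, 60])
y[flipped] = 1 - y[flipped]

def suspicious_labels(X, y, k=5):
    """Flag samples whose label disagrees with the majority of
    their k nearest neighbors."""
    flags = []
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]   # skip the point itself
        if np.mean(y[neighbors] == y[i]) < 0.5:
            flags.append(i)
    return flags

flags = suspicious_labels(X, y)
```

Flagged samples would then go to a human reviewer; production audits combine several such signals rather than a single k-NN vote.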

Design security

  • Implementation and further development of approaches from the field of Privacy-Preserving Machine Learning: training models on external, third-party datasets while maintaining the confidentiality of the datasets and models
  • Authentication and Human Machine Interface (HMI) Security
  • Support in the evaluation of security log files using Natural Language Processing
  • Information aggregation for system analysis and monitoring using ML-based analysis of data streams, log files and other data sources
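
One standard Privacy-Preserving Machine Learning building block of the kind listed above is the DP-SGD recipe: clip each per-example gradient's norm, then add Gaussian noise calibrated to the clipping bound. The clip norm and noise multiplier below are invented illustration values, not recommended settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1):
    """Clip a per-example gradient to clip_norm, then add Gaussian
    noise scaled to that bound (the core DP-SGD step)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped, clipped + noise

grad = np.array([3.0, 4.0])          # norm 5 -> clipped down to norm 1
clipped, private_grad = privatize_gradient(grad)
```

Clipping bounds any single example's influence, and the added noise is what yields a formal differential-privacy guarantee when accounted over training.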

Maintain security

  • Conception and prototyping of performance-aware, AI-assisted anomaly detection
  • Conception and prototyping of AI-assisted fraud detection
  • Situational awareness using imagery, text, and audio (including open source intelligence)
  • Development of algorithms in predictive security
  • Creation of automated solutions for implementing the GDPR (DSGVO) requirements
  • Seminar and training courses on AI for IT security
  • Development of detection algorithms for deepfake materials
  • Implementation of AI-based elements for IP protection
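
For the performance-aware anomaly detection listed above, a minimal streaming sketch: Welford's online mean/variance keeps memory constant per metric, and values far from the running mean are flagged. The threshold, warm-up length, and synthetic log-volume stream are invented for this demo.

```python
import numpy as np

class StreamingAnomalyDetector:
    """Flag values far from the running mean; Welford's online
    algorithm keeps memory constant per monitored metric."""

    def __init__(self, threshold=4.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.threshold = threshold
        self.warmup = warmup

    def update(self, x):
        """Return True if x is anomalous, then fold x into the stats."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / self.n) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        # Welford's update: numerically stable running mean/variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return bool(anomalous)

rng = np.random.default_rng(5)
det = StreamingAnomalyDetector()
# Normal traffic around 100 events/s, then a sudden surge.
flags = [det.update(v) for v in rng.normal(100.0, 2.0, size=200)]
spike_flag = det.update(150.0)
```

Because each update is O(1) in time and memory, the same pattern scales to high-volume log and sensor streams.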

Selected Solutions & Activities


Trustworthy AI

How to reliably detect and expose audio and video manipulations with AI


Project

Boosting Autonomous Vehicle Safety

In partnership with Continental Automotive and AI Singapore, Fraunhofer Singapore will spearhead the research project in 'Adversarial Attack and Defense Assessment for Autonomous Driving' as part of Singapore's national AI program, 100 Experiments.


Consulting & Training

Artificial Intelligence

  • Training and Consulting Portfolio
  • Seminar and training courses on AI for IT security