Context and Objectives

Today, Artificial Intelligence (AI) algorithms typically run in the cloud on clusters of processing units (CPUs and GPUs). Running AI algorithms locally on distributed Internet-of-Things (IoT) devices instead requires customized, efficient hardware platforms for AI (HW-AI). Hardware reliability is mandatory for achieving trustworthy AI in safety-critical and mission-critical applications, such as robotics, smart healthcare, and autonomous driving. The RE-TRUSTING project will therefore develop fault models and perform failure analysis of HW-AI to study its vulnerability, with the goal of "explaining" HW-AI. Explaining HW-AI means ensuring that the hardware is error-free and that it neither compromises the AI prediction accuracy nor biases AI decision-making. In this regard, the project aims at providing confidence and trust in AI-based decision-making by explaining the hardware on which AI algorithms are executed.

Team

Consortium:

  • INL: Alberto Bosio (Project Coordinator)
  • INRIA: Olivier Sentieys
  • LIP6: Haralampos Stratigopoulos
  • THALES: Nicolas Ventroux

Project identity

  • Starting date: 03/01/2022
  • Duration: 42 months
  • Grant number: ANR-21-CE24-0015
  • PRCE - Projets de recherche collaborative - Entreprises/Public