Today, Artificial Intelligence (AI) algorithms typically run in the cloud on clusters of processing units (CPUs and GPUs). Running AI algorithms locally and on distributed Internet-of-Things (IoT) devices requires customized, efficient hardware platforms for AI (HW-AI). Hardware reliability is mandatory for achieving trustworthy AI in safety-critical and mission-critical applications, such as robotics, smart healthcare, and autonomous driving. The RE-TRUSTING project will therefore develop fault models and perform failure analysis of HW-AI platforms to study their vulnerability, with the goal of "explaining" HW-AI. Explaining HW-AI means ensuring that the hardware is error-free, that it does not compromise AI prediction accuracy, and that it does not bias AI decision-making. In this regard, the project aims to provide confidence and trust in AI-based decision-making by explaining the hardware on which AI algorithms execute.
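To illustrate why hardware faults matter for AI prediction accuracy, the following is a minimal, hypothetical sketch (not taken from the project) of a common fault model: a single bit-flip in a float32 weight stored in memory. Flipping a high exponent bit can turn a small weight into an astronomically large one, corrupting the neuron's output; the `flip_bit` and `neuron` helpers are illustrative assumptions, not project code.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value, mimicking a memory/logic fault."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (faulty,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return faulty

def neuron(weights, inputs):
    """A single linear neuron: dot product of weights and inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0.5, -0.25, 1.0]
inputs = [1.0, 2.0, 3.0]

golden = neuron(weights, inputs)                      # fault-free ("golden") output
faulty_weights = [flip_bit(weights[0], 30)] + weights[1:]  # flip a high exponent bit
faulty = neuron(faulty_weights, inputs)

print("golden:", golden)
print("faulty:", faulty)  # dominated by the corrupted weight
```

Fault-injection campaigns like this, scaled up over many weights, bits, and inputs, are one standard way to quantify how vulnerable an AI accelerator's predictions are to hardware faults.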