Trustworthy image recognition systems (LIMPID)

A three-year interdisciplinary research program conducted by Télécom Paris and IDEMIA, funded in part by the Agence Nationale de la Recherche (ANR).

Artificial Intelligence is developing fast, yet it is still in its infancy. Its potential is considerable: it can help in health, in security, in industrial processes and much more. However, before its use can be recommended to citizens, or relied upon by professionals, AI needs a trust framework, comparable to the protocols that today give doctors, insurers and ultimately patients confidence in medicines. “The European approach for AI aims to promote Europe’s innovation capacity in the area of AI while supporting the development and uptake of ethical and trustworthy AI across the EU economy” (European Commission, 2020). According to the ethics guidelines for trustworthy AI published by the European Commission’s High-Level Expert Group on AI (HLEG), trustworthy AI requires reliability, robustness, fairness and explainability. Social and legal acceptability can be achieved only if the quality of the algorithms can be demonstrated, including their reliability, robustness, level of local interpretability, absence of discrimination, rate of false positives, and level of human control and oversight.

Our objective

Our high-level goal is to contribute to Trustworthy AI by Design in image recognition, including facial recognition. We will help formulate the legal and ethical requirements for reliability, fairness and local explainability, and then design or adapt tools to meet the identified requirements. We will then iteratively refine each proposed technical solution based on feedback from legal specialists.

We will pursue trustworthy image recognition systems by design by:

  • developing methods to test for bias and to apply bias-mitigation measures to image recognition systems, including facial recognition (a minimal sketch of such a test follows this list),
  • developing approaches to explainability by design for various image recognition use cases,
  • confronting bias mitigation and explainability by design with regulatory requirements for fair and explainable image recognition, and identifying gaps between those requirements and the proposed technical solutions.
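
By way of illustration only, and not as the LIMPID programme's published methodology, a minimal sketch of one such bias test might compare false match rates across demographic groups in a face verification setting. The sketch below is Python on synthetic data; every name (scores, same_identity, groups) and the 0.8 decision threshold are assumptions made for this example.

    # Illustrative bias test: compare false match rates (FMR) across
    # demographic groups for a hypothetical face verification system.
    # All data is synthetic; names and the threshold are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic verification trials: similarity scores, ground-truth labels
    # (1 = same identity, 0 = different identity) and a demographic group tag.
    scores = rng.uniform(0.0, 1.0, size=10_000)
    same_identity = rng.integers(0, 2, size=10_000)
    groups = rng.choice(["A", "B", "C"], size=10_000)

    THRESHOLD = 0.8  # decision threshold of the hypothetical matcher

    def false_match_rate(scores, labels, threshold):
        """FMR: fraction of different-identity pairs wrongly accepted."""
        impostor = labels == 0
        return float(np.mean(scores[impostor] >= threshold))

    # Per-group FMR: large disparities between groups signal a biased matcher.
    fmr_by_group = {
        g: false_match_rate(scores[groups == g], same_identity[groups == g], THRESHOLD)
        for g in np.unique(groups)
    }
    print(fmr_by_group)

    # One simple disparity measure: ratio of worst-case to best-case group FMR.
    disparity = max(fmr_by_group.values()) / min(fmr_by_group.values())
    print(f"max/min FMR ratio: {disparity:.2f}")

In practice, such a test would run on real verification trials rather than synthetic scores, report confidence intervals, and be repeated at the operating thresholds relevant to each use case.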


Three research pillars of trustworthiness

  • Reliability & Robustness
  • Fairness
  • Explainability

We’re hiring!

Two positions:

  • Post-doctoral researcher in data science

The post-doctoral researcher will work on improving reliability and explainability in image recognition systems within the Signal, Statistics and Machine Learning team and the Operational AI Ethics working group. They will develop novel algorithms for providing explainability by design and for estimating confidence levels on predictions, propose solutions that meet regulatory requirements for trustworthy recognition systems, and work closely with experts in law. A PhD in Machine Learning, Computer Vision or, more generally, AI/data science is required, together with an excellent track record of scientific achievements (publications and conference presentations).

  • Post-doctoral researcher or PhD candidate in AI ethics

The researcher will be part of an interdisciplinary team evaluating approaches to identifying and removing bias from image recognition algorithms, including facial recognition, as well as approaches to explainability, particularly in light of regulatory requirements for fair and explainable image recognition systems. The candidate should have a master's degree or PhD in law, economics, political science, or business, as well as a solid understanding of machine learning algorithms.

Application

To apply, please send by e-mail, with the subject line “Application for Research Position”, a single dossier containing a statement of research interest, a CV, copies of relevant certificates and a list of two references to Winston Maxwell (winston.maxwell@telecom-paris.fr), Stephan Clémençon (stephan.clemencon@telecom-paris.fr) and Florence d’Alché-Buc (florence.dalche@telecom-paris.fr).

Research publications by Télécom Paris

Forthcoming.

Other resources

Forthcoming.

Partners