Enrolment options

Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only because of the lack of transparency, but also because of possible biases that the algorithms inherit from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied the topic, producing different definitions, evaluation protocols, motivations, and results.

This course provides a reasoned introduction to the work on Explainable AI (XAI) to date and surveys the literature with a focus on post-hoc and by-design approaches. We motivate the need for XAI in real-world and large-scale applications, present state-of-the-art techniques and best practices, and discuss the many open challenges.
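To give a flavour of the post-hoc approaches the course covers, the following is a minimal sketch (not course material): a black-box model is trained on synthetic data in which the label depends almost entirely on the first feature, and permutation importance from scikit-learn is then used as a model-agnostic, post-hoc explanation of which features drive the predictions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: the label is (noisily) determined by feature 0 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Train an opaque, black-box model.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Post-hoc explanation: shuffle each feature and measure the drop in
# accuracy; features whose shuffling hurts most matter most to the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Here, feature 0 receives by far the highest importance, matching how the data was generated. Note that permutation importance explains the model's global behaviour; the course also covers local methods that explain individual decisions.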

  • In collaboration with: Roberto Pellungrini (Scuola Normale Superiore)
  • Estimated time: ≈ 2.5h
  • Prerequisites: Basic knowledge of machine learning methods and algorithms, statistical learning, and visual analytics.