People

Isabel Valera
Saarland Informatics Campus
Building E1 1, R. 225
For administrative services, contact ml-office@lists.saarland-informatics-campus.de
To apply for a PhD, PostDoc, HiWi, or thesis position, see the “Positions” page for the correct e-mail address to use.
Otherwise, contact ivalera@cs.uni-saarland.de.
About me
I am a full Professor of Machine Learning at the Department of Computer Science of Saarland University (Saarbrücken, Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken.
I am a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where I am part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.
Prior to this, I was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany) until the end of the year. I have held a Humboldt post-doctoral fellowship and a Minerva Fast Track fellowship from the Max Planck Society. I obtained my MSc degree in 2012 and my PhD in 2014 from the University Carlos III of Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).
Publications
2021
Mohammadi, Kiarash; Karimi, Amir-Hossein; Barthe, Gilles; Valera, Isabel
Scaling Guarantees for Nearest Counterfactual Explanations Proceedings Article
In: Fourcade, Marion; Kuipers, Benjamin; Lazar, Seth; Mulligan, Deirdre K. (Ed.): AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021, pp. 177–187, ACM, 2021.
@inproceedings{DBLP:conf/aies/MohammadiKBV21,
title = {Scaling Guarantees for Nearest Counterfactual Explanations},
author = {Kiarash Mohammadi and Amir-Hossein Karimi and Gilles Barthe and Isabel Valera},
editor = {Marion Fourcade and Benjamin Kuipers and Seth Lazar and Deirdre K. Mulligan},
url = {https://doi.org/10.1145/3461702.3462514},
doi = {10.1145/3461702.3462514},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual
Event, USA, May 19-21, 2021},
pages = {177--187},
publisher = {ACM},
abstract = {Counterfactual explanations (CFE) are being widely used to explain algorithmic decisions, especially in consequential decision-making contexts (e.g., loan approval or pretrial bail). In this context, CFEs aim to provide individuals affected by an algorithmic decision with the most similar individual (i.e., nearest individual) with a different outcome. However, while an increasing number of works propose algorithms to compute CFEs, such approaches either lack in optimality of distance (i.e., they do not return the nearest individual) and perfect coverage (i.e., they do not provide a CFE for all individuals); or they do not scale to complex models such as neural networks. In this work, we provide a framework based on Mixed-Integer Programming (MIP) to compute nearest counterfactual explanations for the outcomes of neural networks, with both provable guarantees and runtimes comparable to gradient-based approaches. Our experiments on the Adult, COMPAS, and Credit datasets show that, in contrast with previous methods, our approach allows for efficiently computing diverse CFEs with both distance guarantees and perfect coverage.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
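The entry above seeks the *nearest* counterfactual with *perfect coverage*. The two guarantees can be illustrated with a toy sketch (this is not the paper's MIP formulation; the classifier, grid, and numbers are invented for illustration): over a finite feature grid, exhaustive search trivially returns the closest point with the opposite prediction, and always finds one if any exists.

```python
from itertools import product

def predict(x):
    # Toy "model": approve iff income - 2*debt >= 1 (invented rule).
    return x[0] - 2 * x[1] >= 1

def nearest_counterfactual(x, grid, dist):
    """Exhaustive search: among all grid points with the opposite
    prediction, return the one closest to x. On a finite grid this
    gives optimal distance and perfect coverage by construction."""
    target = not predict(x)
    candidates = [c for c in product(*grid) if predict(c) == target]
    return min(candidates, key=lambda c: dist(x, c))

def l1(a, b):
    return sum(abs(u - v) for u, v in zip(a, b))

x = (1, 1)  # income=1, debt=1 -> rejected
cfe = nearest_counterfactual(x, [range(6), range(6)], l1)  # -> (1, 0)
```

Exhaustive search is exponential in the number of features; the paper's contribution is obtaining the same guarantees for neural networks at gradient-method runtimes via Mixed-Integer Programming.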
Karimi, Amir-Hossein; Schölkopf, Bernhard; Valera, Isabel
Algorithmic Recourse: from Counterfactual Explanations to Interventions Proceedings Article
In: Elish, Madeleine Clare; Isaac, William; Zemel, Richard S. (Ed.): FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021, pp. 353–362, ACM, 2021.
@inproceedings{DBLP:conf/fat/KarimiSV21,
title = {Algorithmic Recourse: from Counterfactual Explanations to Interventions},
author = {Amir-Hossein Karimi and Bernhard Schölkopf and Isabel Valera},
editor = {Madeleine Clare Elish and William Isaac and Richard S. Zemel},
url = {https://doi.org/10.1145/3442188.3445899},
doi = {10.1145/3442188.3445899},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {FAccT '21: 2021 ACM Conference on Fairness, Accountability, and
Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021},
pages = {353--362},
publisher = {ACM},
abstract = {As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision. Counterfactual explanations -"how the world would have (had) to be different for a desirable outcome to occur"- aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that ultimately, one of the main objectives is to allow people to act rather than just understand. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
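The core distinction in the entry above, between acting on a counterfactual explanation and performing a minimal intervention, can be sketched with a two-variable structural causal model (a toy example; the SCM, decision rule, and numbers are invented, not taken from the paper):

```python
def scm_forward(x1, u2):
    """Toy SCM: x2 is causally determined by x1 (x2 := x1 + u2)."""
    return x1 + u2

def predict(x1, x2):
    # Invented decision rule: favourable iff x1 + x2 >= 4.
    return x1 + x2 >= 4

# Factual individual: x1=1, exogenous noise u2=1 -> x2=2, unfavourable.
x1, u2 = 1, 1
x2 = scm_forward(x1, u2)

# A counterfactual explanation treats x2 as freely settable
# ("x1=2 and x2=2 would flip the outcome") and ignores that x2
# depends on x1. An intervention do(x1=2) propagates through the
# SCM instead: x2 becomes 3 and the outcome flips via one action.
new_x1 = 2
new_x2 = scm_forward(new_x1, u2)  # downstream effect of the intervention
```

The sketch shows why a counterfactual explanation tells an individual *where* to get to, while causal reasoning over interventions tells them *how*: acting on x1 alone already changes x2.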
2020
Karimi, Amir-Hossein; Barthe, Gilles; Schölkopf, Bernhard; Valera, Isabel
A survey of algorithmic recourse: definitions, formulations, solutions, and prospects Journal Article
In: CoRR, vol. abs/2010.04050, 2020.
@article{DBLP:journals/corr/abs-2010-04050,
title = {A survey of algorithmic recourse: definitions, formulations, solutions, and prospects},
author = {Amir-Hossein Karimi and Gilles Barthe and Bernhard Schölkopf and Isabel Valera},
url = {https://arxiv.org/abs/2010.04050},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
journal = {CoRR},
volume = {abs/2010.04050},
abstract = {Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role for the adoption and impact of said technologies. In this work, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. We first perform an extensive literature review, and align the efforts of many authors by presenting unified definitions, formulations, and solutions to recourse. Then, we provide an overview of the prospective research directions towards which the community may engage, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {article}
}
Karimi, Amir-Hossein; Barthe, Gilles; Balle, Borja; Valera, Isabel
Model-Agnostic Counterfactual Explanations for Consequential Decisions Proceedings Article
In: Chiappa, Silvia; Calandra, Roberto (Ed.): The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], pp. 895–905, PMLR, 2020.
@inproceedings{DBLP:conf/aistats/KarimiBBV20,
title = {Model-Agnostic Counterfactual Explanations for Consequential Decisions},
author = {Amir-Hossein Karimi and Gilles Barthe and Borja Balle and Isabel Valera},
editor = {Silvia Chiappa and Roberto Calandra},
url = {http://proceedings.mlr.press/v108/karimi20a.html},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {The 23rd International Conference on Artificial Intelligence and Statistics,
AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy]},
volume = {108},
pages = {895--905},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
abstract = {Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval. As a result, there is increasing social and legal pressure to provide explanations that help the affected individuals not only to understand why a prediction was output, but also how to act to obtain a desired outcome. To this end, several works have proposed optimization-based methods to generate nearest counterfactual explanations. However, these methods are often restricted to a particular subset of models (e.g., decision trees or linear models) and differentiable distance functions. In contrast, we build on standard theory and tools from formal verification and propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae. As shown by our experiments on real-world data, our algorithm is: i) model-agnostic ({non-}linear, {non-}differentiable, {non-}convex); ii) data-type-agnostic (heterogeneous features); iii) distance-agnostic (l0, l1, l∞, and combinations thereof); iv) able to generate plausible and diverse counterfactuals for any sample (i.e., 100% coverage); and v) at provably optimal distances.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
Karimi, Amir-Hossein; von Kügelgen, Julius; Schölkopf, Bernhard; Valera, Isabel
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach Proceedings Article
In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
@inproceedings{DBLP:conf/nips/KarimiKSV20,
title = {Algorithmic recourse under imperfect causal knowledge: a probabilistic approach},
author = {Amir-Hossein Karimi and Julius von Kügelgen and Bernhard Schölkopf and Isabel Valera},
editor = {Hugo Larochelle and Marc'Aurelio Ranzato and Raia Hadsell and Maria-Florina Balcan and Hsuan-Tien Lin},
url = {https://proceedings.neurips.cc/paper/2020/hash/02a3c7fb3f489288ae6942498498db20-Abstract.html},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference
on Neural Information Processing Systems 2020, NeurIPS 2020, December
6-12, 2020, virtual},
abstract = {Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration. Unfortunately, in practice, the true underlying structural causal model is generally unknown. In this work, we first show that it is impossible to guarantee recourse without access to the true structural equations. To address this limitation, we propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge (e.g., only the causal graph). The first captures uncertainty over structural equations under additive Gaussian noise, and uses Bayesian model averaging to estimate the counterfactual distribution. The second removes any assumptions on the structural equations by instead computing the average effect of recourse actions on individuals similar to the person who seeks recourse, leading to a novel subpopulation-based interventional notion of recourse. We then derive a gradient-based procedure for selecting optimal recourse actions, and empirically show that the proposed approaches lead to more reliable recommendations under imperfect causal knowledge than non-probabilistic baselines.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
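The first probabilistic approach in the entry above selects actions that achieve recourse with high probability when the structural equations are uncertain. A minimal Monte Carlo sketch of that idea (not the paper's Bayesian model-averaging procedure; the coefficient prior, decision rule, and threshold are all invented):

```python
import random

def recourse_success_prob(action, u2, trials=2000, seed=0):
    """Estimate P(favourable outcome | do(x1 = action)) when the
    structural equation x2 := w*x1 + u2 is only known up to an
    uncertain coefficient w ~ Uniform(0.5, 1.5)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        w = rng.uniform(0.5, 1.5)       # sample a plausible SCM
        x2 = w * action + u2            # simulate the downstream effect
        hits += (action + x2 >= 4)      # invented favourable-decision rule
    return hits / trials

# Pick the cheapest action whose estimated success probability clears
# a confidence threshold, i.e. recourse "with high probability".
u2 = 1
actions = [1.5, 2.0, 2.5, 3.0]
best = next(a for a in actions if recourse_success_prob(a, u2) >= 0.95)
```

Here the smallest action (1.5) succeeds only for roughly half of the sampled structural equations, so the procedure recommends the next-cheapest action that is reliable across the model uncertainty, which is exactly the trade-off the paper formalizes.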
