People

Isabel Valera
Saarland Informatics Campus
Building E1 1, R. 225
For administrative services, contact ml-office@lists.saarland-informatics-campus.de
To apply for PhD/PostDoc/HiWi/Thesis, see the information on the “Positions” page for the correct e-mail to use.
Otherwise, contact ivalera@cs.uni-saarland.de.
About me
I am a full Professor of Machine Learning at the Department of Computer Science of Saarland University (Saarbrücken, Germany), and Adjunct Faculty at the MPI for Software Systems (Saarbrücken, Germany).
I am a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where I am part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.
Prior to this, I was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). I have held a Humboldt post-doctoral fellowship and a Minerva Fast Track fellowship from the Max Planck Society. I obtained my PhD in 2014 and my MSc in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).
Publications
2023
Karimi, Amir-Hossein; Barthe, Gilles; Schölkopf, Bernhard; Valera, Isabel
A Survey of Algorithmic Recourse: Contrastive Explanations and Consequential Recommendations Journal Article
In: ACM Comput. Surv., vol. 55, no. 5, pp. 95:1–95:29, 2023.
@article{DBLP:journals/csur/KarimiBSV23,
title = {A Survey of Algorithmic Recourse: Contrastive Explanations and Consequential Recommendations},
author = {Amir-Hossein Karimi and Gilles Barthe and Bernhard Schölkopf and Isabel Valera},
url = {https://doi.org/10.1145/3527848},
doi = {10.1145/3527848},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {ACM Comput. Surv.},
volume = {55},
number = {5},
pages = {95:1--95:29},
abstract = {Machine learning is increasingly used to inform decision making in sensitive situations where decisions have consequential effects on individuals’ lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of said technologies. In this work, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavorably treated by automated decision-making systems. We first perform an extensive literature review, and align the efforts of many authors by presenting unified definitions, formulations, and solutions to recourse. Then, we provide an overview of the prospective research directions toward which the community may engage, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.},
keywords = {amir, isabel},
pubstate = {published},
tppubtype = {article}
}
2022
von Kügelgen, Julius; Karimi, Amir-Hossein; Bhatt, Umang; Valera, Isabel; Weller, Adrian; Schölkopf, Bernhard
On the Fairness of Causal Algorithmic Recourse Proceedings Article
In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, pp. 9584–9594, AAAI Press, 2022.
@inproceedings{DBLP:conf/aaai/KugelgenKBVWS22,
title = {On the Fairness of Causal Algorithmic Recourse},
author = {Julius von Kügelgen and Amir-Hossein Karimi and Umang Bhatt and Isabel Valera and Adrian Weller and Bernhard Schölkopf},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/21192},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI
2022, Thirty-Fourth Conference on Innovative Applications of Artificial
Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances
in Artificial Intelligence, EAAI 2022, Virtual Event, February 22
- March 1, 2022},
pages = {9584--9594},
publisher = {AAAI Press},
abstract = {Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fair-ness criteria at the group and individual level, which—unlike prior work on equalising the average group-wise distance from the decision boundary—explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to others, such as counterfactual fairness, and show that fairness of recourse is complementary to fairness of prediction. We study theoretically and empirically how to enforce fair causal recourse by altering the classifier and perform a case study on the Adult dataset. Finally, we discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions as opposed to constraints on the classifier.},
keywords = {amir, isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
2021
Mohammadi, Kiarash; Karimi, Amir-Hossein; Barthe, Gilles; Valera, Isabel
Scaling Guarantees for Nearest Counterfactual Explanations Proceedings Article
In: Fourcade, Marion; Kuipers, Benjamin; Lazar, Seth; Mulligan, Deirdre K. (Ed.): AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021, pp. 177–187, ACM, 2021.
@inproceedings{DBLP:conf/aies/MohammadiKBV21,
title = {Scaling Guarantees for Nearest Counterfactual Explanations},
author = {Kiarash Mohammadi and Amir-Hossein Karimi and Gilles Barthe and Isabel Valera},
editor = {Marion Fourcade and Benjamin Kuipers and Seth Lazar and Deirdre K. Mulligan},
url = {https://doi.org/10.1145/3461702.3462514},
doi = {10.1145/3461702.3462514},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual
Event, USA, May 19-21, 2021},
pages = {177--187},
publisher = {ACM},
abstract = {Counterfactual explanations (CFE) are being widely used to explain algorithmic decisions, especially in consequential decision-making contexts (e.g., loan approval or pretrial bail). In this context, CFEs aim to provide individuals affected by an algorithmic decision with the most similar individual (i.e., nearest individual) with a different outcome. However, while an increasing number of works propose algorithms to compute CFEs, such approaches either lack in optimality of distance (i.e., they do not return the nearest individual) and perfect coverage (i.e., they do not provide a CFE for all individuals); or they do not scale to complex models such as neural networks. In this work, we provide a framework based on Mixed-Integer Programming (MIP) to compute nearest counterfactual explanations for the outcomes of neural networks, with both provable guarantees and runtimes comparable to gradient-based approaches. Our experiments on the Adult, COMPAS, and Credit datasets show that, in contrast with previous methods, our approach allows for efficiently computing diverse CFEs with both distance guarantees and perfect coverage.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
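The entry above computes nearest counterfactual explanations with provable distance guarantees and perfect coverage via Mixed-Integer Programming. As a rough illustration of the nearest-counterfactual objective only (not the paper's MIP formulation; the classifier weights, feature grid, and all numbers below are invented), an exhaustive search over a discretized feature space looks like this:

```python
import itertools

def predict(x):
    # Hypothetical integer linear scorer over two features; threshold at 250.
    return 1 if 3 * x[0] + 2 * x[1] >= 250 else 0

def nearest_counterfactual(x0, grid):
    """Return the grid point closest to x0 in L1 distance whose predicted
    label differs from x0's. Exhaustiveness gives an exact distance
    guarantee and perfect coverage over the grid (at toy scale only)."""
    target = 1 - predict(x0)
    best, best_d = None, float("inf")
    for x in grid:
        if predict(x) == target:
            d = sum(abs(a - b) for a, b in zip(x, x0))
            if d < best_d:
                best, best_d = x, d
    return best, best_d

grid = list(itertools.product(range(0, 101, 5), repeat=2))
x0 = (30, 20)                      # factual point with an unfavourable label
cfe, dist = nearest_counterfactual(x0, grid)
```

Brute force scales exponentially in the number of features, which is exactly why the paper encodes the distance objective and the network constraints into a MIP and delegates the search to a solver.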
Karimi, Amir-Hossein; Schölkopf, Bernhard; Valera, Isabel
Algorithmic Recourse: from Counterfactual Explanations to Interventions Proceedings Article
In: Elish, Madeleine Clare; Isaac, William; Zemel, Richard S. (Ed.): FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021, pp. 353–362, ACM, 2021.
@inproceedings{DBLP:conf/fat/KarimiSV21,
title = {Algorithmic Recourse: from Counterfactual Explanations to Interventions},
author = {Amir-Hossein Karimi and Bernhard Schölkopf and Isabel Valera},
editor = {Madeleine Clare Elish and William Isaac and Richard S. Zemel},
url = {https://doi.org/10.1145/3442188.3445899},
doi = {10.1145/3442188.3445899},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {FAccT '21: 2021 ACM Conference on Fairness, Accountability, and
Transparency, Virtual Event / Toronto, Canada, March 3-10, 2021},
pages = {353--362},
publisher = {ACM},
abstract = {As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision. Counterfactual explanations -"how the world would have (had) to be different for a desirable outcome to occur"- aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that ultimately, one of the main objectives is to allow people to act rather than just understand. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
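The distinction drawn in the entry above, between counterfactual explanations and recourse through minimal interventions, can be seen in a toy example. The two-variable linear SCM, the classifier, and all numbers below are invented for illustration and are not from the paper:

```python
U1, U2 = 2.0, 1.0                     # exogenous noise of the individual

def scm(do_x1=None):
    """Generate (x1, x2); an intervention do(x1=v) overrides x1, but its
    effect still propagates to the causally downstream feature x2."""
    x1 = U1 if do_x1 is None else do_x1
    x2 = 0.5 * x1 + U2
    return x1, x2

def favourable(x1, x2):
    return x1 + x2 >= 6.0

x1, x2 = scm()                        # factual: (2.0, 2.0), unfavourable

# A nearest counterfactual explanation treats features as independent:
# the cheapest label flip raises either feature alone by 2 (L1 cost 2.0).
cfe_cost = 6.0 - (x1 + x2)

# A minimal intervention on the cause x1 is cheaper, because the
# downstream effect on x2 helps: do(x1 = 2 + a) yields score 4 + 1.5a,
# so a = 4/3 suffices (tiny slack added for float rounding).
a = (6.0 - (x1 + x2)) / 1.5 + 1e-9
assert favourable(*scm(do_x1=x1 + a))
```

Acting on the cause costs 4/3 instead of 2 here: the counterfactual explanation tells the individual where to get to, while the intervention accounts for how changes propagate.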
2020
Karimi, Amir-Hossein; Barthe, Gilles; Balle, Borja; Valera, Isabel
Model-Agnostic Counterfactual Explanations for Consequential Decisions Proceedings Article
In: Chiappa, Silvia; Calandra, Roberto (Ed.): The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], pp. 895–905, PMLR, 2020.
@inproceedings{DBLP:conf/aistats/KarimiBBV20,
title = {Model-Agnostic Counterfactual Explanations for Consequential Decisions},
author = {Amir-Hossein Karimi and Gilles Barthe and Borja Balle and Isabel Valera},
editor = {Silvia Chiappa and Roberto Calandra},
url = {http://proceedings.mlr.press/v108/karimi20a.html},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {The 23rd International Conference on Artificial Intelligence and Statistics,
AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy]},
volume = {108},
pages = {895--905},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
abstract = {Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval. As a result, there is increasing social and legal pressure to provide explanations that help the affected individuals not only to understand why a prediction was output, but also how to act to obtain a desired outcome. To this end, several works have proposed optimization-based methods to generate nearest counterfactual explanations. However, these methods are often restricted to a particular subset of models (e.g., decision trees or linear models) and differentiable distance functions. In contrast, we build on standard theory and tools from formal verification and propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae. As shown by our experiments on real-world data, our algorithm is: i) model-agnostic (non-linear, non-differentiable, non-convex); ii) data-type-agnostic (heterogeneous features); iii) distance-agnostic (ℓ0, ℓ1, ℓ∞, and combinations thereof); iv) able to generate plausible and diverse counterfactuals for any sample (i.e., 100% coverage); and v) at provably optimal distances.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
Karimi, Amir-Hossein; von Kügelgen, Bodo Julius; Schölkopf, Bernhard; Valera, Isabel
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach Proceedings Article
In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
@inproceedings{DBLP:conf/nips/KarimiKSV20,
title = {Algorithmic recourse under imperfect causal knowledge: a probabilistic approach},
author = {Amir-Hossein Karimi and Bodo Julius von Kügelgen and Bernhard Schölkopf and Isabel Valera},
editor = {Hugo Larochelle and Marc'Aurelio Ranzato and Raia Hadsell and Maria-Florina Balcan and Hsuan-Tien Lin},
url = {https://proceedings.neurips.cc/paper/2020/hash/02a3c7fb3f489288ae6942498498db20-Abstract.html},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference
on Neural Information Processing Systems 2020, NeurIPS 2020, December
6-12, 2020, virtual},
abstract = {Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration. Unfortunately, in practice, the true underlying structural causal model is generally unknown. In this work, we first show that it is impossible to guarantee recourse without access to the true structural equations. To address this limitation, we propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge (e.g., only the causal graph). The first captures uncertainty over structural equations under additive Gaussian noise, and uses Bayesian model averaging to estimate the counterfactual distribution. The second removes any assumptions on the structural equations by instead computing the average effect of recourse actions on individuals similar to the person who seeks recourse, leading to a novel subpopulation-based interventional notion of recourse. We then derive a gradient-based procedure for selecting optimal recourse actions, and empirically show that the proposed approaches lead to more reliable recommendations under imperfect causal knowledge than non-probabilistic baselines.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {inproceedings}
}
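The probabilistic recipe in the entry above, selecting actions that achieve recourse with high probability when the structural equations are uncertain, can be sketched with a Monte Carlo estimate. The noise model, scores, and candidate actions below are invented for illustration and are not the paper's estimators:

```python
import random

random.seed(0)

def outcome_after(action, noise):
    # Hypothetical effect model: baseline score 4.0, plus the action,
    # plus additive noise standing in for the unknown structural equations.
    return 4.0 + action + 0.3 * noise

def recourse_probability(action, n=10_000):
    """Monte Carlo estimate of P(score >= 6) under sampled noise."""
    hits = sum(outcome_after(action, random.gauss(0, 1)) >= 6.0
               for _ in range(n))
    return hits / n

def cheapest_reliable_action(candidates, threshold=0.9):
    """Smallest action (a proxy for cost) whose estimated recourse
    probability clears the confidence threshold."""
    for a in candidates:
        if recourse_probability(a) >= threshold:
            return a
    return None

best = cheapest_reliable_action([1.0, 1.5, 2.0, 2.5, 3.0])
```

An action that flips the deterministic score exactly (here 2.0, succeeding only half the time) is not enough once noise is accounted for; the threshold forces a costlier but reliable recommendation.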
Karimi, Amir-Hossein; Barthe, Gilles; Schölkopf, Bernhard; Valera, Isabel
A survey of algorithmic recourse: definitions, formulations, solutions, and prospects Journal Article
In: CoRR, vol. abs/2010.04050, 2020.
@article{DBLP:journals/corr/abs-2010-04050,
title = {A survey of algorithmic recourse: definitions, formulations, solutions, and prospects},
author = {Amir-Hossein Karimi and Gilles Barthe and Bernhard Schölkopf and Isabel Valera},
url = {https://arxiv.org/abs/2010.04050},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
journal = {CoRR},
volume = {abs/2010.04050},
abstract = {Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role for the adoption and impact of said technologies. In this work, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. We first perform an extensive literature review, and align the efforts of many authors by presenting unified definitions, formulations, and solutions to recourse. Then, we provide an overview of the prospective research directions towards which the community may engage, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.},
keywords = {amir, isabel, project-interpretableML},
pubstate = {published},
tppubtype = {article}
}
Karimi, Amir-Hossein; von Kügelgen, Julius; Schölkopf, Bernhard; Valera, Isabel
Towards Causal Algorithmic Recourse Proceedings Article
In: Holzinger, Andreas; Goebel, Randy; Fong, Ruth; Moon, Taesup; Müller, Klaus-Robert; Samek, Wojciech (Ed.): xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, pp. 139–166, Springer, 2020.
@inproceedings{DBLP:conf/icml/KarimiKSV20,
title = {Towards Causal Algorithmic Recourse},
author = {Amir-Hossein Karimi and Julius von Kügelgen and Bernhard Schölkopf and Isabel Valera},
editor = {Andreas Holzinger and Randy Goebel and Ruth Fong and Taesup Moon and Klaus-Robert Müller and Wojciech Samek},
url = {https://doi.org/10.1007/978-3-031-04083-2_8},
doi = {10.1007/978-3-031-04083-2_8},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {xxAI - Beyond Explainable AI - International Workshop, Held in Conjunction
with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended
Papers},
volume = {13200},
pages = {139--166},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
abstract = {Algorithmic recourse is concerned with aiding individuals who are unfavorably treated by automated decision-making systems to overcome their hardship, by offering recommendations that would result in a more favorable prediction when acted upon. Such recourse actions are typically obtained through solving an optimization problem that minimizes changes to the individual’s feature vector, subject to various plausibility, diversity, and sparsity constraints. Whereas previous works offer solutions to the optimization problem in a variety of settings, they critically overlook real-world considerations pertaining to the environment in which recourse actions are performed.
The present work emphasizes that changes to a subset of the individual’s attributes may have consequential down-stream effects on other attributes, thus making recourse a fundamentally causal problem. Here, we model such considerations using the framework of structural causal models, and highlight pitfalls of not considering causal relations through examples and theory. Such insights allow us to reformulate the optimization problem to directly optimize for minimally-costly recourse over a space of feasible actions (in the form of causal interventions) rather than optimizing for minimally-distant “counterfactual explanations”. We offer both the optimization formulations and solutions to deterministic and probabilistic recourse, on an individualized and sub-population level, overcoming the steep assumptive requirements of offering recourse in general settings. Finally, using synthetic and semi-synthetic experiments based on the German Credit dataset, we demonstrate how such methods can be applied in practice under minimal causal assumptions.},
keywords = {amir, isabel},
pubstate = {published},
tppubtype = {inproceedings}
}
