People

Isabel Valera
Saarland Informatics Campus
Building E1 1, R. 225
For administrative services, contact ml-office@lists.saarland-informatics-campus.de
To apply for a PhD, PostDoc, HiWi, or thesis position, see the “Positions” page for the correct e-mail address to use.
Otherwise, contact ivalera@cs.uni-saarland.de.
About me
I am a full Professor of Machine Learning at the Department of Computer Science of Saarland University (Saarbrücken, Germany), and Adjunct Faculty at the MPI for Software Systems (Saarbrücken, Germany).
I am a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where I am part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.
Prior to this, I was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). I have held a German Humboldt post-doctoral fellowship and a “Minerva Fast Track” fellowship from the Max Planck Society. I obtained my MSc degree in 2012 and my PhD in 2014 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).
Publications
2022
von Kügelgen, Julius; Karimi, Amir-Hossein; Bhatt, Umang; Valera, Isabel; Weller, Adrian; Schölkopf, Bernhard
On the Fairness of Causal Algorithmic Recourse Proceedings Article
In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, pp. 9584–9594, AAAI Press, 2022.
@inproceedings{DBLP:conf/aaai/KugelgenKBVWS22,
title = {On the Fairness of Causal Algorithmic Recourse},
author = {Julius von Kügelgen and Amir-Hossein Karimi and Umang Bhatt and Isabel Valera and Adrian Weller and Bernhard Schölkopf},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/21192},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022},
pages = {9584--9594},
publisher = {AAAI Press},
abstract = {Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fairness criteria at the group and individual level, which—unlike prior work on equalising the average group-wise distance from the decision boundary—explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to others, such as counterfactual fairness, and show that fairness of recourse is complementary to fairness of prediction. We study theoretically and empirically how to enforce fair causal recourse by altering the classifier and perform a case study on the Adult dataset. Finally, we discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions as opposed to constraints on the classifier.},
keywords = {amir, isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
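The recourse-fairness criteria above compare what it costs members of different groups to reverse an unfavourable decision once causal downstream effects are taken into account. A minimal Python sketch of that comparison, under a toy linear SCM and a fixed linear classifier that are entirely our own assumptions, might look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy SCM: x1 := 0.5*a + u1,  x2 := 0.8*x1 + u2 (x2 is downstream of x1).
n = 500
a = rng.integers(0, 2, n)                    # protected attribute
x1 = rng.normal(0.5 * a, 1.0)                # slight group shift in x1
x2 = 0.8 * x1 + rng.normal(0.0, 0.5, n)
X = np.column_stack([x1, x2])

w, b = np.array([1.0, 1.0]), -1.0            # a fixed linear classifier
favourable = X @ w + b > 0

def recourse_cost(i):
    """Smallest action d on x1 flipping i's decision, with the effect on x2
    propagated through the SCM (abduction of x2's noise term)."""
    u2 = x2[i] - 0.8 * x1[i]
    for d in np.linspace(0.0, 3.0, 61):
        counterfactual = np.array([x1[i] + d, 0.8 * (x1[i] + d) + u2])
        if counterfactual @ w + b > 0:
            return d
    return np.inf

costs = {g: np.mean([recourse_cost(i) for i in np.where(~favourable & (a == g))[0]])
         for g in (0, 1)}
print("mean recourse cost per group:", costs)  # a large gap flags unfair recourse
```

Propagating the action through the SCM, rather than tweaking x1 in isolation, is what distinguishes causal recourse from feature-wise distance to the boundary.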
Rateike, Miriam; Majumdar, Ayan; Mineeva, Olga; Gummadi, Krishna P.; Valera, Isabel
Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making Proceedings Article
In: FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21 - 24, 2022, pp. 1421–1433, ACM, 2022.
@inproceedings{DBLP:conf/fat/RateikeMMGV22,
title = {Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making},
author = {Miriam Rateike and Ayan Majumdar and Olga Mineeva and Krishna P. Gummadi and Isabel Valera},
url = {https://doi.org/10.1145/3531146.3533199},
doi = {10.1145/3531146.3533199},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21 - 24, 2022},
pages = {1421--1433},
publisher = {ACM},
abstract = {Decision making algorithms, in practice, are often trained on data that exhibits a variety of biases. Decision-makers often aim to take decisions based on some ground-truth target that is assumed or expected to be unbiased, i.e., equally distributed across socially salient groups. In many practical settings, the ground-truth cannot be directly observed, and instead, we have to rely on a biased proxy measure of the ground-truth, i.e., biased labels, in the data. In addition, data is often selectively labeled, i.e., even the biased labels are only observed for a small fraction of the data that received a positive decision. To overcome label and selection biases, recent work proposes to learn stochastic, exploring decision policies via i) online training of new policies at each time-step and ii) enforcing fairness as a constraint on performance. However, the existing approach uses only labeled data, disregarding a large amount of unlabeled data, and thereby suffers from high instability and variance in the learned decision policies at different times. In this paper, we propose a novel method based on a variational autoencoder for practical fair decision-making. Our method learns an unbiased data representation leveraging both labeled and unlabeled data and uses the representations to learn a policy in an online process. Using synthetic data, we empirically validate that our method converges to the optimal (fair) policy according to the ground-truth with low variance. In real-world experiments, we further show that our training approach not only offers a more stable learning process but also yields policies with higher fairness as well as utility than previous approaches.},
keywords = {ayanm, decision making, fair representation, fairness, isabel, label bias, miriam, project-fairml, selection bias, variational autoencoder},
pubstate = {published},
tppubtype = {inproceedings}
}
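The method combines a variational autoencoder trained on all data with a label head trained only where (possibly biased) labels exist. The following is a loose PyTorch sketch of that idea, not the authors' architecture; the layer sizes, losses, and the FairVAE name are our own illustrative choices:

```python
import torch
import torch.nn as nn

class FairVAE(nn.Module):
    """VAE over features x; a small label head is trained only on labeled data."""
    def __init__(self, x_dim, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU())
        self.mu = nn.Linear(32, z_dim)
        self.logvar = nn.Linear(32, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
        self.clf = nn.Linear(z_dim, 1)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar, self.clf(z).squeeze(-1)

def elbo_loss(model, x, y=None):
    """Unlabeled batches pass y=None; labeled ones add a prediction loss."""
    x_hat, mu, logvar, logit = model(x)
    rec = ((x - x_hat) ** 2).sum(-1).mean()                    # reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1).mean()
    total = rec + kl
    if y is not None:                                          # y: float 0/1 tensor
        total = total + nn.functional.binary_cross_entropy_with_logits(logit, y)
    return total
```

Because the reconstruction and KL terms apply to every batch, the representation keeps improving even when no labels arrive, which is the sense in which unlabeled data is not "thrown away".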
2021
Schöffer, Jakob; Kuehl, Niklas; Valera, Isabel
A Ranking Approach to Fair Classification Proceedings Article
In: COMPASS '21: ACM SIGCAS Conference on Computing and Sustainable Societies, Virtual Event, Australia, 28 June 2021 - 2 July 2021, pp. 115–125, ACM, 2021.
@inproceedings{DBLP:conf/dev/SchofferKV21,
title = {A Ranking Approach to Fair Classification},
author = {Jakob Schöffer and Niklas Kuehl and Isabel Valera},
url = {https://doi.org/10.1145/3460112.3471950},
doi = {10.1145/3460112.3471950},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {COMPASS '21: ACM SIGCAS Conference on Computing and Sustainable Societies, Virtual Event, Australia, 28 June 2021 - 2 July 2021},
pages = {115--125},
publisher = {ACM},
abstract = {Algorithmic decision systems are increasingly used in areas such as hiring, school admission, or loan approval. Typically, these systems rely on labeled data for training a classification model. However, in many scenarios, ground-truth labels are unavailable, and instead we have only access to imperfect labels as the result of (potentially biased) human-made decisions. Despite being imperfect, historical decisions often contain some useful information on the unobserved true labels. In this paper, we focus on scenarios where only imperfect labels are available and propose a new fair ranking-based decision system based on monotonic relationships between legitimate features and the outcome. Our approach is both intuitive and easy to implement, and thus particularly suitable for adoption in real-world settings. More in detail, we introduce a distance-based decision criterion, which incorporates useful information from historical decisions and accounts for unwanted correlation between protected and legitimate features. Through extensive experiments on synthetic and real-world data, we show that our method is fair in the sense that a) it assigns the desirable outcome to the most qualified individuals, and b) it removes the effect of stereotypes in decision-making, thereby outperforming traditional classification algorithms. Additionally, we are able to show theoretically that our method is consistent with a prominent concept of individual fairness which states that “similar individuals should be treated similarly.”},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
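To illustrate the flavour of such a distance-based, monotonic decision criterion, here is a hedged sketch (our reading, not the authors' code): legitimate features are residualized against the protected attribute to remove unwanted correlation, scored monotonically, and the positive outcome goes to the top-k ranked individuals. The function name and the simple linear residualization are our assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fair_rank_decide(X_legit, protected, k, weights=None):
    """Grant the positive outcome to the k highest-ranked individuals."""
    p = protected.reshape(-1, 1).astype(float)
    # Remove the linear effect of the protected attribute from each feature.
    resid = X_legit - LinearRegression().fit(p, X_legit).predict(p)
    w = np.ones(resid.shape[1]) if weights is None else weights
    score = resid @ w                        # monotone in each legitimate feature
    decision = np.zeros(len(score), dtype=bool)
    decision[np.argsort(-score)[:k]] = True
    return decision
```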
2020
Kilbertus, Niki; Rodriguez, Manuel Gomez; Schölkopf, Bernhard; Muandet, Krikamol; Valera, Isabel
Fair Decisions Despite Imperfect Predictions Proceedings Article
In: Chiappa, Silvia; Calandra, Roberto (Ed.): The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], pp. 277–287, PMLR, 2020.
@inproceedings{DBLP:conf/aistats/KilbertusRSMV20,
title = {Fair Decisions Despite Imperfect Predictions},
author = {Niki Kilbertus and Manuel Gomez Rodriguez and Bernhard Schölkopf and Krikamol Muandet and Isabel Valera},
editor = {Silvia Chiappa and Roberto Calandra},
url = {http://proceedings.mlr.press/v108/kilbertus20a.html},
year = {2020},
date = {2020-01-01},
urldate = {2020-01-01},
booktitle = {The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy]},
volume = {108},
pages = {277--287},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
abstract = {Consequential decisions are increasingly informed by sophisticated data-driven predictive models. However, consistently learning accurate predictive models requires access to ground truth labels. Unfortunately, in practice, labels may only exist conditional on certain decisions—if a loan is denied, there is not even an option for the individual to pay back the loan. In this paper, we show that, in this selective labels setting, learning to predict is suboptimal in terms of both fairness and utility. To avoid this undesirable behavior, we propose to directly learn stochastic decision policies that maximize utility under fairness constraints. In the context of fair machine learning, our results suggest the need for a paradigm shift from "learning to predict" to "learning to decide". Experiments on synthetic and real-world data illustrate the favorable properties of learning to decide, in terms of both utility and fairness.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
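The shift from "learning to predict" to "learning to decide" can be made concrete with a small simulation: a stochastic logistic policy is updated online, and the ground-truth label is revealed only when the decision is positive (the selective labels setting). This is our own simplified REINFORCE-style rendering, without the paper's fairness constraint, just to show the selective-label training loop:

```python
import numpy as np

rng = np.random.default_rng(1)
d, theta, lr, cost = 3, np.zeros(3), 0.05, 0.3    # cost of a positive decision

def true_outcome(x):                              # hidden ground-truth model
    return rng.random() < 1.0 / (1.0 + np.exp(-x @ np.array([1.0, -0.5, 0.8])))

for t in range(5000):
    x = rng.normal(size=d)
    p = 1.0 / (1.0 + np.exp(-x @ theta))          # stochastic policy P(decide=1|x)
    if rng.random() < p:                          # positive decision taken...
        utility = float(true_outcome(x)) - cost   # ...so the label is revealed
        theta += lr * utility * (1.0 - p) * x     # REINFORCE: grad log p = (1-p) x
    # negative decisions reveal nothing: the selective labels setting

print("learned decision-policy weights:", theta)
```

The stochasticity of the policy is what keeps exploration alive; a deterministic predictor trained on the selectively observed labels could lock in its early mistakes.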
2019
Zafar, Muhammad Bilal; Valera, Isabel; Gomez-Rodriguez, Manuel; Gummadi, Krishna P.
Fairness Constraints: A Flexible Approach for Fair Classification Journal Article
In: J. Mach. Learn. Res., vol. 20, pp. 75:1–75:42, 2019.
@article{DBLP:journals/jmlr/ZafarVGG19,
title = {Fairness Constraints: A Flexible Approach for Fair Classification},
author = {Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez-Rodriguez and Krishna P. Gummadi},
url = {http://jmlr.org/papers/v20/18-262.html},
year = {2019},
date = {2019-01-01},
urldate = {2019-01-01},
journal = {J. Mach. Learn. Res.},
volume = {20},
pages = {75:1--75:42},
abstract = {Algorithmic decision making is employed in an increasing number of real-world applications to aid human decision making. While it has shown considerable promise in terms of improved decision accuracy, in some scenarios, its outcomes have also been shown to impose an unfair (dis)advantage on people from certain social groups (e.g., women, blacks). In this context, there is a need for computational techniques to limit unfairness in algorithmic decision making. In this work, we take a step forward to fulfill that need and introduce a flexible constraint-based framework to enable the design of fair margin-based classifiers. The main technical innovation of our framework is a general and intuitive measure of decision boundary unfairness, which serves as a tractable proxy to several of the most popular computational definitions of unfairness from the literature. Leveraging our measure, we can reduce the design of fair margin-based classifiers to adding tractable constraints on their decision boundaries. Experiments on multiple synthetic and real-world datasets show that our framework is able to successfully limit unfairness, often at a small cost in terms of accuracy.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {article}
}
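The paper's tractable proxy bounds the empirical covariance between the sensitive attribute and the signed distance to the decision boundary. A minimal cvxpy sketch of logistic regression under that constraint (our rendering; the threshold c and function signature are illustrative) could read:

```python
import numpy as np
import cvxpy as cp

def fair_logreg(X, y, z, c=0.1):
    """Logistic regression (labels y in {-1, +1}) subject to
    |Cov(z, theta^T x)| <= c, the decision-boundary covariance proxy."""
    n, d = X.shape
    theta = cp.Variable(d)
    nll = cp.sum(cp.logistic(-cp.multiply(y, X @ theta)))   # logistic loss
    cov = ((z - z.mean()) @ (X @ theta)) / n                # empirical covariance
    problem = cp.Problem(cp.Minimize(nll), [cov <= c, cov >= -c])
    problem.solve()
    return theta.value
```

Because the covariance is affine in theta, the constrained problem stays convex, which is what makes the proxy tractable.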
Adel, Tameem; Valera, Isabel; Ghahramani, Zoubin; Weller, Adrian
One-Network Adversarial Fairness Proceedings Article
In: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pp. 2412–2420, AAAI Press, 2019.
@inproceedings{DBLP:conf/aaai/AdelVGW19,
title = {One-Network Adversarial Fairness},
author = {Tameem Adel and Isabel Valera and Zoubin Ghahramani and Adrian Weller},
url = {https://doi.org/10.1609/aaai.v33i01.33012412},
doi = {10.1609/aaai.v33i01.33012412},
year = {2019},
date = {2019-01-01},
urldate = {2019-01-01},
booktitle = {The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019},
pages = {2412--2420},
publisher = {AAAI Press},
abstract = {There is currently a great expansion of the impact of machine learning algorithms on our lives, prompting the need for objectives other than pure performance, including fairness. Fairness here means that the outcome of an automated decision-making system should not discriminate between subgroups characterized by sensitive attributes such as gender or race. Given any existing differentiable classifier, we make only slight adjustments to the architecture including adding a new hidden layer, in order to enable the concurrent adversarial optimization for fairness and accuracy. Our framework provides one way to quantify the tradeoff between fairness and accuracy, while also leading to strong empirical performance.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
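One common way to realize such concurrent adversarial optimization in a single network is a gradient-reversal layer: the forward pass is the identity, while the backward pass flips gradients so the shared representation is pushed to hide the sensitive attribute. The PyTorch sketch below uses that device; it is our hedged illustration of the idea, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad                  # identity forward, flipped gradient backward

class OneNetFair(nn.Module):
    def __init__(self, d, h=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d, h), nn.ReLU())  # shared layers
        self.task = nn.Linear(h, 1)   # predicts the label
        self.adv = nn.Linear(h, 1)    # tries to recover the sensitive attribute

    def forward(self, x):
        r = self.body(x)
        # Minimizing BOTH heads' losses trains the body for accuracy while the
        # reversed gradient pushes it to hide the sensitive attribute.
        return self.task(r), self.adv(GradReverse.apply(r))
```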
2018
Valera, Isabel; Singla, Adish; Rodriguez, Manuel Gomez
Enhancing the Accuracy and Fairness of Human Decision Making Proceedings Article
In: Bengio, Samy; Wallach, Hanna M.; Larochelle, Hugo; Grauman, Kristen; Cesa-Bianchi, Nicolò; Garnett, Roman (Ed.): Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 1774–1783, 2018.
@inproceedings{DBLP:conf/nips/ValeraSR18,
title = {Enhancing the Accuracy and Fairness of Human Decision Making},
author = {Isabel Valera and Adish Singla and Manuel Gomez Rodriguez},
editor = {Samy Bengio and Hanna M. Wallach and Hugo Larochelle and Kristen Grauman and Nicolò Cesa-Bianchi and Roman Garnett},
url = {https://proceedings.neurips.cc/paper/2018/hash/0a113ef6b61820daa5611c870ed8d5ee-Abstract.html},
year = {2018},
date = {2018-01-01},
urldate = {2018-01-01},
booktitle = {Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada},
pages = {1774--1783},
abstract = {Societies often rely on human experts to take a wide variety of decisions affecting their members, from jail-or-release decisions taken by judges and stop-and-frisk decisions taken by police officers to accept-or-reject decisions taken by academics. In this context, each decision is taken by an expert who is typically chosen uniformly at random from a pool of experts. However, these decisions may be imperfect due to limited experience, implicit biases, or faulty probabilistic reasoning. Can we improve the accuracy and fairness of the overall decision making process by optimizing the assignment between experts and decisions?
In this paper, we address the above problem from the perspective of sequential decision making and show that, for different fairness notions from the literature, it reduces to a sequence of (constrained) weighted bipartite matchings, which can be solved efficiently using algorithms with approximation guarantees. Moreover, these algorithms also benefit from posterior sampling to actively trade off exploitation---selecting expert assignments which lead to accurate and fair decisions---and exploration---selecting expert assignments to learn about the experts' preferences and biases. We demonstrate the effectiveness of our algorithms on both synthetic and real-world data and show that they can significantly improve both the accuracy and fairness of the decisions taken by pools of experts.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
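The core reduction, matching experts to decisions so as to maximize total (fairness-adjusted) quality, can be solved exactly with the Hungarian algorithm. A minimal sketch with randomly generated quality estimates, which in the paper would instead be refined online via posterior sampling:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n_experts, n_cases = 5, 5
# quality[i, j]: estimated (fairness-adjusted) utility of expert i on case j.
quality = rng.random((n_experts, n_cases))

rows, cols = linear_sum_assignment(-quality)     # Hungarian: maximize total quality
for i, j in zip(rows, cols):
    print(f"expert {i} -> case {j} (quality {quality[i, j]:.2f})")
```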
2017
Zafar, Muhammad Bilal; Valera, Isabel; Gomez-Rodriguez, Manuel; Gummadi, Krishna P.
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment Proceedings Article
In: Barrett, Rick; Cummings, Rick; Agichtein, Eugene; Gabrilovich, Evgeniy (Ed.): Proceedings of the 26th International Conference on World Wide Web, WWW 2017, Perth, Australia, April 3-7, 2017, pp. 1171–1180, ACM, 2017.
@inproceedings{DBLP:conf/www/ZafarVGG17,
title = {Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment},
author = {Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez-Rodriguez and Krishna P. Gummadi},
editor = {Rick Barrett and Rick Cummings and Eugene Agichtein and Evgeniy Gabrilovich},
url = {https://doi.org/10.1145/3038912.3052660},
doi = {10.1145/3038912.3052660},
year = {2017},
date = {2017-01-01},
urldate = {2017-01-01},
booktitle = {Proceedings of the 26th International Conference on World Wide Web, WWW 2017, Perth, Australia, April 3-7, 2017},
pages = {1171--1180},
publisher = {ACM},
abstract = {Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
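Disparate mistreatment itself is straightforward to measure: compare false-positive and false-negative rates across groups. A small helper (our own, not the authors' code) makes the definition concrete:

```python
import numpy as np

def disparate_mistreatment(y_true, y_pred, z):
    """FPR and FNR gaps between groups z == 0 and z == 1; gaps near zero
    mean no disparate mistreatment in the paper's sense."""
    def rate(err, base, g):
        sel = base & (z == g)
        return err[sel].mean() if sel.any() else 0.0
    fp = (y_pred == 1) & (y_true == 0)
    fn = (y_pred == 0) & (y_true == 1)
    return {"FPR_gap": abs(rate(fp, y_true == 0, 0) - rate(fp, y_true == 0, 1)),
            "FNR_gap": abs(rate(fn, y_true == 1, 0) - rate(fn, y_true == 1, 1))}
```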
Zafar, Muhammad Bilal; Valera, Isabel; Gomez-Rodriguez, Manuel; Gummadi, Krishna P.
Fairness Constraints: Mechanisms for Fair Classification Proceedings Article
In: Singh, Aarti; Zhu, Xiaojin (Jerry) (Ed.): Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, pp. 962–970, PMLR, 2017.
@inproceedings{DBLP:conf/aistats/ZafarVGG17,
title = {Fairness Constraints: Mechanisms for Fair Classification},
author = {Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez-Rodriguez and Krishna P. Gummadi},
editor = {Aarti Singh and Xiaojin (Jerry) Zhu},
url = {http://proceedings.mlr.press/v54/zafar17a.html},
year = {2017},
date = {2017-01-01},
urldate = {2017-01-01},
booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA},
volume = {54},
pages = {962--970},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
abstract = {Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services. These systems rely on complex learning methods and vast amounts of data to optimize the service functionality, satisfaction of the end user and profitability. However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., their outcomes can disproportionately hurt (or, benefit) particular groups of people sharing one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness. We instantiate this mechanism with two well-known classifiers, logistic regression and support vector machines, and show on real-world data that our mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
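The AISTATS paper instantiates the same boundary-covariance constraint for margin-based classifiers; here is a hedged cvxpy sketch of the linear SVM variant, where sweeping the bound c gives the fine-grained fairness-accuracy control the abstract describes (parameter names are our own):

```python
import numpy as np
import cvxpy as cp

def fair_linear_svm(X, y, z, c=0.1, C=1.0):
    """Linear SVM (labels y in {-1, +1}) with |Cov(z, theta^T x)| <= c."""
    n, d = X.shape
    theta, b = cp.Variable(d), cp.Variable()
    hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ theta + b)))
    cov = ((z - z.mean()) @ (X @ theta)) / n
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(theta) + C * hinge),
                         [cp.abs(cov) <= c])     # sweep c for the trade-off
    problem.solve()
    return theta.value, b.value
```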
Zafar, Muhammad Bilal; Valera, Isabel; Gomez-Rodriguez, Manuel; Gummadi, Krishna P.; Weller, Adrian
From Parity to Preference-based Notions of Fairness in Classification Proceedings Article
In: Guyon, Isabelle; Luxburg, Ulrike; Bengio, Samy; Wallach, Hanna M.; Fergus, Rob; Vishwanathan, S. V. N.; Garnett, Roman (Ed.): Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 229–239, 2017.
@inproceedings{DBLP:conf/nips/ZafarVGGW17,
title = {From Parity to Preference-based Notions of Fairness in Classification},
author = {Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez-Rodriguez and Krishna P. Gummadi and Adrian Weller},
editor = {Isabelle Guyon and Ulrike Luxburg and Samy Bengio and Hanna M. Wallach and Rob Fergus and S. V. N. Vishwanathan and Roman Garnett},
url = {https://proceedings.neurips.cc/paper/2017/hash/82161242827b703e6acf9c726942a1e4-Abstract.html},
year = {2017},
date = {2017-01-01},
urldate = {2017-01-01},
booktitle = {Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA},
pages = {229--239},
abstract = {The adoption of automated, data-driven decision making in an ever expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness -- given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.},
keywords = {isabel, project-fairml},
pubstate = {published},
tppubtype = {inproceedings}
}
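The preference-based notion is an envy-freeness condition: each group should obtain at least as much group benefit from its own classifier as it would from any other group's. A compact checker (our illustrative helper; "group benefit" is taken here to be the positive-decision rate) might look like:

```python
import numpy as np

def group_benefit(clf, X_group):
    """Fraction of a group receiving the beneficial outcome under clf."""
    return float(np.mean(clf(X_group) > 0))

def is_envy_free(classifiers, X_by_group):
    """classifiers and X_by_group are dicts keyed by group."""
    for g, Xg in X_by_group.items():
        own = group_benefit(classifiers[g], Xg)
        for h in classifiers:
            if h != g and group_benefit(classifiers[h], Xg) > own:
                return False      # group g would prefer group h's classifier
    return True
```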
