People

Huyen Khanh Vo
Saarland Informatics Campus
Building E1.1, Room 2.22.1
About me
I am a PhD student in the CS@Max Planck PhD program, advised by Prof. Isabel Valera. My research focuses on the optimization challenges in training probabilistic multimodal generative models.
Previously, I was a Research Resident at the FPT Software AI Center, one of only two AI residency programs in Vietnam, where I worked with Prof. Tan M. Nguyen and Dr. Thieu N. Vo.
In 2023, I graduated from the Honors Program at the School of Information and Communication Technology, Hanoi University of Science & Technology (HUST), with a Bachelor's degree in Computer Science. During my time there, I was a research student at the Data Science Laboratory, advised by Dr. Linh V. Ngo and Prof. Khoat Q. Than.
Publications
2022
Sánchez-Martín, Pablo; Rateike, Miriam; Valera, Isabel
VACA: Designing Variational Graph Autoencoders for Causal Queries Proceedings Article
In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, pp. 8159–8168, AAAI Press, 2022.
@inproceedings{DBLP:conf/aaai/Sanchez-MartinR22b,
title = {VACA: Designing Variational Graph Autoencoders for Causal Queries},
author = {Pablo Sánchez-Martín and Miriam Rateike and Isabel Valera},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/20789},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI
2022, Thirty-Fourth Conference on Innovative Applications of Artificial
Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances
in Artificial Intelligence, EAAI 2022, Virtual Event, February 22
- March 1, 2022},
pages = {8159--8168},
publisher = {AAAI Press},
abstract = {In this paper, we introduce VACA, a novel class of variational graph autoencoders for causal inference in the absence of hidden confounders, when only observational data and the causal graph are available. Without making any parametric assumptions, VACA mimics the necessary properties of a Structural Causal Model (SCM) to provide a flexible and practical framework for approximating interventions (do-operator) and abduction-action-prediction steps. As a result, and as shown by our empirical results, VACA accurately approximates the interventional and counterfactual distributions on diverse SCMs. Finally, we apply VACA to evaluate counterfactual fairness in fair classification problems, as well as to learn fair classifiers without compromising performance.},
keywords = {isabel, miriam, pablo},
pubstate = {published},
tppubtype = {inproceedings}
}
Rateike, Miriam; Majumdar, Ayan; Mineeva, Olga; Gummadi, Krishna P.; Valera, Isabel
Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making Proceedings Article
In: FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21 - 24, 2022, pp. 1421–1433, ACM, 2022.
@inproceedings{DBLP:conf/fat/RateikeMMGV22,
title = {Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making},
author = {Miriam Rateike and Ayan Majumdar and Olga Mineeva and Krishna P. Gummadi and Isabel Valera},
url = {https://doi.org/10.1145/3531146.3533199},
doi = {10.1145/3531146.3533199},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
booktitle = {FAccT '22: 2022 ACM Conference on Fairness, Accountability, and
Transparency, Seoul, Republic of Korea, June 21 - 24, 2022},
pages = {1421--1433},
publisher = {ACM},
abstract = {unbiased, i.e., equally distributed across socially salient groups. In many practical settings, the ground-truth cannot be directly observed, and instead, we have to rely on a biased proxy measure of the ground-truth, i.e., biased labels, in the data. In addition, data is often selectively labeled, i.e., even the biased labels are only observed for a small fraction of the data that received a positive decision. To overcome label and selection biases, recent work proposes to learn stochastic, exploring decision policies via i) online training of new policies at each time-step and ii) enforcing fairness as a constraint on performance. However, the existing approach uses only labeled data, disregarding a large amount of unlabeled data, and thereby suffers from high instability and variance in the learned decision policies at different times. In this paper, we propose a novel method based on a variational autoencoder for practical fair decision-making. Our method learns an unbiased data representation leveraging both labeled and unlabeled data and uses the representations to learn a policy in an online process. Using synthetic data, we empirically validate that our method converges to the optimal (fair) policy according to the ground-truth with low variance. In real-world experiments, we further show that our training approach not only offers a more stable learning process but also yields policies with higher fairness as well as utility than previous approaches.},
keywords = {ayanm, decision making, fair representation, fairness, isabel, label bias, miriam, project-fairml, selection bias, variational autoencoder},
pubstate = {published},
tppubtype = {inproceedings}
}
2021
Schrouff, Jessica; Dieng, Awa; Rateike, Miriam; Kwegyir-Aggrey, Kweku; Farnadi, Golnoosh
Algorithmic Fairness through the Lens of Causality and Robustness (AFCR) 2021 Proceedings Article
In: Schrouff, Jessica; Dieng, Awa; Rateike, Miriam; Kwegyir-Aggrey, Kweku; Farnadi, Golnoosh (Ed.): Algorithmic Fairness through the Lens of Causality and Robustness Workshop, AFCR 2021, virtual, December 13, 2021, pp. 1–5, PMLR, 2021.
@inproceedings{DBLP:conf/afci/SchrouffDRKF21,
title = {Algorithmic Fairness through the Lens of Causality and Robustness (AFCR) 2021},
author = {Jessica Schrouff and Awa Dieng and Miriam Rateike and Kweku Kwegyir-Aggrey and Golnoosh Farnadi},
editor = {Jessica Schrouff and Awa Dieng and Miriam Rateike and Kweku Kwegyir-Aggrey and Golnoosh Farnadi},
url = {https://proceedings.mlr.press/v171/schrouff22a.html},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
booktitle = {Algorithmic Fairness through the Lens of Causality and Robustness
Workshop, AFCR 2021, virtual, December 13, 2021},
volume = {171},
pages = {1--5},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
keywords = {miriam},
pubstate = {published},
tppubtype = {inproceedings}
}
