Society-Aware Machine Learning

Isabel Valera has recently been awarded an ERC Starting Grant for her project "Society-Aware Machine Learning: The paradigm shift demanded by society to trust machine learning." The project will officially start in early 2023, and we will be seeking PhD students and postdoctoral researchers to join the team!

Fair Machine Learning

Our research group has pioneered the methodological foundations of fairness in ML and, especially, algorithmic solutions to the limitations of common practices in this research area, which often come at a high societal cost in terms of unfairness.


Interpretable Machine Learning

Machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval). In these settings, explainability plays an important role in the adoption and impact of these technologies. In particular, when algorithmic decisions affect individuals, it becomes important to explain how the system arrived at its decision, and also to suggest actions to achieve a favorable decision.
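The idea of suggesting actions to obtain a favorable decision can be illustrated with a minimal counterfactual-explanation sketch. The linear credit-scoring model, its weights, and the gradient search below are illustrative assumptions, not our group's method: we look for a nearby input that crosses the decision threshold.

```python
import numpy as np

# Hypothetical linear scoring model: score = sigmoid(w @ x + b).
# Weights and features (income, debt) are illustrative only.
w = np.array([2.0, -1.0])
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def counterfactual(x, target=0.5, lr=0.1, steps=200):
    """Gradient search for a nearby input whose score crosses the threshold."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict(x_cf)
        if p >= target:
            break
        # Gradient of the sigmoid output w.r.t. the input is p * (1 - p) * w.
        x_cf += lr * p * (1 - p) * w
    return x_cf

x = np.array([0.2, 0.8])   # applicant initially denied (score below 0.5)
x_cf = counterfactual(x)
print(predict(x), predict(x_cf))
```

The difference x_cf - x can then be read as a suggested action ("increase income, reduce debt"); real recourse methods additionally constrain which features are actionable and at what cost.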

Robust Generative Models for Real-world Data

A core goal of our research is to develop robust ML methods that can properly handle realistic assumptions and, especially, the complex nature of real-world data. In this context, we have worked extensively on ML for mixed continuous and discrete data (e.g., tabular data), as well as on missing data (NeurIPS’15; ICML’17; AAAI’19; JMLR’20)—e.g., to the best of our knowledge, HI-VAE (Pattern Recognition’20) was the first deep generative model to handle both heterogeneous and missing data.
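A key ingredient in handling mixed-type data with missingness is to give each feature its own likelihood and to let missing entries drop out of the objective. The toy sketch below is an illustrative assumption, far simpler than HI-VAE itself: one Gaussian (continuous) and one Bernoulli (binary) feature, with a binary mask selecting which entries contribute to the log-likelihood.

```python
import numpy as np

def masked_log_lik(x, mask, mu, sigma, p):
    """Factorized log-likelihood of a two-feature row.
    x[0]: continuous feature (Gaussian); x[1]: binary feature (Bernoulli).
    mask: 1 = observed, 0 = missing; parameters mu, sigma, p are illustrative."""
    ll_cont = -0.5 * (np.log(2 * np.pi * sigma**2) + ((x[0] - mu) / sigma) ** 2)
    ll_bin = x[1] * np.log(p) + (1 - x[1]) * np.log(1 - p)
    # Missing entries contribute nothing to the objective.
    return mask[0] * ll_cont + mask[1] * ll_bin

x = np.array([1.3, 1.0])      # (continuous, binary) observation
mask = np.array([1.0, 0.0])   # binary feature is missing
print(masked_log_lik(x, mask, mu=1.0, sigma=0.5, p=0.7))
```

In a deep generative model, the same masking is applied to the decoder's per-feature log-likelihood terms, so training uses only the observed entries while the latent code can still impute the missing ones.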