People

Deborah D. Kanubala
Saarland Informatics Campus
Building E1.1, Room 2.21
About me
Hello, I am Deborah D. Kanubala, a Ph.D. student working on developing fair machine learning models under the supervision of Prof. Dr. Isabel Valera. My research interests include fairness, causality, and interpretability.
I am a co-organizer of WiMLDS Accra-Ghana and also contribute to the organisation of the Deep Learning Indaba.
For more details, visit my website: https://kanubalad.github.io/
Publications
2026
Majumdar, Ayan; Kanubala, Deborah Dormah; Gupta, Kavya; Valera, Isabel
A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination Journal Article
In: CoRR, vol. abs/2503.22454, 2026.
@article{DBLP:journals/corr/abs-2503-22454,
title = {A Causal Framework to Measure and Mitigate Non-binary Treatment Discrimination},
author = {Ayan Majumdar and Deborah Dormah Kanubala and Kavya Gupta and Isabel Valera},
url = {https://doi.org/10.48550/arXiv.2503.22454},
doi = {10.48550/ARXIV.2503.22454},
year = {2026},
date = {2026-03-19},
urldate = {2026-03-19},
journal = {CoRR},
volume = {abs/2503.22454},
abstract = {Fairness studies of algorithmic decision-making systems often simplify complex decision processes, such as bail or lending decisions, into binary classification tasks (e.g., approve or not approve). However, these approaches overlook that such decisions are not inherently binary; they also involve non-binary treatment decisions (e.g., loan or bail terms) that can influence the downstream outcomes (e.g., loan repayment or reoffending). We argue that treatment decisions are integral to the decision-making process and, therefore, should be central to fairness analyses. Consequently, we propose a causal framework that extends and complements existing fairness notions by explicitly distinguishing between decision-subjects’ covariates and the treatment decisions. Our framework leverages path-specific counterfactual reasoning to: (i) measure treatment disparity and its downstream effects in historical data; and (ii) mitigate the impact of past unfair treatment decisions when automating decision-making. We use our framework to empirically analyze four widely used loan approval datasets to reveal potential disparity in non-binary treatment decisions and their discriminatory impact on outcomes, highlighting the need to incorporate treatment decisions in fairness assessments. Finally, by intervening in treatment decisions, we show that our framework effectively mitigates treatment discrimination from historical loan approval data to ensure fair risk score estimation and (non-binary) decision-making processes that benefit all stakeholders.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
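The core idea above, that disparity can hide in non-binary treatment decisions such as loan amounts even when approval rates look fair, can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual causal framework: the income covariate, loan amounts, and bias term are all invented for illustration.

```python
import random

random.seed(0)

# Toy setup: a non-binary treatment (loan amount) that depends on a
# sensitive attribute A even after controlling for the only legitimate
# covariate (an income bracket).
def sample(n=10_000):
    rows = []
    for _ in range(n):
        a = random.randint(0, 1)           # sensitive attribute
        income = random.randint(0, 2)      # legitimate covariate
        base = 5_000 + 2_000 * income      # fair component of the loan
        bias = -1_000 * a                  # discriminatory component
        loan = base + bias + random.gauss(0, 100)
        rows.append((a, income, loan))
    return rows

rows = sample()

# Treatment disparity: mean loan gap between the groups *within* each
# income bracket, so the legitimate covariate is held fixed.
for income in range(3):
    g0 = [loan for a, inc, loan in rows if a == 0 and inc == income]
    g1 = [loan for a, inc, loan in rows if a == 1 and inc == income]
    gap = sum(g0) / len(g0) - sum(g1) / len(g1)
    print(f"income bracket {income}: loan gap ~ {gap:.0f}")
```

A binary approve/deny audit of this data would find nothing, because both groups are "approved" equally often; the roughly 1,000-unit within-bracket gap only surfaces once the treatment itself is examined.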
2025
Azime, Israel Abebe; Kanubala, Deborah Dormah; Afonja, Tejumade; Fritz, Mario; Valera, Isabel; Klakow, Dietrich; Slusallek, Philipp
Accept or Deny? Evaluating LLM Fairness and Performance in Loan Approval across Table-to-Text Serialization Approaches Journal Article
In: CoRR, vol. abs/2508.21512, 2025.
@article{DBLP:journals/corr/abs-2508-21512,
title = {Accept or Deny? Evaluating LLM Fairness and Performance in Loan Approval across Table-to-Text Serialization Approaches},
author = {Israel Abebe Azime and Deborah Dormah Kanubala and Tejumade Afonja and Mario Fritz and Isabel Valera and Dietrich Klakow and Philipp Slusallek},
url = {https://doi.org/10.48550/arXiv.2508.21512},
doi = {10.48550/ARXIV.2508.21512},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {CoRR},
volume = {abs/2508.21512},
abstract = {Large Language Models (LLMs) are increasingly employed in high-stakes decision-making tasks, such as loan approvals. While their applications expand across domains, LLMs struggle to process tabular data, ensure fairness, and deliver reliable predictions. In this work, we assess the performance and fairness of LLMs on serialized loan approval datasets from three geographically distinct regions: Ghana, Germany, and the United States. Our evaluation focuses on the model’s zero-shot and in-context learning (ICL) capabilities. Our results reveal that the choice of serialization format significantly affects both performance and fairness in LLMs, with certain formats such as GReaT and LIFT yielding higher F1 scores but exacerbating fairness disparities. Notably, while ICL improved model performance by 4.9-59.6% relative to zero-shot baselines, its effect on fairness varied considerably across datasets. Our work underscores the importance of effective tabular data representation methods and fairness-aware models to improve the reliability of LLMs in financial decision-making.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
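Table-to-text serialization, the variable under study in this paper, simply means turning one tabular record into a string an LLM can read. The sketch below shows two generic templates; the field names are hypothetical, and GReaT and LIFT define their own exact formats, so treat this only as an approximation of the idea.

```python
# One hypothetical loan-application record (field names invented).
row = {"age": 35, "income": 4200, "credit_history": "good", "loan_amount": 9000}

def serialize_list(r):
    # Plain list template: "age: 35, income: 4200, ..."
    return ", ".join(f"{k}: {v}" for k, v in r.items())

def serialize_sentences(r):
    # GReaT-style natural-language template: "age is 35. income is 4200. ..."
    return " ".join(f"{k} is {v}." for k, v in r.items())

# A zero-shot prompt built from the serialized record.
prompt = (
    "Decide whether to approve the loan application below.\n"
    + serialize_sentences(row)
    + "\nAnswer with Accept or Deny."
)
print(prompt)
```

The paper's finding is that this seemingly cosmetic choice between such templates shifts both F1 scores and fairness gaps.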
Kanubala, Deborah Dormah; Valera, Isabel
On the Misalignment Between Legal Notions and Statistical Metrics of Intersectional Fairness Journal Article
In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2025.
@article{Kanubala2025OnTM,
title = {On the Misalignment Between Legal Notions and Statistical Metrics of Intersectional Fairness},
author = {Deborah Dormah Kanubala and Isabel Valera},
url = {https://api.semanticscholar.org/CorpusID:282175621},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society},
abstract = {Intersectional (un)fairness, as conceptualized in legal and social theory, emphasizes the non-additive and structurally complex nature of discrimination against individuals at the intersection of multiple sensitive attributes (such as race, gender, etc.). Recent works have proposed statistical metrics for intersectional fairness by estimating disparities across groups of individuals sharing two or more sensitive attributes. However, it is unclear if these metrics detect uniquely intersectional discrimination. We therefore pose the following question: Do current statistical intersectional metrics detect the non-additive discrimination highlighted by intersectionality theory? More specifically, to answer this, we run controlled synthetic data experiments that explicitly allow us to control for single, multiple, intersectional, and compounded forms of discrimination. Our analyses show that current statistical metrics for intersectional fairness behave more like multi-attribute disparity measures. Specifically, they respond more strongly to additive or compounded biases than to non-additive interaction effects. While they effectively capture disparities across multiple sensitive attributes, they often fail to detect uniquely intersectional discrimination. These findings reveal a fundamental misalignment between existing intersectional fairness metrics and the legal and theoretical foundations of intersectionality. We argue that if intersectional fairness metrics are to be deemed truly intersectional, they must be explicitly designed to account for the structural, non-additive nature of intersectional discrimination.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
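The additive-versus-non-additive distinction at the heart of this paper can be sketched with synthetic data. This is a minimal illustration under invented parameters, not the paper's experimental setup: with purely additive penalties on two attributes, the intersectional positive-rate gap is large yet contains no interaction effect, so a metric that only reads the subgroup gap cannot tell the two cases apart.

```python
import random

random.seed(1)

# Purely *additive* bias on two sensitive attributes A and B:
# each membership independently lowers the positive-outcome rate by 0.1,
# with no interaction (no uniquely intersectional) term.
rows = []
for _ in range(100_000):
    a = random.randint(0, 1)
    b = random.randint(0, 1)
    p = 0.6 - 0.1 * a - 0.1 * b
    rows.append((a, b, random.random() < p))

def positive_rate(cond):
    sel = [y for a, b, y in rows if cond(a, b)]
    return sum(sel) / len(sel)

gap_a  = positive_rate(lambda a, b: a == 0) - positive_rate(lambda a, b: a == 1)
gap_b  = positive_rate(lambda a, b: b == 0) - positive_rate(lambda a, b: b == 1)
gap_ab = positive_rate(lambda a, b: (a, b) == (0, 0)) - positive_rate(lambda a, b: (a, b) == (1, 1))

# gap_ab ~ gap_a + gap_b: the intersectional subgroup gap is large even
# though the data contains no non-additive discrimination at all.
print(f"gap A: {gap_a:.3f}, gap B: {gap_b:.3f}, intersectional gap: {gap_ab:.3f}")
```

A subgroup-disparity metric flags the (1, 1) group here, but only because the two single-attribute penalties compound; detecting *uniquely* intersectional discrimination would require separating the interaction term from this additive baseline.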
Ayvaz, Deniz Sezin; Belenguer, Lorenzo; He, Hankun; Kanubala, Deborah Dormah; Li, Mingxu; Low, Soung; Mougan, Carlos; Onwuegbuche, Faithful Chiagoziem; Pi, Yulu; Sikora, Natalia; Tran, Dan; Verma, Shresth; Wang, Hanzhi; Xie, Skyler; Pelletier, Adeline
Measuring Fairness in Financial Transaction Machine Learning Models Journal Article
In: CoRR, vol. abs/2501.10784, 2025.
@article{DBLP:journals/corr/abs-2501-10784,
title = {Measuring Fairness in Financial Transaction Machine Learning Models},
author = {Deniz Sezin Ayvaz and Lorenzo Belenguer and Hankun He and Deborah Dormah Kanubala and Mingxu Li and Soung Low and Carlos Mougan and Faithful Chiagoziem Onwuegbuche and Yulu Pi and Natalia Sikora and Dan Tran and Shresth Verma and Hanzhi Wang and Skyler Xie and Adeline Pelletier},
url = {https://doi.org/10.48550/arXiv.2501.10784},
doi = {10.48550/ARXIV.2501.10784},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {CoRR},
volume = {abs/2501.10784},
abstract = {Mastercard, a global leader in financial services, develops and deploys machine learning models aimed at optimizing card usage and preventing attrition through advanced predictive models. These models use aggregated and anonymized card usage patterns, including cross-border transactions and industry-specific spending, to tailor bank offerings and maximize revenue opportunities. Mastercard has established an AI Governance program, based on its Data and Tech Responsibility Principles, to evaluate any built and bought AI for efficacy, fairness, and transparency. As part of this effort, Mastercard has sought expertise from the Turing Institute through a Data Study Group to better assess fairness in more complex AI/ML models. The Data Study Group challenge lies in defining, measuring, and mitigating fairness in these predictions, which can be complex due to the various interpretations of fairness, gaps in the research literature, and ML-operations challenges.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Andrews, Kenya S.; Kanubala, Deborah Dormah; Aruleba, Kehinde D.; Castro, Francisco Enrique Vicente; Revelo, Renata A.
A Justice Lens on Fairness and Ethics Courses in Computing Education: LLM-Assisted Multi-Perspective and Thematic Evaluation Journal Article
In: CoRR, vol. abs/2510.18931, 2025.
@article{DBLP:journals/corr/abs-2510-18931,
title = {A Justice Lens on Fairness and Ethics Courses in Computing Education: LLM-Assisted Multi-Perspective and Thematic Evaluation},
author = {Kenya S. Andrews and Deborah Dormah Kanubala and Kehinde D. Aruleba and Francisco Enrique Vicente Castro and Renata A. Revelo},
url = {https://doi.org/10.48550/arXiv.2510.18931},
doi = {10.48550/ARXIV.2510.18931},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {CoRR},
volume = {abs/2510.18931},
abstract = {Course syllabi set the tone and expectations for courses, shaping the learning experience for both students and instructors. In computing courses, especially those addressing fairness and ethics in artificial intelligence (AI), machine learning (ML), and algorithmic design, it is imperative that we understand how approaches to navigating barriers to fair outcomes are being taught. These expectations should be inclusive, transparent, and grounded in promoting critical thinking. Syllabus analysis offers a way to evaluate the coverage, depth, practices, and expectations within a course. Manual syllabus evaluation, however, is time-consuming and prone to inconsistency. To address this, we developed a justice-oriented scoring rubric and asked a large language model (LLM) to review syllabi through a multi-perspective role simulation. Using this rubric, we evaluated 24 syllabi from four perspectives: instructor, departmental chair, institutional reviewer, and external evaluator. We also prompted the LLM to identify thematic trends across the courses. Findings show that multi-perspective evaluation aids us in noting nuanced, role-specific priorities, leveraging them to fill hidden gaps in curricula design of AI/ML and related computing courses focused on fairness and ethics. These insights offer concrete directions for improving the design and delivery of fairness, ethics, and justice content in such courses.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2024
Kanubala, Deborah Dormah; Valera, Isabel; Gupta, Kavya
Fairness Beyond Binary Decisions: a Case Study on German Credit Proceedings Article
In: Cerrato, Mattia; Coronel, Alesia Vallenas; Ahrweiler, Petra; Loi, Michele; Pechenizkiy, Mykola; Tamò-Larrieux, Aurelia (Ed.): Proceedings of the 3rd European Workshop on Algorithmic Fairness, Mainz, Germany, July 1st to 3rd, 2024, CEUR-WS.org, 2024.
@inproceedings{DBLP:conf/ewaf/KanubalaVG24,
title = {Fairness Beyond Binary Decisions: a Case Study on German Credit},
author = {Deborah Dormah Kanubala and Isabel Valera and Kavya Gupta},
editor = {Mattia Cerrato and Alesia Vallenas Coronel and Petra Ahrweiler and Michele Loi and Mykola Pechenizkiy and Aurelia Tamò-Larrieux},
url = {https://ceur-ws.org/Vol-3908/paper_15.pdf},
year = {2024},
date = {2024-01-01},
urldate = {2024-01-01},
booktitle = {Proceedings of the 3rd European Workshop on Algorithmic Fairness, Mainz, Germany, July 1st to 3rd, 2024},
volume = {3908},
publisher = {CEUR-WS.org},
series = {CEUR Workshop Proceedings},
abstract = {Data-driven approaches are increasingly used to (partially) automate decision-making in credit scoring by predicting whether an applicant is “creditworthy or not” based on a set of features about the applicant, such as age and income, along with what we refer here to as treatment decisions, e.g., loan amount and duration. Existing data-driven approaches for automating and evaluating the accuracy and fairness of such credit decisions ignore that treatment decisions (here, loan terms) are part of the decision and thus may be subject to discrimination. This discrimination can propagate to the final outcome (repaid or not) of positive decisions (granted loans). In this extended abstract, we rely on causal reasoning and a broadly studied fair machine-learning dataset, the German credit, to i) show that the current fair data-driven approach neglects discrimination in treatment decisions (i.e., loan terms) and its downstream consequences on the decision outcome (i.e., ability to repay); and ii) argue for the need to move beyond binary decisions in fair data-driven decision-making in consequential settings like credit scoring.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2023
Gadosey, Pius; Kanubala, Deborah; Sonna, Belona
AI Ethics Education for Future African Leaders Book Chapter
In: pp. 87-101, 2023, ISBN: 978-3-031-23034-9.
@inbook{GadoseyKanubalaSonna2023,
title = {AI Ethics Education for Future African Leaders},
author = {Pius Gadosey and Deborah Kanubala and Belona Sonna},
doi = {10.1007/978-3-031-23035-6_7},
isbn = {978-3-031-23034-9},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
pages = {87-101},
abstract = {From the Greek word “ethos”, which means custom, habit or character, the word ethics can mean and has been defined in many different ways by ethics and morality theorists.},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
2021
Amegadzie, Julius; Kanubala, Deborah; Cobbina, K. A.; Acquaye, Christabel
State and Future Prospects of Artificial Intelligence (AI) in Ghana Proceedings Article
In: pp. 1-10, 2021.
@inproceedings{AmegadzieKanubalaCobbinaAcquaye2021,
title = {State and Future Prospects of Artificial Intelligence (AI) in Ghana},
author = {Julius Amegadzie and Deborah Kanubala and K. A. Cobbina and Christabel Acquaye},
doi = {10.22624/AIMS/iSTEAMS-2021/V27P1},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
pages = {1-10},
abstract = {This paper aims to give a broad scope of the current state of AI in Ghana. The paper highlights the existing institutions leveraging AI technologies, points out some current challenges with regards to AI adoption, and identifies some exciting prospects of AI given the current state of the country.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
