Explainable Fairness in Recommendation

International Conference on Research and Development in Information Retrieval (ACM SIGIR)


Existing research on fairness-aware recommendation has mainly focused on the quantification of fairness and the development of fair recommendation models, neither of which studies a more substantial problem: identifying the source of model disparities in recommendation. This information is critical for recommender system designers to understand the intrinsic recommendation mechanism, and it offers decision makers insights on how to improve model fairness. Fortunately, with the rapid development of explainable AI, we are able to use model explainability to gain insights into model (un)fairness. In this paper, we study the problem of explainable fairness in recommendation, as we believe this type of study can motivate and guide the design of fair recommender systems with a more principled and unified methodology. In particular, we focus on a common setting: feature-aware recommendation under popularity bias. We propose a counterfactual explainable fairness framework, called CEF, which generates explanations that are able to improve fairness without significantly hurting recommendation performance. The CEF framework formulates an optimization problem to learn the "minimal" change to a given feature that shifts the recommendation results toward a certain level of fairness. Based on the counterfactual recommendation result of each feature, we calculate an explainability score in terms of the fairness-utility trade-off, rank all feature-based explanations, and select the top ones as fairness explanations. Experimental results on several real-world datasets validate that our method effectively provides explanations for the model disparities, and that these explanations achieve a better fairness-utility trade-off than all baselines when used for recommendation.
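The core loop described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy `recommend`, `disparity`, and `utility` functions, the grid search standing in for the paper's counterfactual optimization, and the weights `lam` and `beta` are all assumptions for illustration. For each feature, we look for a small perturbation that lowers disparity (with a proximity penalty playing the role of the "minimal change" constraint), then score the feature by its fairness gain traded against its utility drop.

```python
import numpy as np

def cef_scores(x, recommend, disparity, utility,
               deltas=np.linspace(-1, 1, 21), lam=0.5, beta=0.5):
    """Illustrative CEF-style explainability scores (hypothetical sketch).

    x          : feature weight vector
    recommend  : maps feature weights to item scores
    disparity  : unfairness measure over item scores (lower is fairer)
    utility    : recommendation quality over item scores (higher is better)
    """
    base_rec = recommend(x)
    d0, u0 = disparity(base_rec), utility(base_rec)
    scores = np.zeros(len(x))
    for j in range(len(x)):
        best_obj, best_rec = np.inf, base_rec
        for delta in deltas:
            xp = x.copy()
            xp[j] += delta
            rec = recommend(xp)
            # counterfactual objective: disparity plus a proximity penalty
            obj = disparity(rec) + lam * delta ** 2
            if obj < best_obj:
                best_obj, best_rec = obj, rec
        fairness_gain = d0 - disparity(best_rec)
        utility_drop = u0 - utility(best_rec)
        # explainability score: fairness-utility trade-off
        scores[j] = fairness_gain - beta * utility_drop
    return scores

# Toy example (all assumptions): three features, item scores equal the
# feature weights, score variance as a popularity-skew proxy, and a
# single genuinely relevant item defining utility.
x = np.array([2.0, 0.1, 0.1])
relevance = np.array([1.0, 0.0, 0.0])
s = cef_scores(x,
               recommend=lambda v: v,
               disparity=lambda r: float(np.var(r)),
               utility=lambda r: float(r @ relevance))
# feature 0, which drives the score skew, ranks first
```

Ranking the features by `s` and selecting the top ones mirrors the paper's selection of feature-based fairness explanations; the actual CEF framework solves the inner search with gradient-based optimization over a learned recommendation model rather than a grid.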
