no code implementations • 27 Feb 2024 • Muhammad Faaiz Taufiq, Jean-Francois Ton, Yang Liu
In machine learning fairness, training models that minimize disparity across sensitive groups often diminishes accuracy, a phenomenon known as the fairness-accuracy trade-off.
1 code implementation • NeurIPS 2023 • Muhammad Faaiz Taufiq, Arnaud Doucet, Rob Cornish, Jean-Francois Ton
Off-Policy Evaluation (OPE) in contextual bandits is crucial for assessing new policies using existing data without costly experimentation.
1 code implementation • 10 Aug 2023 • Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li
A major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations.
1 code implementation • 17 Jan 2023 • Rob Cornish, Muhammad Faaiz Taufiq, Arnaud Doucet, Chris Holmes
We consider how to assess the accuracy of a digital twin using real-world data.
1 code implementation • 10 Jan 2023 • Muhammad Faaiz Taufiq, Patrick Blöbaum, Lenon Minorics
Shapley values are model-agnostic methods for explaining model predictions.
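The Shapley value attributes a model's prediction to each input feature by averaging that feature's marginal contribution over all orderings. A minimal sketch of the exact (enumerate-all-coalitions) formula, using an illustrative additive toy game rather than the paper's method:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions of the other players."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of player i to coalition S
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy additive game: v(S) = sum of member weights, for which each player's
# Shapley value is exactly its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[p] for p in S)
phi = shapley_values(list(weights), v)
```

Exact enumeration is exponential in the number of features, which is why practical explainers rely on sampling or model-specific approximations.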
no code implementations • 9 Jun 2022 • Muhammad Faaiz Taufiq, Jean-Francois Ton, Rob Cornish, Yee Whye Teh, Arnaud Doucet
Most off-policy evaluation methods for contextual bandits focus on the expected outcome of a policy, estimated by approaches that at best provide only asymptotic guarantees.
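The expected-outcome estimation referred to above is typically done with inverse propensity scoring (IPS), which reweights logged rewards by the ratio of target to logging propensities. A minimal sketch under assumed toy data (uniform logging policy, a deterministic target policy, synthetic rewards); names and setup are illustrative, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions = 1000, 3

# Logged data: a uniform logging policy with known propensities mu(a|x) = 1/3.
logging_probs = np.full(n_actions, 1.0 / n_actions)
actions = rng.integers(0, n_actions, size=n)
rewards = (actions == 0).astype(float)  # toy reward: action 0 is always best

# Target policy to evaluate: deterministically pick action 0.
target_probs = np.zeros((n, n_actions))
target_probs[:, 0] = 1.0

# IPS estimate of the target policy's value: mean of pi(a|x)/mu(a|x) * r.
w = target_probs[np.arange(n), actions] / logging_probs[actions]
v_hat = float(np.mean(w * rewards))
```

Here `v_hat` concentrates around 1.0, the true value of the target policy; the estimate is unbiased but only asymptotically concentrated, which is the limitation the snippet above alludes to.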