Papers to Look Out For at FAccT 2021

The fourth annual ACM FAccT conference starts today, with more than 80 published papers on the methods, practice, and philosophy of fair, accountable, and transparent artificial intelligence.


We have compiled a list of the papers we are most excited about:


A. Coston, N. Guha, L. Lu, D. Ouyang, A. Chouldechova, D. Ho

The paper shows that it is possible to create an auditing test for detecting sampling bias in smartphone-collected mobility data, using voter turnout data.


Why is the paper important

Anonymized, massively collected datasets are increasingly used to design and assess public policy responses. For example, COVID-19 policy is being shaped by analyses of smartphone-based mobility data. However, these datasets often underrepresent certain demographic groups, which can lead to policies that disproportionately affect vulnerable populations. This paper proposes a method for detecting such sampling biases using large-scale administrative data, such as individual voter turnout records.
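
To make the idea concrete, here is a minimal sketch of this kind of coverage audit; the group names and counts below are hypothetical illustrations, not the paper's data:

```python
def coverage_ratios(sample_counts, benchmark_counts):
    """Ratio of each group's share in the sample to its share in the
    administrative benchmark; values below 1.0 indicate underrepresentation."""
    n_sample = sum(sample_counts.values())
    n_bench = sum(benchmark_counts.values())
    return {
        g: (sample_counts.get(g, 0) / n_sample) / (benchmark_counts[g] / n_bench)
        for g in benchmark_counts
    }

# Hypothetical counts: devices observed in mobility data per group vs.
# individuals in voter-turnout records for the same region.
mobility = {"group_a": 5200, "group_b": 1100, "group_c": 700}
turnout = {"group_a": 60000, "group_b": 25000, "group_c": 15000}

for group, ratio in coverage_ratios(mobility, turnout).items():
    print(f"{group}: coverage ratio = {ratio:.2f}")
```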


Main Findings

  • Administrative data (e.g., voter turnout) can be used to audit for sampling bias.

  • Significant demographic disparities can be detected in a popular COVID-19 response dataset used by the CDC.

  • Demographic disparities may lead to detrimental policy decisions for vulnerable groups.


M. Ghadiri, S. Samadi, S. Vempala

The paper proposes Fair Lloyd, a k-means clustering algorithm that chooses cluster centers via an objective that minimizes the maximum average cost across protected groups.


Why is the paper important

The authors prove that the standard Lloyd's k-means algorithm can result in significant gaps in the average clustering costs of different demographic groups, such as gender, education, and race. This bias matters in resource-allocation applications, where keeping the cost low for every group is paramount. Unlike previous work that focuses on proportional representation within each cluster, this paper proposes a new k-means objective that minimizes the maximum average cost across groups.
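
To illustrate the objective, below is a minimal sketch of a Lloyd-style loop whose center-update step runs subgradient descent on the max-over-groups average cost. The paper derives a more principled update step, so treat this only as an approximation of the idea:

```python
import numpy as np

def fair_lloyd_sketch(X, groups, k, iters=20, inner=50, lr=0.1):
    """Approximate the socially fair k-means objective by alternating a
    standard assignment step with a center update that runs subgradient
    descent on the worst group's average clustering cost."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    group_ids = np.unique(groups)
    for _ in range(iters):
        # Assignment step: identical to standard Lloyd's.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        # Update step: push centers to lower the worst group's average cost.
        for _ in range(inner):
            costs = ((X - centers[assign]) ** 2).sum(-1)
            worst = max(group_ids, key=lambda g: costs[groups == g].mean())
            mask = groups == worst
            grad = np.zeros_like(centers)
            for j in range(k):
                pts = X[mask & (assign == j)]
                if len(pts):
                    grad[j] = 2 * len(pts) / mask.sum() * (centers[j] - pts.mean(axis=0))
            centers = centers - lr * grad
    return centers, assign

# Hypothetical demo: a majority and a minority group in 2-D.
X = np.random.default_rng(1).normal(size=(200, 2))
groups = np.array([0] * 150 + [1] * 50)
centers, assign = fair_lloyd_sketch(X, groups, k=3)
```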


Main Findings

  • Fair Lloyd achieves equal average clustering cost across different groups.

  • The total average clustering cost increases by at most 4%.

  • Fair Lloyd incurs only a small overhead in the running time of the algorithm.


A. Karimi, B. Schölkopf, I. Valera


This paper proposes a causal perspective on recourse, formalizing it as the search for the minimal structural interventions that, when performed, change the outcome of the model.


Why is the paper important

AI systems are increasingly used in ways that affect people's opportunities and access to resources and services, such as automated decisions on who should receive a loan, be interviewed for a job opening, or be granted a mortgage. In such automated decisions, people should have the option of recourse: the ability to change the outcome of the system by changing the actionable variables that influenced the decision. This paper argues for the need to consider causal relations between variables in methods for algorithmic recourse.
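
As a toy illustration of recourse as minimal intervention, the sketch below assumes a known two-variable structural causal model and a hypothetical decision rule; neither is from the paper. Intervening on the actionable variable also propagates to its causal descendant:

```python
import numpy as np

# Assumed structural causal model: income -> savings. Intervening on
# income therefore also shifts savings through the structural equation.
def scm(income, income_delta=0.0):
    income = income + income_delta
    savings = 0.5 * income + 1.0  # hypothetical structural equation
    return income, savings

def loan_approved(income, savings):
    # Hypothetical fixed decision rule standing in for a trained model.
    return income + 2.0 * savings > 10.0

def minimal_intervention(income0, deltas=np.linspace(0.0, 5.0, 51)):
    """Smallest intervention on the actionable variable (income) that
    flips the decision, accounting for its downstream causal effect."""
    for d in deltas:  # candidates sorted by cost (here, magnitude)
        if loan_approved(*scm(income0, income_delta=d)):
            return d
    return None

print(minimal_intervention(income0=3.0))  # ~1.1 in this toy setup
```

By contrast, a counterfactual explanation that treats income and savings as independently mutable would ask the individual to change both variables, overstating the true cost of recourse.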


Main Findings

  • The causal perspective on recourse is interesting and well argued. However, as the authors recognise, it assumes and relies upon a true causal model. Acquiring such a model remains an open challenge in research.


S. Jesus, C. Belém, V. Balayan, J. Bento, P. Saleiro, P. Bizarro, J. Gama

This paper proposes an evaluation methodology to assess the impact of XAI methods in practice.


Why is the paper important

Evaluating explanation methods is a challenge in itself. Many proposed explanation approaches have so far been evaluated on performance metrics or proxy tasks, not on real-life decision-making tasks. The field is increasingly recognising that a robust evaluation of these methods must account for the end user and the end task, and there is growing evidence that current explanation methods fall short in terms of their usefulness in real-life applications. This paper presents results from evaluating several post-hoc explanation methods by how well they assist fraud analysts performing a fraud detection task.


Main Findings

  • The fraud analysts' decision-making is assessed at three levels of information: (1) data only, (2) data and model prediction, and (3) data, model prediction, and explanation. Average accuracy is highest when the analysts are provided with the data only, although average decision time also increases.

  • Providing the fraud analysts with a model prediction plus an explanation does increase decision accuracy compared to providing the model prediction alone.

  • The authors compare three post-hoc explanation methods (LIME, SHAP, and TreeInterpreter) and find that LIME is the least preferred by end users.


N. Garg, H. Li, F. Monachou

This paper evaluates different fairness-oriented admissions policies adopted by schools, using a market model that optimizes the dual objective of admitting (1) the most qualified and (2) a diverse cohort of students.

Why is the paper important

The University of California recently announced plans to suspend the requirement that applicants submit SAT scores, in an attempt to increase opportunities for underprivileged students. However, dropping test scores alone can exacerbate disparities, since tests can be used effectively to identify high-skilled applicants within protected groups. The authors use a dual-objective model to evaluate this policy proposal, in combination with correcting for feature differences and applying affirmative action.
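
A toy simulation can illustrate the tension; the signal model and the size of the access barrier below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
disadvantaged = rng.random(n) < 0.3          # hypothetical 30% group share
skill = rng.normal(0.0, 1.0, n)

# Two noisy signals of skill; assume access barriers depress the test
# signal for the disadvantaged group. All parameters are illustrative.
test = skill + rng.normal(0.0, 0.5, n) - 0.5 * disadvantaged
gpa = skill + rng.normal(0.0, 1.0, n)

def admit_top(signal, frac=0.1):
    # Admit the top `frac` of applicants ranked by the given signal.
    return signal >= np.quantile(signal, 1.0 - frac)

for name, signal in [("test + gpa", (test + gpa) / 2.0), ("gpa only", gpa)]:
    sel = admit_top(signal)
    print(f"{name}: mean admitted skill = {skill[sel].mean():.2f}, "
          f"disadvantaged share = {disadvantaged[sel].mean():.2%}")
```

Running this shows how dropping the more informative test signal lowers the average skill of the admitted cohort, while the effect on diversity depends on how large the access barrier is.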


Main Findings

  • Affirmative action combined with using group membership as a feature is effective with respect to the dual objective, but can still fail to identify high-skilled disadvantaged applicants.

  • Dropping standardized testing exacerbates disparities in many cases, but is effective when there are substantial barriers to taking the tests.

  • When multiple schools compete with each other, a policy change at a single school can harm its competitiveness relative to the rest.


M. Young, M. Katell, J. Lee, S. Narayan, M. Epstein, B. Herman, D. Dailey, C. Bintz, V. Guetler, D. Raz, A. Tam, P. Jobe, F. Putz, B. Barghouti, B. Robick, P. Krafft

This paper presents the Algorithmic Equity Toolkit (AEKit), a set of action-oriented tools aiming to increase public participation in technology advocacy for AI policies.

Why is the paper important

Many reflective tools have recently been proposed to address the harms caused by public authorities deploying biased black-box algorithms. These tools, however, are implemented and evaluated within the confines of technology firms and their users, while it is usually the communities negatively affected by the algorithms that push for accountability and transparency of such systems. The toolkit proposed in this paper aims to equip these groups with the interrogatory resources they need to strengthen their claims.


Main Contributions

  • A flowchart for identifying technologies that rely on artificial intelligence.

  • A questionnaire for interrogating the harm and bias dimensions of a technology.

  • A worksheet for disentangling the intended purposes of a system.

  • A set of definitions for understanding technical terms.



B. Knowles, J. Richards

This paper proposes a theoretical model that aims to promote public trust in AI, emphasizing the need for a comprehensive, visibly functioning regulatory ecosystem based on externally auditable documentation.


Why is the paper important

Recent research has focused on the necessity of consumer trust for AI adoption. However, in most cases the general public are not consumers of AI but are subject to it, unwillingly and without any means of assessing its trustworthiness. The authors argue that public trust in AI will therefore be achieved only via a robust regulatory system that safeguards the public against harmful consequences, and they propose a framework for achieving this through externally auditable documentation.


Main Contributions

  • A framework for promoting public trust in AI.

  • A call for a major effort towards developing standardized documentation that facilitates external audits.


V. Suriyakumar, N. Papernot, A. Goldenberg, M. Ghassemi

This paper investigates the use of differential privacy in health care, uncovering lesser-known limitations in the trade-offs between privacy, utility, robustness, and fairness.

Why is the paper important

Differentially private algorithms have been effective in mitigating re-identification through linkage attacks, in part by suppressing information from the tails of a data distribution. However, health-care data, due to their complex nature, are often highly individualized and thus heavy-tailed. The utility lost by employing these algorithms is therefore likely to hinder delivered care, especially for minority groups. This work investigates the trade-offs between privacy, accuracy, robustness, and fairness when differentially private algorithms are used in health care.
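
The tail effect can be illustrated with a minimal sketch of the standard clip-then-add-Laplace-noise release of a mean; the distributions and parameters here are illustrative, not from the paper:

```python
import numpy as np

def dp_mean(x, clip, epsilon, rng):
    """epsilon-DP mean release: clip each record to [0, clip], then add
    Laplace noise scaled to the clipped mean's sensitivity (clip / n)."""
    clipped = np.clip(x, 0.0, clip)
    return clipped.mean() + rng.laplace(scale=clip / (len(x) * epsilon))

rng = np.random.default_rng(0)
n, clip, eps, trials = 10_000, 10.0, 1.0, 200

light = rng.exponential(1.0, size=(trials, n))    # light-tailed data
heavy = rng.pareto(1.5, size=(trials, n)) + 1.0   # heavy-tailed data

for name, data in [("light-tailed", light), ("heavy-tailed", heavy)]:
    errors = [abs(dp_mean(row, clip, eps, rng) - row.mean()) for row in data]
    print(f"{name}: mean absolute error = {np.mean(errors):.3f}")
```

On the heavy-tailed data, clipping discards much more of the distribution's mass, so the released mean is substantially more biased at the same privacy level.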


Main Contributions

  • The privacy-utility trade-off grows with the length of the distribution's tails.

  • Differential privacy does not improve robustness to dataset shift.

  • Differential privacy gives disproportionate influence to majority groups, an effect that is not visible under standard measures of group fairness.


You can find all the accepted papers here.

See you at the conference!

