Explainable AI and Trust Issues

by neilenatarajan

AI researchers exploring ways to increase trust in AI recognize that one barrier to trust is often a lack of explanation. This recognition has led to the development of the field of Explainable Artificial Intelligence (XAI). In their paper Formalizing Trust in Artificial Intelligence, Jacovi et al. classify an AI system as trustworthy to a contract if it is capable of maintaining that contract: a recommender algorithm might be trusted to make good recommendations, and a classification algorithm might be trusted to classify things appropriately. When a classification algorithm makes grossly inappropriate classifications, we feel betrayed, and the algorithm loses our trust. (Of course, a system may be untrustworthy even as we continue to place trust in it.) This essay explores current legal implementations of XAI as they relate to explanation, trust, and human data subjects (e.g. users of Google or Facebook), while forecasting outcomes relevant to XAI.

Explaining explanation

In his paper Explanation in Artificial Intelligence, Tim Miller advocates for increasing trust in AI using two complementary approaches: generating decisions while taking into account how easily a human might understand them (explainability or interpretability), and explicitly explaining those decisions (explanation). Miller also provides a prescriptive account of explanation borrowed from the social sciences:

1. Explanations should be contrastive. They are sought in response to particular counterfactual cases ("why this outcome rather than that one?"); see the sketch after this list.
2. Explanations should be selective. That is, more information is not, in general, better. Furthermore, this selection is influenced by a number of cognitive biases.
3. Explanations should not refer to probability, likelihood, or other statistics. Instead, they should refer to causes.
4. Explanations are part of a social interaction. As such, they are presented relative to the explainer's preconceptions about the explainee's beliefs, and they should follow the rules of social interaction.
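
To ground the contrastive criterion, here is a minimal, self-contained sketch (not drawn from Miller's paper) of one way a contrastive explanation can be produced: search for the smallest change to a single feature that flips a model's prediction, answering "why this outcome rather than that one?". The data, model, and helper function are illustrative assumptions, not anything referenced in this essay.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy setup: a synthetic binary classification task and a simple model.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def contrastive_explanation(x, feature, model, step=0.1, max_steps=200):
    """Search along one feature for the smallest change that flips the
    model's prediction, i.e. an answer to 'why P rather than not-P?'."""
    original = model.predict(x.reshape(1, -1))[0]
    for direction in (+1.0, -1.0):
        candidate = x.copy()
        for _ in range(max_steps):
            candidate[feature] += direction * step
            if model.predict(candidate.reshape(1, -1))[0] != original:
                return candidate
    return None  # no flip found within the search budget

x = X[0]
counterfactual = contrastive_explanation(x, feature=2, model=model)
if counterfactual is not None:
    print(f"Prediction flips if feature 2 changes by {counterfactual[2] - x[2]:+.2f}")
```

The resulting statement ("the decision would have differed had this feature been slightly higher") is causal and selective in exactly the sense Miller describes, rather than a recitation of probabilities.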

Miller’s work focuses on what makes an explanation satisfying: we are more easily persuaded by causal stories than we are by statistics. That does not mean we should be. Consider that many AI researchers are focused on how people can be made to trust AI; fewer are focused on whether people should be made to trust AI. For example, in his book Interpretable Machine Learning, Christoph Molnar surveys work in XAI, paying particular attention to model-agnostic, post-hoc explanation methods, i.e. algorithms that do not examine the model's internals and cannot influence its output. These algorithms, though they may affect trust in a model, do not change how the model works, and so cannot affect its trustworthiness. This would not be so problematic were their goal to enable users to evaluate the trustworthiness of a model, but as Molnar notes, “there may be a misalignment between the goal of the explaining machine (create trust) and the goal of the recipient (understand the prediction or behavior).” Molnar goes on to make his predictions for the future of explainability. Among them: interpretability tools will catalyze the adoption of AI, and the focus within the XAI community will be on model-agnostic tools.
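To make "model-agnostic, post-hoc" concrete, here is a minimal sketch using permutation feature importance, one such technique discussed in this literature. The dataset and model below are arbitrary choices for illustration only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Arbitrary dataset and model, chosen purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance is post hoc and model-agnostic: it only queries the
# fitted model for predictions and never inspects or alters its internals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how much shuffling them degrades test performance.
ranked = sorted(zip(X.columns, result.importances_mean, result.importances_std),
                key=lambda t: t[1], reverse=True)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Note that the output describes the model's behavior without changing the model at all, which is exactly why such a method can shift trust without shifting trustworthiness.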

Regulating AI

Legal debate around the topic is similarly explanation-focused: the ‘right to explanation’ regarding automated decision-making is a hot topic in legal circles. People (data subjects) deserve to understand why important decisions, such as the rejection of a loan application or the denial of bail, were made about them. This is true when humans make these decisions, and it is doubly true when AI systems make them.

Until relatively recently, there was no regulation of the use of AI to make decisions impacting data subjects. In 2016, the European Union adopted the General Data Protection Regulation (GDPR), which aimed to address, among other issues, individuals’ rights to their data. The regulation has served as a template for many other countries, including Turkey, South Korea, Kenya, and Argentina.

The GDPR includes provisions such as the right of access, which allows data subjects to access any of their personal data stored by a data controller, and the right to erasure, which allows data subjects to request that a data controller erase all data pertaining to them. Importantly for us, the GDPR also speaks to the rights data subjects have with respect to AI: data subjects should have the right to obtain an explanation of the decision reached in the case of automated decision-making. However, this right to explanation is mentioned only in a Recital (Recital 71), and recitals are not legally binding under EU law. Thus, at present, the EU has no legally binding requirement to provide explanations for automated decision-making.

Conspicuously missing from the list of countries with EU-inspired data protection laws is the United States. Though the US has state laws such as the California Consumer Privacy Act (CCPA), which adopts many GDPR-esque regulations in California, and federal laws like the Equal Credit Opportunity Act (ECOA), which regulates the use of automated decision-making in the case of credit action, the US federal government has not implemented data governance laws similar to the GDPR.

Though GDPR data governance rights do not apply in general within the US, the ECOA does include a form of the right to explanation. In its exact words, creditors are required to provide applicants with a ‘statement of specific reasons’ (ECOA 1002.9) when taking adverse credit action. The official interpretation of the ECOA details precisely what qualifies as specific reasons. Crucially, “the decision was made by an automated system” is not, on its own, sufficient.

Perhaps the most comprehensive implementation of the right to explanation is found in France. The French Loi pour une République numérique includes a clause on decisions based on algorithmic processing (‘traitement algorithmique’). In such cases, the law states, decision-makers must communicate the principal characteristics of the processing to the citizen: the data processed, the processing parameters and their weights, the operations carried out by the processing, and the role the algorithmic processing played in the decision.

While this provision applies only to administrative decisions, it goes beyond the GDPR in that it is legally binding, applies to all decisions involving algorithmic processing (even those not made solely by algorithms), and details precisely what qualifies as an explanation. Indeed, in many ways the French iteration is arguably the most mature conception of the right to explanation.

There is much need for ideation, innovation, and legislation in the field of AI trust and trustworthiness. At present, much of the discourse among AI researchers revolves around how to increase user trust in AI. This motivates much of the field of XAI, which, in turn, has made its way into legal discourse. However, what is needed is not study into how we can increase trust in AI, but into whether we should. That is, we should be researching methods of increasing AI trustworthiness, as well as methods that empower users to calibrate their trust to that trustworthiness. The aim of research into AI trust should not be to make users trust AI, but rather to make AI worth trusting.

Categories: Artificial Intelligence, Computing and Math