After Explainability: AI Metaphors and Materialisations Beyond Transparency
Event Details
Date: June 17 – 18, 2024
Place: ITeG, Pfannkuchstr. 1, 34121 Kassel & online
Registration: To participate, please register online.
Contact: Goda Klumbytė
The event location and virtual link will be sent to registered participants via email.
Workshop Description
Over the last few decades, the explainability and interpretability of AI and machine learning systems have become important societal and political concerns, exemplified by the negative case of “black box” algorithms. Often predicated on an ideal of transparency in algorithmic decision-making (ADM) systems and their mechanisms of inference, explainability and interpretability become features of ADMs to be designed in incrementally during development or addressed post hoc through various explanation methods and techniques. These features are also implicitly or explicitly connected to the ethics of algorithms: as the logic goes, more transparent, interpretable, and explainable systems enable the humans who use them to make better decisions and lead to more trustworthy AI applications. In this sense, the capacity for ethical interaction with AI rests on the understandability of such systems, which in turn requires them to be transparent and/or interpretable enough to be explainable.
Even though explainability and transparency can indeed give users greater agency, both are often defined in narrow, technical terms. Moreover, while explanations might illuminate how a system generates its inferences (for example, by demonstrating which variables contribute most to a decision, as in the sketch below), they might not address the broader social, political, and environmental effects of such systems. Furthermore, explainability and transparency design is often geared towards engineers themselves or the direct users of a system (rather than broader audiences or those negatively affected) and relies heavily on natural-language explanations and visualisations as the main modalities of communication, appealing to a universal, disembodied reason as the primary form of perception.
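To make concrete the kind of narrow, technical explanation referred to above, here is a minimal sketch of one standard post-hoc method, permutation importance, which ranks input variables by how much a model's accuracy drops when each one is shuffled. The scikit-learn dataset and model are illustrative placeholders, not tools used or endorsed by the workshop.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# the larger the drop, the more that variable contributes to decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# Report the five most influential variables.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

Note what such an explanation does and does not do: it attributes a decision to input variables, but says nothing about who is affected by the system or in what context it is deployed, which is precisely the gap the workshop addresses.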
While these more conventional research areas are important and valuable, this workshop calls for more exploratory approaches to ethical interactions with/in AI beyond the concepts of transparency and explainability, particularly by engaging the rich knowledges of the humanities and social sciences. We are especially interested in how the goals of explainability, transparency, and ethics could be rethought in light of other epistemic traditions, such as feminist, Black, Indigenous, and post/de-colonial thought, new materialisms, critical posthumanism, and other critical theoretical perspectives. How could concepts such as opacity/poetics of relation (Glissant, Ferreira da Silva), embodied/carnal knowledges (Bolt and Barrett), affective resonances (Paasonen), reparation (Davis et al.), response-ability (Haraway), friction (Hamraie and Fritsch), holistic futuring (Sloane et al.), and other terms rooted in critical perspectives generate different articulations of the framework of interpretability/explainability in AI? How might theoretical premises such as relational ethics (Birhane et al.), care ethics (de la Bellacasa), cloud ethics (Amoore), and other conceptual apparatuses offer alternative ways of engaging in interactions with ADM systems? Participants are welcome from a diverse spectrum of disciplines, including media studies, philosophy, history of technology, the social sciences and humanities more broadly, as well as arts, design, and machine learning.
Organisers
The workshop is organised by the DFG research network Gender, Medien und Affekt and the Participatory IT Design department at the University of Kassel. It is also part of the AI Forensics project, funded by the Volkswagen Foundation.
Programme
June 17, Monday
| Time | Agenda |
| --- | --- |
| 10:30 – 11:00 | Introduction – Goda Klumbytė, University of Kassel |
| 11:00 – 11:30 | Eugenia Stamboliev, University of Vienna, “Post-Critical AI Literacy” |
| 11:30 – 12:00 | Alex Taylor, Edinburgh University, “Flows and Scale” |
| 12:00 – 12:30 | Discussion |
| 12:30 – 14:00 | Lunch break |
| 14:00 – 14:30 | Katrin Köppert, Academy of Fine Arts Leipzig, “Inexplicability” |
| 14:30 – 15:00 | Simon Strick, ZeM Brandenburgisches Zentrum für Medienwissenschaften, “Overwhelming/Amplification” |
| 15:00 – 15:30 | Discussion |
| 15:30 – 16:00 | Coffee break |
| 16:00 – 16:30 | Arif Kornweitz, HfG Karlsruhe, “Accountability” |
| 16:30 – 17:00 | Fabian Offert, Paul Kim, Qiaoyu Cai, University of California, Santa Barbara, “XAI as Science” |
| 17:00 – 17:30 | Discussion |
June 18, Tuesday
| Time | Agenda |
| --- | --- |
| 10:30 – 11:00 | Recap and summary |
| 11:00 – 11:30 | Rachael Garrett, KTH Royal Institute of Technology, “Felt Ethics” |
| 11:30 – 12:00 | Goda Klumbytė, University of Kassel, and Dominik Schindler, Imperial College London, “Experiential Heuristics” |
| 12:00 – 12:30 | Discussion |
| 12:30 – 14:00 | Lunch break |
| 14:00 – 14:30 | Conrad Moriarty-Cole, Bath Spa University, “The Machinic Imaginary” |
| 14:30 – 15:00 | Nelly Yaa Pinkrah, TU Dresden, “Opacity” |
| 15:00 – 15:30 | Discussion |
| 15:30 – 16:00 | Closing |