Müller VC (2025)
Publication Type: Book chapter / Article in edited volumes
Publication Year: 2025
Publisher: John Wiley & Sons
Edited Volume: A Companion to Applied Philosophy of AI
Series: Blackwell Companions to Philosophy
City/Town: Hoboken, NJ
Page Range: 71-81
ISBN: 9781394238620
DOI: 10.1002/9781394238651.ch6
Abstract: It is known that big data analytics and AI pose a threat to privacy, and that part of this threat stems from a "black box problem" in AI. I explain how this opacity becomes a problem for the justification of judgments and actions. I then distinguish three kinds of opacity: (1) the data subjects do not know what the system does ("shallow opacity"); (2) the analysts do not know what the system does ("standard black box opacity"); (3) the analysts cannot possibly know what the system might do ("deep opacity"). If agents, data subjects as well as analytics experts, operate under opacity, they cannot provide the justifications for judgments that are necessary to protect privacy; for example, they cannot give "informed consent" or guarantee "anonymity." It follows that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. I therefore conclude that big data analytics makes privacy problems worse and the standard remedies less effective. On a positive note, I close with a brief outlook on technical ways to handle this situation.
APA:
Müller, V. C. (2025). Deep opacity in AI: A threat to XAI and standard privacy protection mechanisms. In M. Hähnel & R. Müller (Eds.), A Companion to Applied Philosophy of AI (pp. 71-81). John Wiley & Sons. https://doi.org/10.1002/9781394238651.ch6
MLA:
Müller, Vincent C. "Deep Opacity in AI: A Threat to XAI and Standard Privacy Protection Mechanisms." A Companion to Applied Philosophy of AI. Ed. Martin Hähnel and Regina Müller. Hoboken, NJ: John Wiley & Sons, 2025. 71-81.
BibTeX:
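A BibTeX entry reconstructed from the metadata above; the citation key is an assumption, all fields are taken from the record on this page.

@incollection{Mueller2025DeepOpacity,
  author    = {M{\"u}ller, Vincent C.},
  title     = {Deep Opacity in {AI}: A Threat to {XAI} and Standard Privacy Protection Mechanisms},
  booktitle = {A Companion to Applied Philosophy of {AI}},
  editor    = {H{\"a}hnel, Martin and M{\"u}ller, Regina},
  series    = {Blackwell Companions to Philosophy},
  publisher = {John Wiley \& Sons},
  address   = {Hoboken, NJ},
  year      = {2025},
  pages     = {71--81},
  isbn      = {9781394238620},
  doi       = {10.1002/9781394238651.ch6}
}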