What is bias in AI?

Machine learning bias is a phenomenon that occurs when a trained model produces results that are systematically prejudiced. This happens because models are created by individuals who hold conscious or unconscious preferences, which may go undiscovered until the algorithms are deployed, and potentially amplified, publicly. In her keynote at NIPS 2017, Kate Crawford argues that treating this as a purely technical problem means ignoring the underlying social problem, and has the potential to make things worse. This section lists artistic research projects in the field of bias and AI.
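A minimal sketch of the mechanism, using entirely hypothetical data: when historical decisions encode a prejudice, a model trained on those decisions reproduces it. Here the "model" is just a majority vote per group, but the same dynamic holds for more complex learners. The group names and hiring scenario are illustrative assumptions, not drawn from any real dataset.

```python
# Hypothetical sketch: a model trained on historically skewed hiring
# decisions reproduces that skew at prediction time. The bias lives in
# the labels, not in the training code itself.
from collections import Counter

# Invented training data: 90% of "group_a" applicants were hired,
# only 20% of "group_b".
training_data = (
    [("group_a", "hire")] * 9 + [("group_a", "reject")] * 1
    + [("group_b", "hire")] * 2 + [("group_b", "reject")] * 8
)

def train(data):
    """'Train' by memorizing the majority label per group."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train(training_data)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'}
```

Nothing in the training procedure is "prejudiced"; the systematic skew in the data alone is enough to make the model's predictions prejudiced.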




Biometric Mirror

2018 | University of Melbourne

Biometric Mirror is a provocative installation that detects and displays people's personality traits and physical attractiveness based solely on a photo of their face. It exposes the implications of artificial intelligence and facial analysis in public space. The aim is to investigate the attitudes that emerge as people are presented with different perspectives on their own, anonymized biometric data derived from a single photograph of their face. It sheds light on the specific data that people oppose or approve of, the sentiments it evokes, and the underlying reasoning. Biometric Mirror also presents an opportunity to reflect on whether this plausible future of artificial intelligence is a future we want to see take shape.

Link: Biometric Mirror


The Normalizing Machine

2018 | Mushon Zer-Aviv

The Normalizing Machine is an interactive installation presented as experimental research in machine learning. It aims to identify and analyze the image of social normalcy. Each participant is asked to point out who looks most normal in a line-up of previously recorded participants. The machine analyzes the participants' decisions and adds them to its aggregated algorithmic image of normalcy.
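The aggregation step described above might be sketched as follows. This is an assumption about how such a tally could work, not the installation's actual implementation; the face identifiers and line-ups are invented for illustration.

```python
# Hypothetical sketch: each participant picks the "most normal" face from
# a line-up; the machine keeps a running tally and ranks recorded faces
# by how often they were chosen.
from collections import Counter

selections = Counter()  # face_id -> times chosen as "most normal"

def record_choice(lineup, chosen):
    """Add one participant's decision to the aggregate."""
    assert chosen in lineup
    selections[chosen] += 1

# Three invented participant decisions:
record_choice(["face_01", "face_02", "face_03", "face_04"], "face_02")
record_choice(["face_02", "face_05", "face_06", "face_07"], "face_02")
record_choice(["face_01", "face_03", "face_05", "face_08"], "face_05")

# The machine's "image of normalcy" is whatever rises to the top.
ranking = selections.most_common()
print(ranking)  # [('face_02', 2), ('face_05', 1)]
```

Note that faces never chosen simply vanish from the ranking: the aggregate encodes only what participants affirmed, which is exactly the normative feedback loop the installation sets out to expose.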

Link: The Normalizing Machine

  • biased_machines.txt
  • Last modified: 2019/08/04 13:52
  • by waag