From the panoptic eyes in the sky of satellites and drones to the intimacy of smart homes and phones, the pervasive nature of machine vision makes algorithmic subjugation inescapable. How does machine vision recognise a domestic item such as a desk lamp? This project analyses the algorithms of commercial object recognition software (such as those used by Tesla and Facebook) by reverse-engineering their process. Object recognition software can act as an agent in potential life-and-death situations, and this project questions its reliability.
When a machine sees an object, the recognition system first reduces noise to isolate the essence of the object. Recognition takes place when that essence resembles an archetype in its training database. But who decides how the training data is selected, categorised and confirmed? How easily does a machine fail to distinguish its surroundings due to minor distortions and external factors? And how much agency is assigned to such systems in critical decision-making moments?
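The pipeline described above — denoise, reduce to an essence, match against archetypes — can be sketched in miniature. This is a hypothetical toy, not the actual pipeline of any commercial system; the images, feature choice (row-brightness averages) and "trained database" of archetypes are all invented for illustration.

```python
def denoise(image):
    """Crude noise reduction: a 3x3 box blur over a 2D grid of brightness values."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def essence(image):
    """Reduce the image to a tiny feature vector: mean brightness per row."""
    return [sum(row) / len(row) for row in image]

def recognise(image, archetypes):
    """Label the image by the archetype whose essence is closest (Euclidean)."""
    feat = essence(denoise(image))
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(archetypes, key=lambda label: dist(feat, archetypes[label]))

# Hypothetical "trained database": bright top over a dark base reads as a desk lamp.
archetypes = {
    "desk lamp": [0.9, 0.5, 0.2],
    "mug":       [0.3, 0.8, 0.8],
}

image = [
    [1.0, 0.9, 1.0],   # bright lampshade
    [0.5, 0.6, 0.4],
    [0.2, 0.1, 0.2],   # dark base
]
print(recognise(image, archetypes))  # → desk lamp
```

Even in this toy, the fragility the project interrogates is visible: the label depends entirely on which archetypes someone put in the dictionary, and a small perturbation of the brightness values can tip the nearest-archetype decision.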