Uncanny Mirror (2018)
Created by a pioneer of AI (artificial intelligence) art, the Uncanny Mirror generates shifting, real-time AI portraits of each viewer. It continuously mirrors, analyses, builds and morphs each face it sees, drawing on all the faces it has ever seen. For the New Media Gallery installation, the cache of faces has been captured at exhibitions in Seoul, Basel, Munich, Vevey (Switzerland), St Petersburg and at New Media Gallery. Questions about the nature of facial recognition, audience diversity and racial discrimination thus become an integral and important aspect of this work. The work contains a built-in camera that captures viewers multiple times, and facial-recognition technology is employed. Each new portrait draws on the accumulated knowledge of the machine; each portrait produced contains something of those who came before. Once a face is recognized, selected biometric face markers are extracted, together with a rough estimate of pose and hand movements; the more the system sees, the more it learns. It takes this input, makes a sketch, and then passes the sketch to a second model, which creates the image seen in the ‘mirror’. The more often it sees a particular face, the faster it recognizes and reflects that face. When nobody is in front of the mirror, it learns and it dreams: what the viewer then observes is a flow of colour and digital movement containing shifting, abstracted face parts, hair, apparel and surroundings.
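The two-stage pipeline described above (face markers in, sketch out, sketch in, portrait out) can be pictured with a minimal sketch. Everything below is a hypothetical stand-in, not the artist's code: `extract_markers`, `make_sketch` and `generate_portrait` are invented names, and the "generator" is a placeholder where the installation would use a trained image-to-image model.

```python
import numpy as np

# Stand-in for the facial-recognition stage (assumption: here we fabricate
# 68 (x, y) landmarks instead of running a real face-marker model).
def extract_markers(frame):
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.uniform(0.0, 1.0, size=(68, 2))

# Stage 1: rasterize the markers into a rough sketch image.
def make_sketch(markers, size=64):
    sketch = np.zeros((size, size))
    for x, y in markers:
        sketch[int(y * (size - 1)), int(x * (size - 1))] = 1.0
    return sketch

# Stage 2: placeholder for the trained generative model that turns the
# sketch into the portrait shown in the 'mirror'.
def generate_portrait(sketch):
    return np.clip(sketch * 0.8 + 0.1, 0.0, 1.0)

frame = np.ones((480, 640))  # stand-in for one camera capture
portrait = generate_portrait(make_sketch(extract_markers(frame)))
print(portrait.shape)
```

The point of the sketch is only the data flow: each capture passes through marker extraction and an intermediate sketch before a second model produces the final image.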
Mario Klingemann is a German artist known for his work with neural networks, code and algorithms, and recognized as a pioneer in the field of Artificial Intelligence (AI) art and in the use of machine learning in the arts. He uses algorithms and artificial intelligence to create and investigate systems, and is particularly interested in the human perception of art and creativity, researching ways in which machines can augment or emulate these processes. His wide-ranging artistic research spans generative art, cybernetic aesthetics, information theory, feedback loops, pattern recognition, emergent behaviours, neural networks, cultural heritage data and storytelling. He is artist in residence at the Google Arts & Culture Lab and winner of the Lumen Prize Gold 2018 and the British Library Labs Creative Award. His work has been featured in art publications as well as academic research, and has been shown worldwide in international museums and at art festivals including Ars Electronica, ZKM, the Photographers' Gallery, Colección Solo Madrid, Nature Morte Gallery New Delhi, Residenzschloß Dresden, Gray Area Foundation, Mediacity Biennale Seoul, the British Library, MoMA, and the Centre Pompidou.
“My preferred tools are neural networks, code and algorithms. My interests are manifold and in constant evolution, involving artificial intelligence, deep learning, generative and evolutionary art, glitch art, data classification and visualization, and robotic installations. If there is one common denominator, it’s my desire to understand, question and subvert the inner workings of systems of any kind. I also have a deep interest in human perception and aesthetic theory. Since I taught myself programming in the early 1980s, I have been trying to create algorithms that are able to surprise and to show almost autonomous creative behavior. The recent advancements in artificial intelligence, deep learning and data analysis make me confident that in the near future “machine artists” will be able to create more interesting work than humans. GANs, short for generative adversarial networks, are a particular architecture of deep neural network which has turned out to be very effective at learning to generate new images from a set of training examples. They are not the only way to create images with artificial intelligence, but many artists use them because they are relatively easy to train and at the same time very versatile tools. The principle of how they work has been explained many times by now, so I will not go into detail. The basic idea is that you have two neural networks: one, the generator, tries to make images that look similar to the training examples it has been given; the other, the discriminator, tries to learn to distinguish real images, the ones from the training set, from fake images, the ones that the generator makes. Initially both networks know nothing about the task they have to do and produce very unconvincing results, but every time one of the models makes a mistake, such as when the generator gets caught with a fake or the discriminator lets a fake image pass, it learns from that and slightly improves its methods.
Over time this back and forth makes both models very good at their tasks, so that eventually we, as the human discriminators, might no longer be able to distinguish a real image from a fake one. My long-term goal is to find out how much autonomy I can give a machine and how far I can remove myself from the process. Apart from that, I try not to get bored and to satisfy my curiosity about how the world might work and where it is heading.”
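The adversarial back-and-forth Klingemann describes can be reduced to a toy form. The example below is a hypothetical illustration, not the artist's code: instead of images, the "real" data are numbers drawn from a Gaussian, the generator is a two-parameter map G(z) = a·z + b, and the discriminator is a single logistic unit, so the gradients can be written by hand in plain NumPy. All names and hyperparameters are assumptions.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_toy_gan(real_mean=4.0, real_std=1.0, steps=5000, batch=128,
                  lr=0.02, seed=0):
    """Alternate discriminator and generator updates on 1-D data."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0   # generator: G(z) = a*z + b, starts far from the data
    w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
    for _ in range(steps):
        z = rng.standard_normal(batch)
        x_real = rng.normal(real_mean, real_std, batch)
        x_fake = a * z + b

        # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
        # Per-sample gradient w.r.t. the logit is (D - 1) for real
        # samples and D for fake ones.
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        gw = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
        gc = np.mean(d_real - 1.0) + np.mean(d_fake)
        w -= lr * gw
        c -= lr * gc

        # Generator step: minimize -log D(fake) (non-saturating loss),
        # i.e. get caught less often; logit gradient is (D(fake) - 1).
        d_fake = sigmoid(w * x_fake + c)
        ga = np.mean((d_fake - 1.0) * w * z)
        gb = np.mean((d_fake - 1.0) * w)
        a -= lr * ga
        b -= lr * gb
    return a, b

a, b = train_toy_gan()
fakes = a * np.random.default_rng(1).standard_normal(2000) + b
print(f"generated samples: mean {fakes.mean():.2f}, std {fakes.std():.2f}")
```

Each time the generator "gets caught" (D(fake) far below 1), the gradient pushes its output toward the region the discriminator accepts; each fake that slips past pushes the discriminator to tighten. Run long enough, the generated mean typically drifts from 0 toward the real mean of 4, which is the one-dimensional analogue of the portrait generator learning to produce faces the discriminator can no longer reject.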