
Mike Tyka

Mike Tyka is an artist, researcher and Google engineer whose work on DeepDream helped popularize artificial neural networks as an artistic medium.

Mike Tyka - How We End Up At The End Of Life (2).jpg


Since 2015, Mike Tyka has worked with artificial neural networks as an artistic medium and tool. He created some of the first large-scale artworks using Iterative DeepDream and collaborated with Refik Anadol to create a pioneering immersive projection installation called Archive Dreaming. His latest generative portrait series, "Portraits of Imaginary People", has been shown at Ars Electronica in Linz, at the New Museum in Karuizawa (Japan) and at the Seoul Museum of Art.

Tyka co-founded the Artists and Machine Intelligence (AMI) program at Google, which supports artists working with machine learning and spurs activity in creative AI. The program brings artists and engineers together to realize projects using machine intelligence. By supporting this emerging form of artistic collaboration, AMI opens research to new ways of thinking about and working with intelligent systems.

Tyka is a science sculpture artist, biochemist, Googler, glassblower, and creator of Groovik's Cube.

Selected artworks using AI:

1. Portraits of Imaginary People

Tyka generated some of the first ever portraits using Generative Adversarial Networks (GANs), predating the GAN craze of 2018 by over a year. The series, titled "Portraits of Imaginary People", explores the latent space of human faces by training a neural network to imagine and then depict portraits of people who don’t exist.

Mike Tyka - Portraits of Imaginary People 2.jpg

I See You

Archival print, 20"x20". 2017.

By Mike Tyka.



Digital. 2017.

By Mike Tyka.

Mike Tyka - komarova6969.jpg


Digital. 2017.

By Mike Tyka.

You can view more from the series "Portraits of Imaginary People" here on Tyka’s website.

Tyka explains: “This series explores the latent space of human faces by training a neural network to imagine and then depict portraits of people who don’t exist. To do so, many thousands of photographs of faces taken from Flickr are fed to a type of machine-learning program called a Generative Adversarial Network (GAN). GANs work by using two neural networks that play an adversarial game: one (the "Generator") tries to generate increasingly convincing output, while a second (the "Discriminator") tries to learn to distinguish real photos from the artificially generated ones.”

Tyka continues: “At first, both networks are poor at their respective tasks. But as the Discriminator network starts to learn to predict fake from real, it keeps the Generator on its toes, pushing it to generate harder and more convincing examples. In order to keep up, the Generator gets better and better, and the Discriminator correspondingly has to improve its response. With time, the images generated become increasingly realistic, as both adversaries try to outwit each other. The images you see here are thus a result of the rules and internal correlations the neural networks learned from the training images.”
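The adversarial game Tyka describes can be sketched as a toy numpy example (our illustration, not Tyka's code, and deliberately tiny): the "Generator" is a one-parameter affine map that tries to mimic a 1-D Gaussian standing in for the face photos, while a logistic "Discriminator" tries to tell real samples from generated ones. Each side's update keeps the other "on its toes", and the generated distribution drifts toward the real one.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real photos" in this toy setup: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-np.clip(u, -60, 60)))

# Generator g(z) = a*z + b maps noise z ~ N(0,1) to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0

lr, steps, n = 0.02, 8000, 64
for _ in range(steps):
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b

    # Discriminator step: descend binary cross-entropy
    # (push D(real) toward 1 and D(fake) toward 0).
    s_r = sigmoid(w * x_real + c)
    s_f = sigmoid(w * x_fake + c)
    dw = np.mean(-(1 - s_r) * x_real + s_f * x_fake)
    dc = np.mean(-(1 - s_r) + s_f)
    w -= lr * dw
    c -= lr * dc

    # Generator step: non-saturating loss -log D(fake)
    # (push the fakes toward where D says "real").
    s_f = sigmoid(w * x_fake + c)
    da = np.mean(-(1 - s_f) * w * z)
    db = np.mean(-(1 - s_f) * w)
    a -= lr * da
    b -= lr * db

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print("generated mean:", fakes.mean())  # drifts toward the real mean of 4
```

A real GAN replaces both affine maps with deep networks and the scalars with images, but the alternating-update structure is the same.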

Tyka curated the standouts from his series of portraits and created a hardcover book, pictured below.

Portrait of imaginary people book.jpg

Portraits Of Imaginary People (Book)

Hardcover Book
ISBN: 978-1-926968-41-4
SIZE: 9.5” x 11.5”
FINISHING: Vellum Wrap w. Deboss & Foil Stamp
INTERIOR: 60 Pages Colour

Tyka also created a participatory exhibition from these portraits. "Us and Them" (pictured below) is a multi-modal installation that combines "Portraits of Imaginary People" with neural-net text generation and kinetic sculpture.

Us and Them.jpg

Us and Them (Installation)

Kinetic Installation

2018, Commissioned by Seoul Museum of Art

By Mike Tyka

Trained on a recently released set of two hundred thousand tweets from accounts identified as bots after the 2016 US presidential election and subsequently evicted from Twitter, this piece features 20 machine-learning-driven printers that endlessly spew AI-generated political tweets by imaginary, generated people. The descending curtain of printer paper creates a central space with two chairs, inviting two people to sit, converse and connect, despite the torrent of machine-generated political propaganda that surrounds them.

The piece examines the new world we have created: a digital attention economy in which we are constantly distracted and digitally connected, yet yearning for human connection. It is fertile ground for political manipulation: propaganda is now fully automated and targeted, using machine learning to analyse its targets while pretending to be human. Synthetic media, enabled by modern computer vision technology and neural networks, blurs the boundary between what is real and what is not. Doublespeak, such as the term "fake news", is further used to undermine what we know is real while pushing intentionally misleading information on an unprecedented scale.

"Us and Them" invites the viewer to reexamine their relationship with the machine we live inside and to seek true connection with one another. The 20 thermal receipt printers are live, continuously printing AI-generated tweets at a slow but steady pace, letting the sculpture evolve and change every day. Ultimately the torrent overwhelms and buries the central space and chairs.

2. DeepDream Series

Tyka co-authored the seminal article Inceptionism: Going Deeper Into Neural Networks, which pioneered a fascinating new way to use neural networks to process images. The DeepDream aesthetic caused a media sensation, quickly becoming its own subgenre of art.

Mike Tyka - Inceptionism - Cities - Iterative_Places205-GoogLeNet_4.jpg

Inceptionism: Cities

Neural net, digital. 2015.

By Mike Tyka.

Mike Tyka - Castles In The Sky With Diamonds.jpg

Castles In The Sky With Diamonds

Neural net, Archival print, 60"x48". 2016.

By Mike Tyka.



Neural net, Archival print, 36"x60". 2017.

By Mike Tyka.

In his article describing Inceptionism, Tyka explains more about the technology behind the process. Although we recommend reading it in its entirety, we’ve paraphrased some of it here to describe what’s happening:

“One way to visualize what goes on in image classification neural networks is to turn the network upside down – and ask it to enhance an input image in such a way as to elicit a particular interpretation... But instead of prescribing which feature we want the network to amplify, we can also let the network make that decision. We can feed the network an arbitrary image and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations…

If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere…

If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network, as seen in the following images: The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training. It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.”
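The feedback loop described above — "Whatever you see there, I want more of it!" — can be sketched in plain numpy (our illustration, not the actual GoogLeNet-based DeepDream code): a single random convolution filter stands in for a learned layer, the objective is that layer's activation energy, and gradient ascent on the input image repeatedly amplifies whatever the filter already responds to, starting from pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlate2d_valid(img, k):
    """Cross-correlation ('valid' mode): the response of one conv filter."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def activation_energy(img, k):
    """Objective: how strongly the 'layer' responds to the image."""
    a = correlate2d_valid(img, k)
    return 0.5 * np.sum(a * a)

def energy_gradient(img, k):
    """Gradient of the energy w.r.t. the image (backprop through the conv)."""
    a = correlate2d_valid(img, k)
    kh, kw = k.shape
    grad = np.zeros_like(img)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            grad[i:i + kh, j:j + kw] += a[i, j] * k
    return grad

img = rng.normal(0.0, 1.0, (32, 32))   # start from random noise, as in the quote
k = rng.normal(0.0, 1.0, (3, 3))       # stand-in for a trained filter

before = activation_energy(img, k)
for _ in range(50):
    g = energy_gradient(img, k)
    img += 0.01 * g / (np.abs(g).mean() + 1e-8)   # normalized ascent step
after = activation_energy(img, k)
print(after > before)   # the loop amplifies the feature, so energy grows
```

The iterative "zoom" variant re-crops and rescales the image between passes to produce the endless-stream effect; in a real DeepDream the filter and gradient come from a trained network via automatic differentiation rather than the hand-derived gradient above.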

Tyka on using artificial intelligence:

1. How has AI impacted your creative practice?

“It’s mostly acted as a new medium to explore both on the technological side (“What else can this do?”) and on the artistic side (“How do I control or guide this thing, to express myself”). The impact ML is and will be having in a wider societal sense has also given me much to think about, and is influencing how and what I’m trying to express artistically.”

2. What excites you most about AI as an artist?

“Making art with ML is fun because you’re dealing with a system that is sometimes unpredictable. You have to kind of get into a back and forth with the medium. You never quite know what you’re going to get.”

3. What specific AI / machine learning technologies do you use?

“I’ve used the DeepDream algorithm a lot early on but now have almost entirely switched to using GANs or other explicitly generative networks.”

Tyka’s Background:

Mike Tyka studied Biochemistry and Biotechnology at the University of Bristol. He obtained his PhD in Biophysics in 2007 and went on to work as a research fellow at the University of Washington, studying the structure and dynamics of protein molecules. In particular, he has been interested in protein folding and has written computer simulation software to better understand this fascinating process. Protein folding is the way our genetic code is interpreted from an abstract sequence of data into the functional enzymes and nanomachines that drive our bodies. Mike currently works on machine learning at Google in Seattle.

Mike became involved in creating sculpture and art in 2009 when he helped design and construct Groovik's Cube, a 35-foot-tall, functional, multi-player Rubik's Cube. Since then he has co-founded ALTSpace, a shared art studio in Seattle, and started creating sculptures of protein molecules. He hopes to capture some of the hidden beauty of these amazing nanomachines, make it accessible to the general public, and perhaps inspire those who want to learn more about the biochemistry that makes life possible.

Tyka’s exhibitions:

Tyka’s speaking engagements:

Learn more about Mike Tyka:

Explore DeepDream:

Since Tyka popularized DeepDream as a creative tool, several services now let you play with the technology for yourself. If you’re interested in experimenting with DeepDream, here is a short list of services that Tyka curated in a blog post on the topic.

1. DeepDream Generators:

2. DeepDream Derivatives: