Daniel Ambrosi

Daniel Ambrosi is recognized as one of the founding creators of the AI art movement and noted for the nuanced balance he achieves in human-AI hybrid art.

IthacaFalls.jpg

Intro:

Based near Silicon Valley in Half Moon Bay, California, Ambrosi has been exploring novel methods of visual presentation for almost 40 years, ever since entering the Program of Computer Graphics at Cornell University, where he earned a Bachelor of Architecture degree and a master's degree in 3D graphics.

In 2011, Ambrosi devised a unique form of computational photography that generates exceptionally immersive landscape images. More recently, Ambrosi's "Dreamscapes" build upon his previous experiments by adding a powerful new graphics tool to his artistic workflow: an enhanced version of “DeepDream,” a computer vision program that grew out of Google engineers’ desire to visualize the inner workings of deep learning models.

With proprietary access to a version of DeepDream expressly modified by two brilliant software engineers, Joseph Smarr (Google) and Chris Lamb (NVIDIA), to operate successfully on his giant images, Ambrosi has been able to develop large-scale artworks of exquisite sensitivity and intricacy. Ambrosi's engaging AI-augmented artworks and grand-format landscape images have been exhibited at international conferences, art fairs, and gallery shows, installed in major tech offices, featured in multiple publications, and collected by enthusiastic patrons worldwide.

Selected AI Artworks:

Ambrosi’s series, Dreamscapes, imbues giant landscape images with a stunning degree of unexpected detail through a combination of computational photography and artificial intelligence.

PtMontara.jpg

Point Montara Solstice

Montara, CA

AzaleaWalk.jpg

Azalea Walk

Central Park, New York City

The Needle - Daniel Ambrosi.jpg

The Needle

Maui, HI

IthacaFalls.jpg

Ithaca Falls

Ithaca, NY

Ambrosi on using Artificial Intelligence:

How has AI impacted your creative practice?

  • “AI has enabled me to achieve a specific artistic intent, one that preceded the arrival of the tool itself. In the process of learning how to work with this new tool, it contributed a level of intricacy, mystery, and grace to my work that would have been prohibitively difficult if not impossible for me to attain by my own hand and imagination. In that sense, my AI has been a virtuosic collaborator. And, as anyone who has had the good fortune to work closely with a virtuoso knows, my AI partner has been a true inspiration and teacher. I literally see the world differently now; my extensive exposure to DeepDream's way of interpreting my landscape images has caused me to see actual landscapes differently at times, especially in certain lighting conditions. It has enhanced my ability to see creatively.”

How would you describe your approach to AI art within the context of the AI art movement?

  • “In my view, the emerging AI art movement constitutes a spectrum of creative applications that range from works that are primarily "AI-generated" to those that are "AI-augmented" or human-AI hybrid art. I place myself far on the AI-augmented end of the spectrum because my AI (a proprietary version of Google's DeepDream customized for my purposes) was placed *in service* of a specific artistic intent that preceded the tool itself. For the better part of the last two decades I had been experimenting with computational photography techniques to find more effective ways to communicate the experiences that I was having in the presence of special places: great landscapes, cityscapes, and even indoor spaces. For me at least, truly special places go beyond a mere sight; they take my breath away, and when a scene is powerful enough to do that, I inevitably find myself waxing philosophical on the nature of perception and of reality itself. This made me realize that if I wanted others to experience, via a two-dimensional static image, what I was experiencing in the real world, I would have to find a way to create images that moved people not just visually, but also viscerally and cognitively. About 8 years ago, I got two-thirds of the way there when I devised my XYZ photography technique (capturing, blending, and stitching together a cubic array of photos: multiple shots high x multiple shots wide x multiple exposures "deep"). This development enabled me to start creating images that made people gasp and, from many accounts, feel as though they could step right into the scene. But the cognitive aspect still eluded me. That all changed when DeepDream came along and I saw the possibility of using it in a scaled-up and nuanced way to achieve my goals. Thanks to the hard work and ingenuity of my generous engineering collaborators, Joseph Smarr (Google) and Chris Lamb (NVIDIA), that possibility became a reality and it continues to serve me well to this day.”
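The "XYZ" technique Ambrosi describes amounts to an X-by-Y grid of camera positions, each bracketed at several exposures ("Z"), with every bracket exposure-blended and the resulting tiles stitched into one very large panorama. The sketch below illustrates that general pipeline with OpenCV; the grid dimensions, file naming, and function names are illustrative assumptions, not Ambrosi's actual workflow.

```python
# Hypothetical sketch of an "XYZ" capture pipeline: an X-by-Y grid of camera
# positions, each bracketed at several exposures ("Z"), exposure-fused per
# grid cell and then stitched into one large panorama. Grid size and file
# naming are assumptions for illustration only.
import cv2
import numpy as np

GRID_W, GRID_H, EXPOSURES = 4, 3, 3   # shots wide x shots high x exposures "deep"

def load_bracket(col, row):
    """Load the exposure bracket captured at one grid position."""
    return [cv2.imread(f"capture_x{col}_y{row}_e{e}.jpg") for e in range(EXPOSURES)]

def fuse_exposures(bracket):
    """Blend one bracket into a single well-exposed frame (Mertens exposure fusion)."""
    fused = cv2.createMergeMertens().process(bracket)        # float32 in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# Fuse every grid cell, then stitch the fused tiles into one panorama.
tiles = [fuse_exposures(load_bracket(c, r))
         for r in range(GRID_H) for c in range(GRID_W)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(tiles)
if status == cv2.Stitcher_OK:
    cv2.imwrite("xyz_panorama.jpg", panorama)
else:
    print(f"Stitching failed with status {status}")
```

Exposure fusion handles the "deep" axis (the bracketed exposures at each position), while the stitcher handles the wide and high axes; the result is a single multi-hundred-megapixel image of the kind Ambrosi feeds to DeepDream.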

What led you to create the physical displays of Dreamscapes that you’re known for?

  • “I believe there are only three ways to fully appreciate the immersiveness and intricacy of my work: print it big, view it in virtual reality, or create an immersive experience using projection mapping on walls, floors, and ceilings of large spaces. While I've greatly enjoyed the experience of VR proofs-of-concept with my own work at Google, Facebook, and at NYC-based AR/VR incubator, The Glimpse Group; and while I lust to see my work displayed as immersive digital projections like those created by the folks at L'Atelier des Lumières; I don't have much control over how or when my work gets exhibited in these ways. Prints, on the other hand, can be readily produced and displayed in a manner that's fully under my control. After much experimentation and research, I've found that backlit tension fabric printed using digital dye sublimation provides the optimal physical experience of my work. Aside from the fact that this medium permits extremely large seamless printing of uncompromising quality and stability, I find the fabric itself to be quite compatible with my artwork. Interestingly, the AI-augmented manipulations of my photographic imagery have a bit of a visual weave to them. The tension fabric also has a weave, albeit much finer. You'll notice when closely inspecting my printed works that it's hard to tell where the physical weave of the fabric ends and the visual weave of the AI "hallucinations" begins. This is a happy accident that I did not see coming.”

What do you consider the underlying focus of your work in Dreamscapes?

  • “The underlying focus of my work is to reverse engineer the psychology behind the human experience of special places. What I mean by ‘special places’ are precise locations in our world where something very powerful happens; namely, a reaction that goes beyond the visual to also encompass a visceral and cognitive response.”

What excites or worries you most about AI?

  • “Like many other observers, I worry about the unintended havoc a powerful AI could wreak on our society. AI does not have to be conscious to cause harm; it merely needs to be competent. As AI expert Stuart Russell points out, Facebook's AI algorithm designed to increase social media engagement and click-throughs has likely been a key contributor to the increasing polarization of our politics. I'll put my faith in the people involved with organizations like the Future of Life Institute and the recently launched Human-Centered Artificial Intelligence Institute at Stanford University to help steer AI development away from disaster. On a more positive note, what excites me most is the potential of human-AI hybrid efforts, which evidence seems to indicate yield results unmatched by either ingenious humans or powerful AI alone. These human-machine "centaurs" provide the most promising path forward in my opinion.”

  • “While I hold no illusions that this intelligence is sentient, unlike others who may have a passing interest in seeing what an off-the-shelf version of DeepDream can do to their images, I am engaged in a relationship with this intelligence that is pushing each of us to develop and mature. And while the efforts of my ingenious engineering colleagues have granted DeepDream superpowers, so has this modified version of open source software unlocked a superpower for me in that I can now create compelling works of art with a complexity and richness that I could never execute fully on my own. Interestingly, accepting this superpower has required giving up a degree of control in that I can’t really tell the software exactly what to do and, in fact, I honestly don’t even fully understand how or why it’s doing what it’s doing. This is a bargain I believe many of us will have to make in the future of our work or even daily life with the rapid advancement of artificial intelligence and deep learning systems. But to me this is an optimistic story because there is no sense in which the computer is trying to replace me, thwart my intentions, or suppress my vision. After all, it has no innate desire to create art, nor any ability to discern which of the parameter settings are most aesthetically pleasing to other humans. It’s just a tool, albeit a very powerful tool that is somewhat beyond our comprehension. Ultimately, however, I still make the decisions as to how to steer it and what to keep or discard.”

What specific AI / machine learning technologies do you use?

  • “I use a proprietary modified version of Google's original DeepDream open source code that my engineering collaborators Joseph Smarr (Google) and Chris Lamb (NVIDIA) customized to operate successfully on my giant panoramic images. As released, DeepDream was not much more than demo software; it took Joseph and Chris about 6 months of intermittent hacking on nights and weekends to keep DeepDream from crashing on my multi-hundred megapixel images. I'm eternally grateful to these two brilliant young engineers who were crazy enough to honor my request for help and too stubborn to give up once they started.”

  • “DeepDream, especially as modified by my engineering team, is incredibly powerful software with an enormous range of options from which to choose the desired dreaming style and characteristics. The key was hosting their software on a monster cloud-based compute server utilizing four separate graphics processing units (GPUs), a supercomputer in the sky, if you will. One benefit of this approach is that it enables me to run four different experiments at a time, one on each GPU. This made it fairly quick and straightforward for me to exhaustively catalog the "macro" style of all 84 layers of DeepDream’s neural network, and to fully understand the effects of tweaking the four parameter settings that can be applied to each of these styles.”
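The setup Ambrosi describes, one cloud machine with four GPUs, each running an independent DeepDream experiment while he sweeps layers and per-layer parameters, can be approximated with a simple process-per-GPU dispatcher. The sketch below is a generic illustration only; the `run_deepdream.py` command, its flags, the layer names, and the parameter values are hypothetical stand-ins, not the proprietary code written by Smarr and Lamb.

```python
# Illustrative dispatcher: run independent DeepDream experiments in parallel,
# one per GPU, sweeping layers and parameter settings. "run_deepdream.py" and
# its flags are hypothetical placeholders for whatever DeepDream runner is used.
import itertools
import multiprocessing as mp
import os
import subprocess

NUM_GPUS = 4
LAYERS = [f"layer_{i}" for i in range(84)]   # stand-ins for the 84 cataloged layers
OCTAVES = [4, 6]                             # example parameter sweep (assumed values)
STRENGTHS = [0.2, 0.4]

def worker(gpu_id, jobs):
    """Consume (layer, octaves, strength) jobs on one dedicated GPU."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    while True:
        job = jobs.get()
        if job is None:                      # sentinel: no more work for this GPU
            return
        layer, octaves, strength = job
        out = f"dream_{layer}_o{octaves}_s{strength}.jpg"
        subprocess.run(["python", "run_deepdream.py", "--input", "panorama.jpg",
                        "--layer", layer, "--octaves", str(octaves),
                        "--strength", str(strength), "--output", out],
                       env=env, check=False)

if __name__ == "__main__":
    jobs = mp.Queue()
    for layer, octaves, strength in itertools.product(LAYERS, OCTAVES, STRENGTHS):
        jobs.put((layer, octaves, strength))
    for _ in range(NUM_GPUS):
        jobs.put(None)                       # one stop sentinel per worker
    procs = [mp.Process(target=worker, args=(g, jobs)) for g in range(NUM_GPUS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Pinning each worker to one GPU via CUDA_VISIBLE_DEVICES keeps the four experiments isolated, which is what makes it practical to catalog every layer-and-parameter combination on very large source images.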

Ambrosi’s Selected Speaking Engagements:

Learn more about Daniel Ambrosi: