The History of AI Art

Special thanks to the BBC, MIT and Forbes for their support in the creation of this timeline of AI and Art.

AI Art Timeline by Dave Chenell.

Thinking in numbers (1763)

Artificial intelligence requires the ability to learn and make decisions, often based on incomplete information. Thomas Bayes developed a framework for reasoning about the probability of events, published posthumously in 1763, which uses math to update the probability of a hypothesis as more information becomes available. Thanks to his work, Bayesian inference would become an important approach in machine learning.
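
Bayes' rule is compact enough to sketch in a few lines of code. The numbers below (a screening test for a rare condition) are invented purely for illustration:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)

def bayes_update(prior, likelihood, evidence):
    """Return the updated (posterior) probability of a hypothesis."""
    return likelihood * prior / evidence

# Hypothetical screening test for a condition affecting 1% of people.
prior = 0.01            # P(condition)
sensitivity = 0.95      # P(positive test | condition)
false_positive = 0.05   # P(positive test | no condition)

# Total probability of seeing a positive test, P(E):
evidence = sensitivity * prior + false_positive * (1 - prior)

posterior = bayes_update(prior, sensitivity, evidence)
print(round(posterior, 3))  # 0.161: one positive test lifts the 1% prior to about 16%
```

Each new piece of evidence can feed the posterior back in as the next prior, which is exactly the "update as more information becomes available" loop that later machine learning systems exploit.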

From numbers to poetry (1842)

In 1842, English mathematician Ada Lovelace was helping Charles Babbage publish the first algorithm intended to be carried out by his Analytical Engine, the first general-purpose mechanical computer. Yet Lovelace saw opportunities beyond the math. She envisioned a machine that could do more than crunch numbers: one that could operate on anything whose relationships could be expressed symbolically. At the time, the idea that machines had applications beyond pure calculation was revolutionary. She called the idea Poetical Science: "[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations… Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent."

“Robot” enters vernacular (1921)

Czech writer Karel Čapek introduces the word "robot" in his play R.U.R. (Rossum's Universal Robots). It comes from the Czech word "robota", meaning forced labour or drudgery.

Grey Walter’s nature-inspired 'tortoise'. It was the world’s first mobile, autonomous robot. Clip from Timeshift (BBC Four, 2009).

World War 2 triggers fresh thinking (1942)

World War Two brought together scientists from many disciplines, including the emerging fields of neuroscience and computing. In Britain, mathematician Alan Turing and neurologist Grey Walter were two of the bright minds who tackled the challenges of intelligent machines. They traded ideas in an influential dining society called the Ratio Club. Walter built some of the first ever robots. Turing went on to invent the so-called Turing Test, which set the bar for an intelligent machine: a computer that could fool someone into thinking they were talking to another person.

Neurons go artificial (1943)

Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the Bulletin of Mathematical Biophysics. This influential paper, in which they discuss networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, will become the inspiration for computer-based “neural networks” (and later “deep learning”) and their popular description as “mimicking the brain.”
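
The paper's idealized neuron can be sketched as a simple threshold unit. The weights and thresholds below are illustrative choices, not taken from the paper:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) if the weighted input sum
    reaches the threshold, otherwise stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With equal weights, the threshold alone selects the logical function.
cases = [(a, b) for a in (0, 1) for b in (0, 1)]

print([mp_neuron([a, b], [1, 1], 2) for a, b in cases])  # AND: [0, 0, 0, 1]
print([mp_neuron([a, b], [1, 1], 1) for a, b in cases])  # OR:  [0, 1, 1, 1]
```

Chaining such units together is how the paper argued networks of neurons could, in principle, compute any logical function.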

Can a machine think? (1949)

Edmund Berkeley publishes Giant Brains: Or Machines That Think, in which he writes:

“Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill….These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

The debate over machine intelligence - what constitutes thinking, creativity, autonomy, and even consciousness - rages on today.

Isaac Asimov explains his Three Laws of Robotics to prevent intelligent machines from turning evil. Clip from Timeshift (BBC Four, 2009).

Science fiction steers the conversation (1950)

In 1950, “I, Robot” was published – a collection of short stories by science fiction writer Isaac Asimov.

Asimov was one of several science fiction writers who picked up the idea of machine intelligence, and imagined its future. His work was popular, thought-provoking and visionary, helping to inspire a generation of roboticists and scientists. He is best known for the Three Laws of Robotics, designed to stop our creations turning on us. But he also imagined developments that seem remarkably prescient – such as a computer capable of storing all human knowledge, of which anyone could ask any question.

Marvin Minsky founded the Artificial Intelligence Laboratory at Massachusetts Institute of Technology (MIT).

Marvin Minsky founded the Artificial Intelligence Laboratory at Massachusetts Institute of Technology (MIT).

A 'top-down' approach (1956)

The term 'artificial intelligence' was coined for a summer conference at Dartmouth College, organised by a young computer scientist, John McCarthy.

Top scientists debated how to tackle AI. Some, like influential academic Marvin Minsky, favoured a top-down approach: pre-programming a computer with the rules that govern human behaviour. Others preferred a bottom-up approach, such as neural networks that simulated brain cells and learned new behaviours. Over time Minsky's views dominated, and together with McCarthy he won substantial funding from the US government, who hoped AI might give them the upper hand in the Cold War.

Arthur Samuel, who coined the phrase “machine learning,” works with an early IBM machine and a checkerboard.

“Machine learning” coined (1959)

Arthur Samuel coins the term “machine learning,” reporting on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”

Thinking machine HAL 9000’s interviews with the BBC. From 2001: A Space Odyssey (Stanley Kubrick, MGM 1968)

2001: A Space Odyssey imagines where AI could lead (1968)

Minsky influenced science fiction too. He advised Stanley Kubrick on the film 2001: A Space Odyssey, featuring an intelligent computer, HAL 9000.

During one scene, HAL is interviewed on the BBC talking about the mission and says that he is "fool-proof and incapable of error." When a mission scientist is interviewed, he says he believes HAL may well have genuine emotions. The film mirrored some predictions made by AI researchers at the time, including Minsky, that machines were heading towards human-level intelligence very soon. It also brilliantly captured some of the public’s fears that artificial intelligence could turn nasty.

Researchers spent six years developing Shakey.

Tough problems to crack (1969)

AI was lagging far behind the lofty predictions made by advocates like Minsky – something made apparent by Shakey the Robot.

Shakey was the first general-purpose mobile robot able to make decisions about its own actions by reasoning about its surroundings. It built a spatial map of what it saw, before moving. But it was painfully slow, even in an area with few obstacles. Each time it nudged forward, Shakey would have to update its map. A moving object in its field of view could easily bewilder it, sometimes stopping it in its tracks for an hour while it planned its next move.

Composition by AARON.

The autonomous picture creator (1973)

From 1973 onwards, Harold Cohen—a painter, a professor at the University of California, San Diego, and a onetime representative of Britain at the Venice Biennale—collaborated with a program called AARON. AARON was able to make pictures autonomously for decades; even in the late 1980s Cohen could joke that he was the only artist who would ever be able to have a posthumous exhibition of new works created entirely after his own death.

Are the pictures the evolving program has made over the last four decades really works by Harold Cohen, or independent creations by AARON itself, or perhaps collaborations between the two? It is a delicate problem. AARON has never moved far out of the general stylistic idiom in which Cohen himself worked in the 1960s, when he was a successful exponent of color field abstraction. Clearly, AARON is his pupil in that respect.

The unresolved questions about machine art are, first, what its potential is and, second, whether—irrespective of the quality of the work produced—it can truly be described as “creative” or “imaginative.” These are problems, profound and fascinating, that take us deep into the mysteries of human art-making.

Ken Olsen, founder of Digital Equipment Corporation, was among the first business leaders to realise the commercial benefit of AI.

A solution for big business (1987)

After a long “AI winter” - when people began seriously doubting AI’s ability to reach anything near human levels of intelligence - AI's commercial value started to be realised, attracting new investment.

The new commercial systems were far less ambitious than early AI. Instead of trying to create a general intelligence, these ‘expert systems’ focused on much narrower tasks. That meant they only needed to be programmed with the rules of a very particular problem. The first successful commercial expert system, known as R1, began operating at Digital Equipment Corporation, helping configure orders for new computer systems. By 1986 it was saving the company an estimated $40m a year.

The paper “A statistical approach to language translation” shifted real-world language problems from rules to probabilities that could be learned.

From rules to probabilistic learning (1988)

Members of the IBM T.J. Watson Research Center publish “A statistical approach to language translation,” heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting a broader shift to “machine learning” based on statistical analysis of known examples, not comprehension and “understanding” of the task at hand (IBM’s project Candide, successfully translating between English and French, was based on 2.2 million pairs of sentences, mostly from the bilingual proceedings of the Canadian parliament).
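
The statistical idea can be caricatured in a few lines: count how often words co-occur across aligned sentence pairs, then turn the counts into probabilities. This toy is an illustration only, not IBM's actual Candide models, which used far more sophisticated alignment and millions of sentence pairs:

```python
from collections import Counter, defaultdict

# Tiny invented stand-in for a bilingual corpus of aligned sentences.
pairs = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("the blue house", "la maison bleue"),
]

# Count every English/French word pairing seen in aligned sentences.
cooccur = defaultdict(Counter)
for en, fr in pairs:
    for e in en.split():
        for f in fr.split():
            cooccur[e][f] += 1

def p_translation(e, f):
    """Estimate P(f | e) from raw co-occurrence counts."""
    counts = cooccur[e]
    return counts[f] / sum(counts.values())

print(p_translation("house", "maison"))  # 0.4 (tied with "la")
```

Raw counts leave "maison" and "la" tied as translations of "house"; the IBM models broke such ties by re-estimating the probabilities iteratively over the whole corpus, with no hand-written grammar rules anywhere.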

Rodney Brooks became director of the MIT Artificial Intelligence Laboratory, a post once held by Marvin Minsky.

Back to nature for “bottom-up” inspiration (1990)

Expert systems couldn't crack the problem of imitating biology. Then AI scientist Rodney Brooks published a new paper: Elephants Don’t Play Chess. Brooks was inspired by advances in neuroscience, which had started to explain the mysteries of human cognition. Vision, for example, needed different 'modules' in the brain to work together to recognise patterns, with no central control. Brooks argued that the top-down approach of pre-programming a computer with the rules of intelligent behaviour was wrong. He helped drive a revival of the bottom-up approach to AI, including the long unfashionable field of neural networks.

A.L.I.C.E. chatbot learns how to speak from the web (1995)

Richard Wallace develops the chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program, but with the addition of natural language sample data collection on an unprecedented scale, enabled by the advent of the Web.

Deep Blue "thinks like God" according to Garry Kasparov. From Andrew Marr’s History of the World (BBC One, 2012).

Man vs. machine: fight of the 20th century (1997)

Supporters of top-down AI still had their champions: supercomputers like Deep Blue, which in 1997 took on world chess champion Garry Kasparov.

The IBM-built machine was, on paper, far superior to Kasparov - capable of evaluating up to 200 million positions a second. But could it think strategically? The answer was a resounding yes. The supercomputer won the contest, dubbed 'the brain's last stand', with such flair that Kasparov believed a human being had to be behind the controls. Some hailed this as the moment that AI came of age. But for others, this simply showed brute force at work on a highly specialised problem with clear rules.

The Roomba vacuum has cleaned up commercially – over 10 million units have been bought across the world.

The first robot for the home (2002)

Rodney Brooks' spin-off company, iRobot, created the first commercially successful robot for the home – an autonomous vacuum cleaner called Roomba.

Cleaning the carpet was a far cry from the early AI pioneers' ambitions. But Roomba was a big achievement. Its few layers of behaviour-generating systems were far simpler than Shakey the Robot's algorithms, and were more like Grey Walter’s robots over half a century before. Despite relatively simple sensors and minimal processing power, the device had enough intelligence to reliably and efficiently clean a home. Roomba ushered in a new era of autonomous robots, focused on specific tasks.

According to Google, its speech recognition technology had an 8% word error rate as of 2015.

Starting to crack the big problems (2008)

In November 2008, a small feature appeared on the new Apple iPhone – a Google app with speech recognition.

It seemed simple. But this heralded a major breakthrough. Despite speech recognition being one of AI's key goals, decades of investment had never lifted it above 80% accuracy. Google pioneered a new approach: thousands of powerful computers, running parallel neural networks, learning to spot patterns in the vast volumes of data streaming in from Google's many users. At first it was still fairly inaccurate but, after years of learning and improvements, Google now claims it is 92% accurate.

ImageNet democratizes data (2009)

Stanford researcher Fei-Fei Li saw her colleagues across academia and the AI industry hammering away at the same concept: a better algorithm would make better decisions, regardless of the data. But she realized a limitation to this approach—the best algorithm wouldn’t work well if the data it learned from didn’t reflect the real world. Her solution: build a better dataset. “We decided we wanted to do something that was completely historically unprecedented. We’re going to map out the entire world of objects.”

The resulting dataset, ImageNet, was released as a free database of 14 million images labeled by tens of thousands of Amazon Mechanical Turk workers. AI researchers started using ImageNet to train neural networks to catalog photos and identify objects. The dataset quickly evolved into an annual competition to see which algorithms could identify objects with the lowest error rate. Many see it as the catalyst for the AI boom the world is experiencing today.

Robots are now able to learn mathematics.

Dance bots (2010)

At the same time as massive mainframes were changing the way AI was done, new technology meant smaller computers could also pack a bigger punch. These new computers enabled humanoid robots, like the NAO robot, which could do things predecessors like Shakey had found almost impossible. NAO robots used lots of the technology pioneered over the previous decade, such as learning enabled by neural networks. At Shanghai's 2010 World Expo, some of the extraordinary capabilities of these robots went on display, as 20 of them danced in perfect harmony for eight minutes.

Watson is now used in medicine. It mines vast sets of data to find facts relevant to a patient’s history and makes recommendations to doctors.

Man vs machine: fight of the 21st century (2011)

In 2011, IBM's Watson took on the human brain on the US quiz show Jeopardy! This was a far greater challenge for the machine than chess. Watson had to answer riddles and complex questions. Its makers used a myriad of AI techniques, including neural networks, and trained the machine for more than three years to recognise patterns in questions and answers. Watson trounced its opposition – the two best performers of all time on the show. The victory went viral and was hailed as a triumph for AI.

One of the neurons in an artificial neural network trained from still frames of unlabeled YouTube videos learned to detect cats.


Learning cat faces (2012)

Jeff Dean and Andrew Ng report on an experiment in which they showed a very large neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats.”

Collage by the Painting Fool, inspired by news from Afghanistan.

The painting fool (2013)

The Painting Fool is the brainchild of Simon Colton, a professor of computational creativity at Goldsmiths College, London, who has suggested that if programs are to count as creative, they’ll have to pass something different from the Turing test. He suggests that rather than simply being able to converse in a convincingly human manner, as Turing proposed, an artificially intelligent artist would have to behave in ways that were “skillful,” “appreciative,” and “imaginative.”

In one exhibition, the program scanned an article in the Guardian on the war in Afghanistan, extracted keywords such as “NATO,” “troops,” and “British,” and found images connected with them. Then it put these together to make a composite image reflecting the “content and mood” of the newspaper article.

In a Paris exhibition, the sitters for portraits faced not a human artist but a laptop, on whose screen the “painting” took place. The Painting Fool executed pictures of visitors in different moods, responding to emotional keywords derived from 10 articles read—once again—in the Guardian. If the tally of negativity was too great (always a danger with news coverage), Colton programmed the software to enter a state of despondency in which it refused to paint at all, a virtual equivalent of the artistic temperament.

In many states in America it is legal for driverless cars to take to the road.

Are machines intelligent now? (2014)

Sixty-four years after Turing published his idea of a test that would prove machine intelligence, a chatbot called Eugene Goostman finally passed it.

But very few AI experts saw this as a watershed moment. Eugene Goostman was seen as having been 'taught to the test', using tricks to fool the judges. It was other developments in 2014 that really showed how far AI had come in 70 years. From Google's billion-dollar investment in driverless cars, to Skype's launch of real-time voice translation, intelligent machines were now becoming an everyday reality that would change all of our lives.

Inceptionism art generated by Google’s Deep Dream algorithms.

Google Deep Dream is born (2015)

In June 2015, Alexander Mordvintsev and Google’s Brain research team published some fascinating results. After some training in identifying objects from visual clues, and being fed photographs of skies and random-shaped stuff, the program began generating digital images suggesting the combined imaginations of Walt Disney and Pieter Bruegel the Elder, including a hybrid “Pig-Snail,” “Camel-Bird” and “Dog-Fish.” This birthed a new form of art called “Inceptionism,” named after the Inception neural network architecture, in which a network progressively zooms in on an image and tries to “see” it within the framework of what it already knows.

Partnership on AI (2016)

The Partnership on AI was founded to conduct research, organize discussions, share insights, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advances the understanding of AI technologies, including machine perception, learning, and automated reasoning. The Partnership is led by Founding Executive Director Terah Lyons, who formerly served as Policy Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy (OSTP).

AI co-produces mainstream pop album (2017)

Taryn Southern is a pop artist working with several AI platforms to co-produce her debut album I AM AI. Her 2017 single “Break Free” is a human-AI collaboration. You can hear about Taryn’s creative process in How AI-Generated Music is Changing The Way Hits Are Made, an interview with DJ and Future of Music producer Dani Deahl. Taryn explains: “Using AI, I’m writing my lyrics and vocal melodies to the music and using that as a source of inspiration. Because I’m able to iterate with the music and give it feedback and parameters and edit as many times as I need, it still feels like it’s mine.”

AI artwork sells for $432,500, and Christie’s becomes the first auction house to offer a work of art created by an algorithm.

AI art makes $432,500 at an auction (2018)

Can a machine generate the next Picasso masterpiece on its own? This question was thrust into the limelight by artist collective Obvious, a Paris-based trio fascinated by the artistic potential of artificial intelligence. Obvious fed an algorithm 15,000 images of portraits from different time periods. The algorithm generated its own portraits, attempting to create original works that could pass as man-made. When it went under the hammer in the Prints & Multiples sale at Christie’s in October 2018, Portrait of Edmond Belamy sold for an incredible $432,500, signalling the arrival of AI art on the world auction stage.

Machines reflect our consciousness (2018)

Memo Akten uses technology to reflect our humanity and explore how we make sense of the world. Memo trained a machine learning algorithm to “see” using images that represent essential concepts in human life. To get images reflecting our shared humanity, he downloaded photos from Flickr tagged with these words: everything, world, universe, space, mountains, oceans, flowers, art, life, love, faith, ritual, god, etc. Then he programmed the machine to “imagine” new images based on all those images— creating a new world of landscapes, objects and ideas never before seen, but based on our own experience of life. The result is a breathtaking journey through the “imagination of a machine” which has been trained on concepts core to our existence. 

AIArtists.org charts the future (2019)

Technology is accelerating so fast that it’s nearly impossible to keep up. What does the future hold for machine learning and art? For artificial intelligence and humanity? For neural networks and consciousness?

These are the grand questions that today’s artists are exploring in their work. Join us as we chart a course of the future, and track all the AI artists that are pushing the field forward.