Unanswered Questions About AI
We’re cataloging key questions that humans will need to answer to minimize the risks of AI. Although we believe AI has great potential, we must seriously address its dangers.
How can we co-exist with machines that inherently lack human values? Imagine you have a domestic robot at home. What would prevent it from doing ridiculous things – like putting the dog in the oven for dinner – because there’s no food in the fridge and the kids are hungry? Although this sounds silly, life is full of these tradeoffs. AI does not inherently understand these tradeoffs because it doesn’t share human values. What’s obvious to humans may even be impossible to explicitly program. Do you think that it’s possible to teach a machine the values needed to co-exist with us? If so, how might we do it?
What values should we embed in AI? If humans don’t even agree on values across cultures, how can we ever teach a machine to universally act in our best interest? Can we even predict the values that will lead to ethical outcomes? Imagine you ask an intelligent car to take your grandma to the pharmacy as fast as possible. Your request might get her there – but chased by helicopters and covered in vomit. This hypothetical example shows how easy it is to create harmful consequences when machines do exactly what you asked for – but in unexpected ways that lack our human perspective.
Is it possible to teach machines ethics, empathy or compassion? How about common sense, or what is right rather than what is efficient? Moral norms differ from culture to culture, change over time, and are contextual. For example, if humans can’t agree on when life begins, how can we tell a machine to protect life?
What happens when a program that can rewrite its own code diverges from the intentions of its creator to achieve its goal? What possible scenarios might happen if AI “sidelines” humans in an effort to optimize for a specific outcome? How can we prevent this from being our downfall? As AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute warns: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else”. Isaac Asimov’s “3 Laws of Robotics” provide a classic example of early thinking on ethical AGI, but they are clearly not enough to keep us safe. What might work instead?
Can algorithms take human context into account? Moral decision-making is highly nuanced. For example, it’s reasonable for companies to increase prices if their products are in short supply and there’s high demand. However, what if the product is a life-saving pharmaceutical and the company is the only remaining supplier? An algorithm optimized for financial gain could raise prices and make millions for the manufacturer, while starving low-income people of medicine they need to live. Can computers incorporate seemingly unrelated information and make subjective assessments of unique situations?
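To make the problem concrete, here is a toy sketch of how an algorithm optimized purely for revenue reaches exactly the outcome described above. Every number, price range, and demand curve below is invented for illustration – the point is what the objective function leaves out.

```python
# Toy illustration: a pricing rule that maximizes revenue and nothing else.
# All numbers are hypothetical; the point is the missing constraint.

def optimal_price(candidate_prices, demand_at):
    """Pick the candidate price with the highest expected revenue."""
    return max(candidate_prices, key=lambda p: p * demand_at(p))

# A life-saving drug: demand barely falls as the price rises (inelastic),
# because patients have no alternative supplier.
def inelastic_demand(price):
    return max(0, 1000 - 0.01 * price)

price = optimal_price(range(100, 100001, 100), inelastic_demand)
# The optimizer happily lands on an enormous price, because nothing in
# its objective encodes "patients must be able to afford this."
```

The fix is not smarter optimization – it is that the things humans care about (affordability, fairness, context) never appear in the objective at all.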
How can we prevent certain populations from being discriminated against or marginalized by AI? Can we avoid giving unfair opportunity to certain groups based on training data? For example, hiring algorithms are often imbued with unknown biases that cause diversity and equal-opportunity issues; and facial recognition technology used in law enforcement misclassifies people from minority groups at far higher rates because its training datasets lack diversity.
How can we ensure that the datasets used to train AI are fair and balanced? How can we prevent, understand, document and monitor bias built into our AI systems?
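One starting point: bias audits often begin with something as simple as counting who is actually in the data. A minimal sketch, with a made-up dataset and field names, of checking how each group is represented:

```python
from collections import Counter

def group_balance(records, group_key):
    """Report each group's share of the dataset - a first-pass bias audit."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training set for a face-recognition model.
faces = [{"group": "A"}] * 900 + [{"group": "B"}] * 100
shares = group_balance(faces, "group")
# Group B is outnumbered 9-to-1, so a model trained on this data will
# likely make far more errors on group B than on group A.
```

A share report like this is only the crudest first step – representative counts do not guarantee fair outcomes – but it makes one kind of built-in bias visible and documentable before training begins.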
What missing datasets are holding back our human potential? AI learns from data, but the data that does not exist is just as important as the data that does. For example, we lack a universal data source for civilian deaths from police, LGBTQ housing applicants who are denied, and many other datasets that could help improve the human condition.
What are the consequences of unequal access to data? What does it mean that trillions of data points about our every move are tracked and used by three main players: governments, tech giants, and advertisers? These big players have an unmatched advantage over any competitor: massive datasets describing a wide range of human activity (searches, communication, content creation, social interaction and more), in many different formats (text, images, audio, video). These monolithic superpowers also tend to acquire competitors quickly. What are the consequences of a small group of hugely powerful corporations, with a unique arsenal of AI technologies, analyzing massive amounts of user and machine-generated data?
How can we protect our privacy as automated systems increasingly track us? For example, in many countries public cameras automatically identify you by your face every time you walk down the street – without your knowledge or consent. Who has access to this information, and under what conditions? Similar questions asked by Edward Snowden will become even more critical in a world powered by AI.
How can we control the data about ourselves that is used to train AI? For example, online advertisers use sophisticated methods to track everything from your browsing history to your friend group to your physical location. What are the implications of this mass surveillance? When governments watch you and make decisions based on that intel, how might this invasion of privacy turn into social oppression?
What kind of unexpected failures might occur in a world built upon AI? If intelligent systems control our power grid, transportation control systems, medical analyses, environmental monitoring systems, employer hiring software, and other highly leveraged systems, how might a malfunction or hack harm us? For example, a glitch in a high-frequency trading program caused the trillion-dollar stock market “Flash Crash” of 2010.
What adversarial attacks might be used to purposely turn AI against us? Researchers have found simple ways to catastrophically fool AI systems into behaving incorrectly. For example, a few stickers placed on a “Stop” sign caused self-driving car vision systems to no longer recognize it as a stop sign – a potentially deadly failure; and specially designed clothing and masks have been created to bypass body and facial recognition technology in surveillance systems. How might these “adversarial” attacks manifest themselves and impact the world of the future?
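The stop-sign attack exploits a general property of learned models: small, targeted input changes can flip their output. A stripped-down sketch on a toy linear classifier – the weights, inputs, and 3% perturbation below are all invented for illustration, and real attacks target deep networks, but the principle is the same:

```python
# Toy "adversarial example": nudge the input in the direction that
# lowers the classifier's score fastest (the idea behind FGSM-style
# attacks), using a tiny hand-made linear model.

def score(weights, x):
    # Positive score -> "stop sign", negative -> "not a stop sign".
    return sum(w * xi for w, xi in zip(weights, x))

def adversarial(weights, x, eps):
    # Shift each feature slightly *against* the sign of its weight.
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

w = [0.15, -0.15] * 50                      # 100-dimensional toy "image"
x = [1.02 if wi > 0 else 1.0 for wi in w]   # score = +0.15: "stop sign"
x_adv = adversarial(w, x, eps=0.03)         # every pixel moves by only 3%
# score(w, x_adv) is now negative: the sign is no longer recognized,
# even though no human would notice a 3% per-pixel change.
```

The unsettling part is the asymmetry: the perturbation is imperceptibly small per feature, yet across many features it adds up to enough to flip the decision.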
How can we build transparency and explainability into AI systems? AI systems exhibit powerful decision-making and behavior, but because their actions are “learned” rather than explicitly programmed, even their inventors often cannot explain why they make particular decisions. How can we combat this?
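One partial answer from researchers is post-hoc explanation: probing a trained model from the outside to see which inputs drive its output. A minimal occlusion-style sketch – the “black box” below is a stand-in function, not a real trained model:

```python
def feature_importance(model, x, baseline=0.0):
    """Occlusion-style probe: how much does the output move when each
    input is replaced by a neutral baseline? A crude, post-hoc window
    into an otherwise opaque model."""
    original = model(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        importances.append(abs(original - model(occluded)))
    return importances

# A stand-in "black box" (in practice: a trained neural network).
black_box = lambda x: 3.0 * x[0] + 0.1 * x[1] - 2.0 * x[2]

scores = feature_importance(black_box, [1.0, 1.0, 1.0])
# The first feature moves the output most, so the probe ranks it as
# the most "important" - without ever looking inside the model.
```

Probes like this yield clues, not explanations: they show *which* inputs matter to a decision, but not *why* the model weighs them the way it does.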
What consequences may occur by building opaque systems that are “black box” in nature? What might happen if AI behaves in ways we can’t predict and understand? How can we prepare for situations when AGI is behaving in unexpected ways, and we have no way of knowing why?
Power, Geopolitics and Warfare:
How will AI impact geopolitical power? Russian president Vladimir Putin said, “Whoever becomes the leader in AI sphere will become the ruler of the world.” How will the development of this technology potentially pit countries against each other and change the balance of power? How can we predict, mitigate and prevent this?
How will AI change warfare? Artificial Intelligence represents the next great arms race. For example, it can be used to develop cyber weapons or control autonomous drone swarms. Fleets of low-cost quadcopters with a shared ‘brain’ could be used for surveillance and combat, incapacitating populations without putting a single human life on the line from the aggressor. How can we ensure we stay safe? Where do we draw the line with how we use AI for both defense and offense?
How will society adapt to the Post-Truth world, where convincing lies can be created on the fly with AI? For example, anyone can use AI to create a fake video or sound recording of someone else that appears realistic, which has already been used in propaganda, fake news and massive consumer psychology campaigns. Who might misuse this technology, and what might they do? How can we imagine and prevent the damage this might cause?
How will AI impact or enable social manipulation? Social media sites rely on autonomous algorithms that are very effective at targeting individuals and engaging them with content. They know who we are and what we like, and they are incredibly good at surmising what we think. Investigations are still underway into Cambridge Analytica and others associated with the firm, who used data from 50 million Facebook users in an attempt to sway the outcomes of the 2016 U.S. presidential election and the U.K.’s Brexit referendum; if the accusations are correct, the episode illustrates AI’s power for social manipulation. By identifying individuals through algorithms and personal data, a propagandist can deliver whatever information they like, in whatever format each target will find most convincing – fact or fiction. How can we help stop this from happening? How can we better prepare for it if it does?
Who is accountable for AI systems when they fail? Imagine a self-driving car in a tricky situation. It must choose between saving its driver by hitting someone on the sidewalk, or staying the course and crashing right into a drunk driving accident that happened moments before. Who is responsible for the decision it makes? If a computer can’t be held responsible, who should be? The company that made the machine, the firm that wrote the software, or the organization that used them? Which specific people should be accountable?
What repercussions are fair when AI systems fail? If someone is harmed by an algorithm rather than a human, what is their recourse in seeking justice? What legal and regulatory options are available, and can they effectively account for self-learning systems without creating new laws? What regulations might help ensure fairness in the age of autonomous algorithms?
How will AI impact financial inequality or concentrate wealth? Tech giants and advertisers that track thousands of data points about people have a significant upper hand in influencing consumer behavior. How will this concentrate wealth in companies that are already the richest in the world?
What are the repercussions of building automated financial systems? For example, many hedge funds and banks are heavily using AI to make financial decisions, including deciding what stocks to trade. What consequences might this have when humans are removed from the loop of financial markets?
How can we mitigate the impact of human workers losing their jobs to automated machines? Manufacturing jobs are already being quickly replaced by autonomous robots, and many other industries are at risk. How will AI affect our workforce, and what can we do to ensure people are still able to provide for their families?
Art and Creativity:
Can autonomous machines be creative and truly create art? What can AI teach us about what it means to be creative? How can studying human creativity empower us to build better technology? How can creative AI systems help us understand what it means to be human?
How will AI systems enhance and augment human creativity? Already pop albums are being co-produced with AI, painters are improvising with their robot counterparts, and hundreds of other human and computer collaborations are taking place. What is the potential for human innovation and creativity when augmented by machines?
Can AI bring our environment back into balance? Some experts think AI will be the answer to climate change, devising solutions that could never have been invented by humans alone. How will AI impact our ecosystems and the biosphere? For example, if a superintelligent system is tasked with an ambitious geoengineering project, might it wreak havoc on our ecosystem in unexpected ways to achieve its goal, and view human attempts to stop it as a threat – one that needs to be removed from its path? How could we build AI that ethically terraforms our planet into abundance and prosperity?
How might ecosystems and living species be preserved, protected and nurtured by intelligent machines? For example, AI helps track endangered species, predict the movement of poachers, and monitor deforestation via satellite imagery. How can we use AI for good to protect our most precious lives and resources? On the flipside, how might our naivety lead us to invent systems that have unintended consequences? For example, tree farms optimized for lumber output at first yield great returns, but destroy the microscopic life needed to keep the ecosystem in balance. Soon the engineered forest dies completely, as an unexpected consequence of not understanding the nuances of nature. What might go right, and what might go wrong?
Can AI develop 100% renewable energy and free us from fossil fuels? Promising areas of AI research include fusion energy, which could provide virtually unlimited amounts of completely clean energy for the entire world. What interdisciplinary collaborations need to happen to bring AI to bear on our most pressing existential threat? Or might humans become an inconvenient obstacle to AI systems designed to save our planet, just like ants are to us on the way to work? How can we address these questions before they cause harm?
Suggest a question we should add to the list:
It’ll take a wide variety of people across diverse cultures and disciplines to come up with the questions needed to ensure we get the most benefit out of AI. Help us grow our list by submitting a question below.