Q & A: The future of artificial intelligence

Reposted from: http://people.eecs.berkeley.edu/~russell/temp/q-and-a.html
What is artificial intelligence?
It's the study of methods for making computers behave intelligently. Roughly speaking, a computer is intelligent to the extent that it does the right thing rather than the wrong thing. The right thing is whatever action is most likely to achieve the goal, or, in more technical terms, the action that maximizes expected utility. AI includes tasks such as learning, reasoning, planning, perception, language understanding, and robotics.
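The phrase "maximizes expected utility" can be made concrete in a few lines of code; the actions, probabilities, and utilities below are invented purely for illustration.

```python
# Minimal sketch of expected-utility maximization (illustrative numbers only).
# Each action leads to possible outcomes with given probabilities and utilities.
actions = {
    "take_umbrella":  [(0.3, 8), (0.7, 6)],    # (P(outcome), utility) pairs
    "leave_umbrella": [(0.3, -10), (0.7, 10)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# The "right thing" is the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)
```

Whatever the domain, the pattern is the same: score each available action by its probability-weighted utility, then pick the best.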
Common misconceptions
It's a specific technology.
For example, in the 1980s and 1990s one often saw articles confusing AI with rule-based expert systems; in the 2010s, one sees AI being confused with many-layered convolutional neural networks. That's a bit like confusing physics with steam engines. The field of AI studies the general problem of creating intelligence in machines; it is not a specific technical product arising from research on that problem.
It's a specific class of technical approaches.
For example, it's common to see authors identifying AI with symbolic or logical approaches and contrasting AI with "other approaches" such as neural nets or genetic programming. AI is not an approach, it's a problem. Any approach to the problem counts as a contribution to AI.
It's a particular community of researchers.
This relates to the preceding misconception. Some authors use the term "computational intelligence" to refer to a supposedly distinct community of researchers using approaches such as neural networks, fuzzy logic, and genetic algorithms. This is very unfortunate since it drives researchers to consider only approaches that are accepted within their community rather than approaches that make sense.
AI is "just algorithms".
This is not strictly a misconception, because algorithms (loosely defined as programs) are of course what AI systems are made of, along with all other applications of computers. However, the kinds of tasks addressed by AI systems tend to differ significantly from traditional algorithmic tasks such as sorting lists of numbers or calculating square roots.

How will AI benefit human society?
Everything that civilization offers is a product of our intelligence. AI provides a way to expand that intelligence along various dimensions, in much the same way that cranes allow us to carry hundreds of tons, aeroplanes allow us to move at hundreds of miles per hour, and telescopes allow us to see things trillions of miles away. AI systems can, if suitably designed, support much greater realization of human values.
Common misconceptions
AI is necessarily dehumanizing.
In many dystopian scenarios, AI is misused by some to control others, whether by surveillance, robotic policing, automated "justice", or an AI-supported command-and-control economy. These are certainly possible futures, but not ones the vast majority of people would support. On the other hand, AI offers greater access for humans to human knowledge and individual learning; the elimination of language barriers between peoples; and the elimination of meaningless and repetitive drudgery that reduces people to the status of, well, robots.
AI will necessarily increase inequality.
It is certainly possible that increased automation of work will concentrate income and wealth in the hands of fewer and fewer people. We do, however, have a choice about how AI is used. By facilitating collaboration and connecting producers to customers, for example, it could allow more individuals and small groups to function independently within the economy rather than depending on large corporations for jobs.

What is machine learning?
It's the branch of AI that explores ways to get computers to improve their performance based on experience.
Common misconceptions
Machine learning is a new field that has largely replaced AI.
This misconception seems to be an accidental side-effect of the recent growth of interest in machine learning and the large number of students who take machine learning classes without previous exposure to AI. Machine learning has always been a core topic in AI: Turing's 1950 paper posits learning as the most likely route to AI, and AI's most prominent early success, Samuel's checker player, was constructed using machine learning.
Machines can't learn, they can only do what their programmers tell them to do.
Clearly, the programmer can tell the machine to learn! Samuel was a terrible checkers player, but his program quickly learned to be much better than him. These days, many significant applications of AI are built by applying machine learning to large amounts of training data.
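A minimal sketch of "improving performance based on experience": the program's behavior (here, the slope it predicts with) is determined by example data rather than written out by the programmer. The data points are invented for illustration.

```python
# Least-squares fit of a line through the origin: behavior comes from data.
def fit_slope(pairs):
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    return num / den

data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # noisy examples of roughly y = 2x
slope = fit_slope(data)
assert 1.8 < slope < 2.2               # the program "learned" the relationship
```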

What is a neural network?
A neural network is a kind of computational system inspired by basic properties of biological neurons. A neural network is composed of many individual units, each of which receives input from some units and sends output to others. (The units need not have any separate physical existence; they can be thought of as components of a computer program.) The output of a unit is usually computed by taking a weighted sum of the inputs and passing the sum through some kind of simple nonlinear transformation. A key property is that the weights associated with links between units can be modified based on experience.
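A single unit of the kind just described can be sketched directly. The logistic nonlinearity and the one-step weight update below are standard textbook choices for illustration, not the only possibilities.

```python
import math

# One artificial "unit": a weighted sum of inputs passed through a
# simple nonlinear transformation (here, the logistic function).
def unit(inputs, weights, bias):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# "Modified based on experience": one gradient step that nudges the
# weights so the output moves toward a target value.
def update(inputs, weights, bias, target, lr=0.5):
    y = unit(inputs, weights, bias)
    grad = (y - target) * y * (1 - y)   # derivative of squared error w.r.t. s
    new_w = [w - lr * grad * x for w, x in zip(weights, inputs)]
    return new_w, bias - lr * grad

w, b = [0.2, -0.4], 0.1
before = unit([1.0, 0.5], w, b)
w, b = update([1.0, 0.5], w, b, target=1.0)
after = unit([1.0, 0.5], w, b)
assert after > before                   # the output moved toward the target
```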
Common misconceptions
Neural networks are a new kind of computer.
In practice, almost all neural networks are implemented on ordinary general-purpose computers. It's possible to design special-purpose machines, sometimes called neuromorphic computers, to run neural networks efficiently, but so far they haven't provided enough advantage to be worth the cost and construction delays.
Neural networks work like brains.
In fact, real neurons are much more complex than the simple units used in artificial neural networks; there are many different types of neurons; real neural connectivity can change over time; the brain includes other mechanisms, besides communication among neurons, that affect behavior; and so on.

What is deep learning?
Deep learning is a particular form of machine learning that involves training neural networks with many layers of units. It has become very popular in recent years and has led to significant improvement in tasks such as visual object recognition and speech recognition.
Common misconceptions
Deep learning is a new field that has largely replaced machine learning.
In fact, deep learning has existed in the neural network community for over 20 years. Recent advances are driven by some relatively minor improvements in algorithms and models and by the availability of large data sets and much more powerful collections of computers.

What are strong AI and weak AI?
The terms "strong AI" and "weak AI" were originally introduced by the philosopher John Searle to refer to two distinct hypotheses that he ascribed to AI researchers. Weak AI was the hypothesis that machines could be programmed in such a way as to exhibit human-level intelligent behavior. Strong AI was the hypothesis that it would be valid to ascribe conscious experience to such machines, or to describe them as actually thinking and understanding in the same sense those words are used to describe humans.
Common misconceptions
"Strong AI" means AI research aimed at general-purpose human-level AI
. This is certainly a sensible interpretation of the phrase but it's not what the phrase meant when it was first coined in 1980. Similarly, "weak AI" is taken to mean AI research aimed at specific, narrow tasks such as speech recognition or recommendation systems. (Also known as "tool AI".) Of course, no one has copyright on the terms, but reusing existing technical terms to mean something quite different seems likely to cause confusion.

What are AGI, ASI, and superintelligence?
AGI stands for artificial general intelligence, a term intended to emphasize the ambitious goal of building general-purpose intelligent systems, whose breadth of applicability is at least comparable to the range of tasks that humans can address. ASI stands for artificial superintelligence: AI that is substantially beyond human intelligence. More specifically, a superintelligent system is more capable than a human of producing high-quality decisions that take more information into account and look further ahead into the future.
Common misconceptions
Mainstream AI researchers don't care about AGI.
While there are certainly researchers in subfields such as speech recognition who care mainly about the specific goals of their subfield, and others who care primarily about finding commercial applications for existing technology, my impression is that most AI researchers in subfields such as learning, reasoning, and planning view what they are doing as contributing to the solution of a subproblem of achieving general-purpose AI.
Humans are generally intelligent.
This claim is often considered so obvious as to be hardly worth stating explicitly; but it underlies nearly all discussions of AGI. It is usually supported by noting the very wide range of tasks and jobs that humans can do. But of course, there are no human occupations that humans can't do, so it is hardly surprising that humans can do a wide range of the human occupations that exist. It's difficult to come up with a definition of breadth that is entirely independent of our human-centric concerns and biases. So we are left with the claim that humans are generally intelligent in the sense that they can do all the things that humans can do. We may yet find a meaningful way to say that humans can do a lot, but so far the question remains open.

What is Moore's law?
"Moore's law" refers to a number of related observations and predictions concerning the exponential growth in the density and/or performance of electronic circuits. A useful modern summary, which is not faithful to Moore's original statements, is that the number of operations per second, per dollar expended, doubles every N months, where N is roughly 18.
Common misconceptions
Moore's law is a law of physics.
In fact, it's an empirical observation about the progress of technology; nothing mandates that it should continue, and of course it cannot continue indefinitely. Already, increases in clock speed have reached a plateau, and current improvements in price/performance come from increasing the number of cores (processing units) on a single chip.
Machines are getting faster so quickly that coming up with better algorithms is a waste of time.
In fact, simple improvements in algorithms are often far more significant than improvements in hardware.
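A back-of-the-envelope illustration of that point: for sorting a million items, switching from a quadratic algorithm to an n log n one buys far more than buying a machine 100 times faster. The operation counts are idealized, not measured.

```python
import math

# Idealized operation counts for sorting n items.
n = 1_000_000
quadratic = n * n              # e.g., naive insertion sort, worst case
nlogn = n * math.log2(n)       # e.g., mergesort

algorithmic_speedup = quadratic / nlogn   # roughly 50,000x
hardware_speedup = 100                    # a machine 100 times faster

print(algorithmic_speedup > hardware_speedup)
```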

Does Moore's law enable us to predict the arrival of superintelligence?
No. There are many things AI systems cannot do, such as understanding complex natural-language texts; adding speed means, in many cases, getting wrong answers faster. Superintelligence requires major conceptual breakthroughs. These cannot be predicted easily and have little to do with the availability of faster machines.
Common misconceptions
Making machines more powerful means increasing their intelligence.
This is a very common theme in discussions of the future of AI, but seems to be based on a confusion between the way we use "powerful" to describe human intellects and the much simpler meaning of "powerful" in describing computers, i.e., the number of operations per second.

What is machine IQ?
There is no such thing as machine IQ. To the extent that the intellectual capabilities of an individual are highly correlated across many tasks, humans can be said to have an IQ, although many researchers dispute the utility of any one-dimensional scale. On the other hand, the capabilities of any given machine can be completely uncorrelated: a machine can beat the world champion at chess and yet be completely unable to play checkers or any other board game. A machine can win quiz competitions yet be unable to answer a simple question such as, "What is your name?"
Common misconceptions
Machine IQ is increasing according to Moore's law.
Since there is no such thing as machine IQ, it cannot be increasing; and Moore's law deals only with raw computing throughput and has no connection to the existence of algorithms capable of any particular task.

What is an intelligence explosion?
The term "intelligence explosion" was coined by I. J. Good in 1965, in the essay "Speculations Concerning the First Ultraintelligent Machine." It refers to the possibility that a sufficiently intelligent machine could redesign its own hardware and software to create a still more intelligent machine, which could repeat the process until "the intelligence of man would be left far behind."
Common misconceptions
An intelligence explosion is inevitable once machines reach human-level intelligence.
On the contrary: it's logically possible that the problem of designing generation N+1 is too hard for any generation-N machine. It's also likely that the machines we build will be superhuman in some important aspects but subhuman in others; they can certainly be more capable than humans at solving important problems such as alleviating poverty, curing cancer, etc., without being capable of groundbreaking AI research.
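The "explosion is not inevitable" point can be caricatured in a toy model: generation N designs generation N+1 only while its capability exceeds the design difficulty. Both growth rates below are invented; the only claim is that the outcome depends on their relationship.

```python
# Toy model: each generation is a bit smarter, but designing the next
# generation gets harder. Whether the process explodes or fizzles depends
# entirely on the (invented) growth rates.
def generations(capability, difficulty, gain=1.1, difficulty_growth=1.3):
    n = 0
    while capability >= difficulty and n < 100:
        capability *= gain                 # modest capability gain
        difficulty *= difficulty_growth    # next design is much harder
        n += 1
    return n

# With these rates, the process stalls after a handful of generations.
print(generations(1.0, 0.5))
```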

When will AI systems become more intelligent than people?
This is a hard one to answer for several reasons. First, the word "will" assumes that this a question of forecasting, like forecasting the weather, whereas in fact it includes an element of choice: it's unlikely ever to happen if we humans decide not to pursue it, for example. Second, the phrase "more intelligent" assumes a single linear scale of intelligence, which doesn't really exist. Already machines are much better at some tasks than humans, and of course much worse at others. Third, if we grant that there is some useful notion of "general-purpose" intelligence that can be developed in machines, then the question does begin to make sense; but it's still very hard to answer. Achieving this kind of intelligence would require significant breakthroughs in AI research and those are very hard to predict. Most AI researchers think it might happen in this century.
Common misconceptions
It will never happen.
Making predictions about scientific breakthroughs is notoriously difficult. On September 11th, 1933, Lord Rutherford, perhaps the most famous nuclear physicist of his time, told a large audience at the annual meeting of the British Association for the Advancement of Science that "Anyone who looks for a source of power in the transformation of the atoms is talking moonshine." (He said similar things on many other occasions using many formulations, all essentially saying that releasing nuclear energy was impossible.) The next morning, Leo Szilard invented the neutron-induced nuclear chain reaction, and soon thereafter patented the nuclear reactor.

What can AI systems do now?
The range of tasks where machines perform at a creditable level is much wider than it was a few years ago. It includes playing board games and card games, answering simple questions and extracting facts from newspaper articles, assembling complex objects, translating text from one language to another, recognizing speech, recognizing many kinds of objects in images, and driving a car under most "normal" driving conditions. There are many less obvious kinds of tasks carried out by AI systems, including detecting fraudulent credit-card transactions, evaluating credit applications, and bidding in complex ecommerce auctions. Many of the functions of a search engine are in fact simple forms of AI.
Common misconceptions
A task such as "playing chess" is the same task for machines as it is for humans.
This is a misleading assumption; the level of "handholding" is usually much greater for machines. Humans learn chess by hearing or reading the rules, by watching and playing. A typical chess program has no such ability; the rules are programmed into the machine directly in the form of an algorithm that generates all legal moves for a given position. The machine doesn't "know" the rules in the same sense that a human does. Some recent work on reinforcement learning is an exception: for example, DeepMind's system for playing video games learns each game completely from scratch. We don't really know what it's learning, but it seems unlikely that it's learning the rules of each game.
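The contrast can be made concrete with a far simpler game than chess: in a tic-tac-toe program, the machine's entire "knowledge" of the rules is a function that generates legal moves.

```python
# The rules, "programmed in directly": a legal move in tic-tac-toe is any
# empty square. The board is a 9-character string; "." means empty.
def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

print(legal_moves("X.O......"))   # [1, 3, 4, 5, 6, 7, 8]
```

The program never "learns" these rules in the human sense; they are simply part of its code.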
Machines do tasks the same way as humans.
Often we don't know how humans do things, but it's very unlikely that it matches the operations of a typical AI program. For example, chess programs consider possible future sequences of moves from the current board state and compare the outcomes, whereas humans often spot a possible advantage to be gained and then work backwards to find a sequence of moves to achieve it.
If a machine can do a given task X, then it can do all the tasks that a human who can do X could probably do.
See the question about machine IQ; at present machines do not have general-purpose intelligence in the same sense that humans do, so their abilities are often very narrow.

What impact will AI have on human society in the near future?
It is quite likely that some major innovations will emerge in the foreseeable future. The self-driving car is already under active development and testing, with at least one company promising first deliveries in 2016. (Other companies are being more cautious, recognizing the difficulties involved.) With improvements in computer vision and legged locomotion, robots for unstructured environments will become practical; these might include agricultural and service settings and helping humans (especially the elderly and infirm) with domestic chores. Finally, as machines improve their grasp of language, search engines and "personal assistants" on mobile phones will change from indexing web pages to understanding web pages, leading to qualitative improvements in their ability to answer questions, synthesize new information, offer advice, and connect the dots. AI may also have a substantial impact on areas of science, such as systems biology, where the complexity and volume of information challenges human abilities.
Common misconceptions
Robots are about to "take over".
See "When will AI systems become more intelligent than people?" The vast majority of progress in AI is incremental and aimed at making computers and robots more useful. The issue of maintaining human control is, nonetheless, important in the long term.

Will progress in AI and robotics take away the majority of jobs currently done by humans?
Some studies - e.g., by Frey and Osborne (2013) - suggest that as many as half of US jobs are vulnerable to automation in the near future; other authors - e.g., Brynjolfsson and McAfee (2011) - argue that the process has already begun: the slow return to full employment after the 2008 recession and the divergence between improving productivity and stagnating wages are consequences of increasing levels of automation in occupations that involve routine processes. Assuming that progress in AI and robotics continues, it seems inevitable that more occupations will be affected. This doesn't necessarily imply massive unemployment, but it may lead to a major shift in the structure of the economy and require new ideas for organizing work and remuneration.
Common misconceptions
Any work that a robot does means less work for humans.
Work is not zero-sum: a person aided by a team of robots may be much more productive and hence much more in demand; without the help of robots, the work a person could do in some particular endeavor might not be economically viable, and no work would be done by either the person or the robots. By the same token, the availability of paintbrushes and rollers leads to work for painters: if paint had to be applied tiny drop by tiny drop using the tip of a needle, we couldn't afford to employ painters to paint houses.

What are drones, autonomous weapons, and killer robots?
Drones are aircraft that are controlled remotely by humans; some carry weapons (usually missiles) that can be released by the human controller. An autonomous weapon is any device that automatically selects and "engages" a target (i.e., tries to destroy it). Current systems include the stationary, self-aiming machine guns used in the Korean DMZ and various kinds of ship-borne anti-missile systems. It is rapidly becoming technically feasible to replace the human drone controller with a fully automated system, leading to the kind of Lethal Autonomous Weapon Systems (LAWS) that are the subject of discussion at the Geneva Conference on Disarmament. The term "killer robot" is intended to cover this class of weapons, which might include wheeled or legged vehicles as well as ships, aircraft, and even artificial flying "insects".
Common misconceptions
Fully autonomous weapons are 20-30 years away.
Many articles written about the LAWS discussions in Geneva repeated this claim. Its source is unclear, but it seems to be an overestimate. The technology to deploy autonomous weapons is largely in place; the UK Ministry of Defence has stated that, for some uncluttered settings such as naval engagements, fully autonomous weapons are "probably feasible now."

Do we need to worry about killer robots running amok or taking over the world?
If autonomous weapons are deployed, they will face the same difficulties that human soldiers sometimes have in distinguishing friend from foe, civilians from combatants. There may be tactical accidents resulting in civilian deaths or the robot may be compromised by jamming and cyberattack. Because of the latter issue, some military experts predict that autonomous weapons may need to be closed systems operating without electronic communications; this may make it more difficult to override the autonomous controller if the system is behaving incorrectly. But for the foreseeable future, autonomous weapons are likely to be tactical in nature, with missions of limited scope. It is highly unlikely they would be programmed to devise plans of their own on a global scale.
Common misconceptions
We can just press the "off" switch.
An "off" switch would render any autonomous weapon vulnerable to cyberattack. Such communication channels might well be disabled in warfare. Moreover, a generally intelligent system given a mission to carry out is motivated to prevent its "off" switch from being pressed.

What is the "existential risk" from AI? Is it real?
Early warnings about the risk from AI were rather vague. I. J. Good adds to his prediction of the benefits of an intelligence explosion the proviso, "provided that the machine is docile enough to tell us how to keep it under control." One has a general sense that the presence of superintelligent entities on our planet might be cause for concern; on the other hand, we generally find that smarter machines are more useful, so it is not obvious why making them very much smarter is necessarily bad. Actually, the argument is quite simple: Suppose a superintelligent system is designed to achieve a certain objective specified by the human designer; and assume the objective is not perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources - not for their own sake, but to succeed in its assigned task.

Now, we have a problem. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. In 1960 Norbert Wiener, a pioneer of automation and control theory, wrote, "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively, we had better be quite sure that the purpose put into the machine is the purpose which we really desire." Marvin Minsky gave the example of asking a machine to calculate as many digits of pi as possible; Nick Bostrom gave the example of asking for lots of paperclips. For a human, these goals are interpreted against a background of general human objectives, which imply that covering the entire Earth with compute servers or paperclips is not a good solution. A highly capable decision maker - especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure - can have an irreversible impact on humanity. Fortunately, because the nature of the problem is now somewhat clearer, it is possible to start working on solutions.
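The mis-specification argument can be caricatured in a few lines of code. The plans and all the numbers are invented; the point is only that an optimizer pursuing the literal objective can diverge sharply from the intended one.

```python
# Toy illustration of objective mis-specification. The specified objective
# counts paperclips only; the unstated human preference for preserving
# resources is invisible to the optimizer. All quantities are invented.
plans = {
    "modest":  {"paperclips": 100,   "resources_left": 0.99},
    "extreme": {"paperclips": 10**9, "resources_left": 0.0},
}

def specified_objective(plan):       # what we asked for
    return plan["paperclips"]

def intended_objective(plan):        # what we actually want
    return plan["paperclips"] * plan["resources_left"]

chosen = max(plans, key=lambda p: specified_objective(plans[p]))
preferred = max(plans, key=lambda p: intended_objective(plans[p]))
assert chosen != preferred           # literal optimization picks the wrong plan
```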
Common misconceptions
Superintelligent machines will become spontaneously conscious or are intrinsically evil and hate humans.
Science fiction writers tend to assume one or more of these in order to set up an opposition between machines and humans. Such assumptions are unnecessary and unmotivated.
We humans develop AI systems, so why would we destroy ourselves?
Some AI "defenders" have argued that, because humans build AI systems, there is no reason to suppose we would ever build one whose goal is to destroy the human race. This misses the point of the argument, which is that deliberate evil intent on the part of the designer or agent is not a prerequisite for the existence of an existential threat; the problem arises from mis-specification of objectives.
It will never happen.
See "When will AI systems become more intelligent than people?"

Why are people worried about AI all of a sudden?
Starting in 2014, the media have regularly been reporting on concerns voiced by such well-known figures as Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates. The reports usually quote just the most doom-laden sound-bite and omit the underlying reasons for and substance of the concerns, which are similar to those described under "What is the 'existential risk' from AI?" In many cases the concerns are based on a reading of Nick Bostrom's book, Superintelligence. Another reason for the current wave of interest in the topic is the fact that progress in AI seems to be accelerating. This acceleration is probably due to a combination of factors, including a gradually solidifying theoretical foundation linking the various areas of AI into a more unified whole and a rapid increase in commercial investment in AI research as the output of academic labs reaches the level of quality at which it can be applied to solve real-world problems.
Common misconceptions
If people are worried, superintelligent AI must be right around the corner.
Few, if any, AI researchers think that superintelligent machines are right around the corner. (See "When will AI systems become more intelligent than people?") That does not imply that we should wait until they are before taking the issue seriously! If we discovered a 10-mile-wide asteroid on a trajectory to hit the Earth in 50 years, would we wave it off and say "I'll pay attention when it's 5 years away"?

How will AI progress over the next few decades?
It seems very likely that areas not requiring human-level general intelligence will reach maturity and create reliable, high-quality products, probably within the next decade. These include speech recognition, information extraction for simple factual material, visual recognition of objects and behaviors, robotic manipulation of everyday objects, and autonomous driving. Efforts to improve the quality and broaden the scope of text and video understanding systems and to make domestic robots more robust and generally useful will lead to systems exhibiting common-sense knowledge, tying together learning and performance across all these modalities. Specialized systems for acquiring and organizing scientific knowledge and managing complex hypotheses will probably have a very significant impact in molecular biology, systems biology, and medicine; we might begin to see similar impacts in the social sciences and in policy formation, particularly given the massive increase in machine-readable data about human activities and the need for machines to understand human values if they are to be reliably useful. Public and private knowledge sources -- systems that know and reason about the real world, not just repositories of data -- will become integral parts of society.

What is "value alignment"? Why does it matter?
Value alignment is the task of aligning the values (objectives) of machines with those of humans, so that the machine's optimal choice is, roughly speaking, whatever makes humans happiest. Without it, there is a non-negligible risk that superintelligent machines would be out of our control.
Common misconceptions
All we need is Asimov's laws.
Asimov's laws are essentially an IOU: they make enough sense to a human to form the basis for various story plots, but they carry almost no useful information for a robot without much further elaboration. Their basic structure as a set of rules rather than as a utility function is problematic: their lexicographic structure (e.g., the fact that any harm to humans is strictly more important than all harm to robots) means that there is no room for uncertainty or tradeoff. The robot has to leap off a cliff, destroying itself in the process, to catch a mosquito that might, at some future date, bite a human. Moreover, it must bar the door to the human's car because getting in the car exposes the human to greater risk of harm. Finally, with an approach based on maximizing human utility, there is no need for the third law (robot self-preservation), because a robot that does not preserve its own existence cannot contribute to human utility and would certainly disappoint its owner.
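The lexicographic-versus-utility contrast can be caricatured with invented numbers: a strict "human harm dominates everything" rule forces the cliff leap, while a utility function weighs the negligible risk against the robot's value.

```python
# Two options for the cliff-and-mosquito scenario (all numbers invented).
options = {
    "ignore_mosquito": {"p_human_harm": 1e-9, "harm": 0.001, "robot_survives": 1},
    "leap_off_cliff":  {"p_human_harm": 0.0,  "harm": 0.0,   "robot_survives": 0},
}

def asimov_choice(opts):
    # Lexicographic rule: minimize expected human harm, no tradeoffs allowed.
    return min(opts, key=lambda o: opts[o]["p_human_harm"] * opts[o]["harm"])

def utility_choice(opts, robot_value=10.0, human_harm_cost=1e6):
    # Utility function: human harm is weighted enormously, but finitely.
    return max(opts, key=lambda o: opts[o]["robot_survives"] * robot_value
               - opts[o]["p_human_harm"] * opts[o]["harm"] * human_harm_cost)

assert asimov_choice(options) == "leap_off_cliff"    # rules force self-destruction
assert utility_choice(options) == "ignore_mosquito"  # utility weighs the tradeoff
```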

What is the AI community doing about existential risk?
Much of the discussion about existential risk from AI has gone on outside the mainstream AI community, leading initially to mostly negative reactions from AI researchers. In 2008, a panel was formed by AAAI to study the issue. The panel's interim report noted the existence of some long-term questions but played down the notion that AI presented a risk to humanity. More recently, a conference in January 2015 in Puerto Rico, sponsored by the Future of Life Institute, led to the publication of an open letter, signed by attendees and subsequently by more than 6000 others, calling for a strong research focus on the issue and proposing a more detailed research agenda. Soon thereafter, Elon Musk made a $10M grant to support research in this area. In addition, Eric Horvitz has funded a long-term study that is expected to track the issue and make policy suggestions as needed. Finally, AAAI has formed a standing committee on Impact of AI and Ethical Issues.
Common misconceptions
It's impossible to regulate or control research.
Some have argued that there is no way to avoid negative outcomes because research advances are unstoppable and cannot be regulated. In fact, the claim itself is false: the 1975 Asilomar Conference on Recombinant DNA successfully imposed a voluntary moratorium on experiments designed to create heritable genetic modifications in humans that has lasted ever since and has become an international norm. Moreover, if research on achieving human-level AI proceeds unchecked, which may well happen, it's all the more important to begin serious research on methods for ensuring AI systems remain within our control.

What can I do to help?
If you are an AI researcher (or an economist, ethicist, political scientist, futurist, or lawyer with an interest in these issues), there are ideas and topics in the research agenda arising from the 2015 Puerto Rico conference. It is likely that workshops will be held in association with major AI conferences, the AAAI Fall and Spring Symposium series, etc. The web sites of FHI, CSER, FLI, and MIRI contain much more information.
Common misconceptions
There's nothing to be done: these things will happen and no action on our part can change the future.
Nothing could be further from the truth. We cannot forecast the future because we make the future. It's a collective choice.
