TED--Grady Booch: Don't fear superintelligent AI

When I was a kid, I was the quintessential (being a perfect example of a particular type of person or thing; typical.) nerd (if you say that someone is a nerd, you mean that they are stupid or ridiculous, especially because they wear unfashionable clothes or show too much interest in computers or science.). I think some of you were, too. And you, sir, who laughed the loudest, you probably still are.

I grew up in a small town in the dusty (if places, roads, or other things outside are dusty, they are covered with tiny bits of earth or sand, usually because it has not rained for a long time.) plains (a large area of flat dry land.) of north Texas, the son of a sheriff (an elected law officer of a county in the US.) who was the son of a pastor (a Christian priest in some Protestant churches.). Getting into trouble was not an option (an option is something that you can choose to do in preference to one or more alternatives.). And so I started reading calculus (calculus is a branch of advanced mathematics which deals with variable quantities.) books for fun.

You did, too. That led me to building a laser (a laser is a narrow beam of concentrated light produced by a special machine. It is used for cutting very hard materials, and in many technical fields such as surgery and telecommunications.) and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea.

Around that same time, Stanley Kubrick's "2001: A Space Odyssey (a series of experiences that teach you something about yourself or about life.)" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient (able to experience things through your senses.) computer designed to guide the Discovery spacecraft from the Earth to Jupiter (the largest planet of the solar system, fifth in order of distance from the sun.). HAL was also a flawed (something that is flawed has a mark, fault, or mistake in it.) character, for in the end he chose to value (to think that someone or something is important.) the mission over human life. Now, HAL was a fictional character, but nonetheless (nevertheless.) he speaks to our fears, our fears of being subjugated (to defeat a person or group and make them obey you.) by some unfeeling, artificial intelligence who is indifferent to our humanity.

I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal (when you say firmly that you will not do, give, or accept something.) to accept the limits of our bodies and our minds, we are building machines of exquisite (extremely beautiful and very delicately made.), beautiful complexity (the state of being complicated.) and grace that will extend the human experience in ways beyond our imagining.

After a career that led me from the Air Force Academy (the US Air Force Academy) to Space Command to now, I became a systems engineer, and recently I was drawn (drawn into something: pulled or attracted into it.) into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea in the mission profile (a short description that gives important details about a person, a group of people, or a place.) places (to put something somewhere, especially with care.) humanoid (having a human shape and human qualities.) robots on the surface of Mars before the humans themselves arrive, first to build facilities (rooms, equipment, or services that are provided for a particular purpose.) and later to serve as collaborative (involving two or more people working together to achieve something.) members of the science team.
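As an aside, the quoted 13-minute figure is easy to sanity-check: a radio signal travels at the speed of light, so the one-way delay is simply distance divided by c. The average Earth-Mars distance used below is my own approximate figure, not one from the talk.

```python
# Rough sanity check (my own figures, not from the talk) of the quoted
# one-way signal delay between Earth and Mars: delay = distance / c.
SPEED_OF_LIGHT_KM_S = 299_792.458   # speed of light in km/s
AVG_EARTH_MARS_KM = 225e6           # approximate average distance in km

delay_min = AVG_EARTH_MARS_KM / SPEED_OF_LIGHT_KM_S / 60
print(round(delay_min, 1))  # roughly 12.5, consistent with "on average 13 minutes"
```

The distance varies enormously over the two planets' orbits (from about 55 million to over 400 million km), which is why the talk hedges with "on average."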

Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal (likely to murder someone.) tendencies.

Let's pause (to stop speaking or doing something for a short time before starting again.) for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements (one part or feature of a whole system, plan, piece of work etc, especially one that is basic or important.) of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase (to express in a shorter, clearer, or different way what someone has said or written.) Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.

The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse (to have a conversation with someone.) with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote (to clearly show emotion, especially when you are acting.) themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical (relating to principles of what is right and wrong.) and moral (relating to the principles of right and wrong behaviour, and with the difference between good and evil.) foundation? This we must learn how to do. So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others.

The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of (a measure of something: an amount of something good or something that you want, for example success or freedom.) trepidation (a feeling of anxiety or fear about something that is going to happen.). When we first saw cars, people lamented (to express feelings of great sadness about something.) that we would see the destruction of the family. When we first saw telephones come in, people were worried it would destroy all civil conversation. At a point in time we saw the written word become pervasive (existing everywhere.), people thought we would lose our ability to memorize. These things are all true to a degree (partly; the level or amount of something.), but it's also the case that these technologies brought to us things that extended the human experience in some profound (having a strong influence or effect.) ways.

So let's take this a little further (take something a stage/step further: to take action at a more serious or higher level, especially in order to get the result you want.). I do not fear the creation of an AI like this, because it will eventually embody (to include something.) some of our values. Consider this: building a cognitive (related to the process of knowing, understanding, and learning something.) system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game — Well, I would. You would, too. I like flowers. Come on. To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus (a collection of all the writing of a particular kind or by a particular person; a large collection of written or spoken language that is used for studying the language.) of law but at the same time I am fusing (to combine different qualities, ideas, or things.) with it the sense of mercy (if someone shows mercy, they choose to forgive or to be kind to someone who they have the power to hurt or punish.) and justice that is part of that law. In scientific terms, this is what we call ground truth (verified example data used to train and evaluate a system.), and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well-trained.
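Booch's point that we "teach" rather than program cognitive systems can be illustrated with a toy sketch (entirely my own, not from the talk): a classifier that is never given rules for what a flower is, only labeled examples, and that generalizes by comparing new inputs to the average of what it has seen. The feature values and flower labels below are invented toy data; a real system would learn from thousands of examples.

```python
# Toy illustration of "we don't program them, we teach them":
# no hand-written rules, just labeled examples the system averages over.

def train(examples):
    """Learn one average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Predict the label whose learned centroid is nearest."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Invented "flowers": (petal length, petal width) pairs with labels.
examples = [
    ([1.4, 0.2], "setosa"), ([1.3, 0.2], "setosa"),
    ([4.7, 1.4], "versicolor"), ([4.5, 1.5], "versicolor"),
]
model = train(examples)
print(classify(model, [1.5, 0.3]))   # near the setosa examples
print(classify(model, [4.6, 1.4]))   # near the versicolor examples
```

The design choice mirrors the talk's argument: whatever biases or values are in the examples you choose to show ("flowers of the kinds I like") end up embodied in what the system learns.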

But, you may ask, what about rogue (a man or boy who behaves badly, but who you like in spite of this; often used humorously.) agents, some well-funded nongovernment organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is such a system requires substantial (large in amount or number.) training and subtle (not easy to notice or understand unless you pay careful attention.) training far beyond the resources of an individual. And furthermore, it's far more than just injecting an internet virus to the world, where you push a button (a small part or area of a machine that you press to make it do something.), all of a sudden it's in a million places and laptops start blowing up (if someone blows something up or if it blows up, it is destroyed by an explosion.) all over the place. Now, these kinds of substances are much larger, and we'll certainly see them coming.

Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis (a very large city that is the most important city in a country or area.)," "The Terminator (a science-fiction film about a killer machine sent from the future.)," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential (relating to the existence of humans or to existentialism.) threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable (always wanting more and more of something.) thirst (a strong desire for knowledge etc.) for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack (to make an idea or problem easier to understand by considering all the parts of it separately.), and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only in so far as (to the extent that.) HAL commanded (here: controlled; was in charge of.) all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion (the power or right to rule people or control something.) over all of our world. This is the stuff of Skynet from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen.
We are not building AIs that control the weather, that direct the tides (here: the regular rising and falling of the level of the sea.), that command us capricious (likely to change your mind suddenly or behave in an unexpected way.), chaotic (a chaotic situation is one in which everything is happening in a confused way.) humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end — don't tell Siri this — we can always unplug (if you unplug an electrical device or telephone, you pull a wire out of a socket so that it stops working.) them.

We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction (something that stops you paying attention to what you are doing.) because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes (to become or make something become smaller or less.)? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance (to improve something.) human life through cognitive healthcare? How might I use computing to help take us to the stars?

And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.
