On Controlling AI 2/4 - 懂你英语 流利说 Level8 Unit1 Part2

Video 2: Can we build AI without losing control over it? II

It's as though we stand before two doors.

Behind door number one, we stop making progress in building intelligent machines.

Our computer hardware and software just stops getting better for some reason.

Now take a moment to consider why this might happen.

I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.

What could stop us from doing this?

A full-scale nuclear war?

A global pandemic?

An asteroid impact?

Justin Bieber becoming president of the United States?

The point is, something would have to destroy civilization as we know it.

You have to imagine how bad it would have to be, to prevent us from making improvements in our technology permanently, generation after generation.

Almost by definition, it would have to be the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two,

is that we continue to improve our intelligent machines year after year after year.

At a certain point, we will build machines that are smarter than we are,

and once we have machines that are smarter than we are, they will begin to improve themselves.

And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us.

But that isn't the most likely scenario.

It's not that our machines will become spontaneously malevolent.

The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants.

We don't hate them. We don't go out of our way to harm them.

In fact, sometimes we take pains not to harm them. We step over them on the sidewalk.

But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one,

we annihilate them without a qualm.

The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.


* What will happen if people continue to improve AI technology? Machines will become smarter than human beings.

* Why does Harris say there are only two possibilities moving forward? He thinks humanity will either be wiped out or continue to progress.



Now, I suspect this seems far-fetched to many of you.

I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable.

But then you must find something wrong with one of the following assumptions.

And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems.

Actually, this is a little bit more than an assumption.

We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.

And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains,

because our brains have managed it.

There's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior,

we will eventually, unless we are interrupted, build general intelligence into our machines.

It's crucial to realize that the rate of progress doesn't matter,

because any progress is enough to get us into the end zone.

We don't need Moore's law to continue.

We don't need exponential progress.

We just need to keep going.

The second assumption is that we will keep going.

We will continue to improve our intelligent machines.

And given the value of intelligence --

I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource.

So we want to do this. We have problems that we desperately need to solve.

We want to cure diseases like Alzheimer's and cancer.

We want to understand economic systems. We want to improve our climate science.

So we will do this, if we can.

The train is already out of the station, and there's no brake to pull.

Finally, the third assumption is that we likely don't stand on a peak of intelligence, or anywhere near it.

And this really is the crucial insight. This is what makes our situation so precarious,

and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived.

On almost everyone's shortlist here is John von Neumann.

I mean, the impression that von Neumann made on the people around him,

and this included the greatest mathematicians and physicists of his time, is fairly well-documented.

If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

So consider the spectrum of intelligence.

Here we have John von Neumann.

And then we have you and me.

And then we have a chicken.

Sorry, a chicken.

There's no reason for me to make this talk more depressing than it needs to be.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive,

and if we build machines that are more intelligent than we are,

they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

