Author: Scott Aaronson
Publisher: Cambridge University Press
Publication date: February 28th, 2013
Source: downloaded PDF version
Goodreads: 4.12 (568 ratings)
Douban: 9.3 (16 ratings)
The other thing I’m not going to do in this book is try to sell you on some favorite “interpretation” of quantum mechanics. You’re free to believe whatever interpretation your conscience dictates. (What’s my own view? Well, I agree with every interpretation to the extent it says there’s a problem, and disagree with every interpretation to the extent it claims to have solved the problem!)
See, just like we can classify religions as monotheistic and polytheistic, we can classify interpretations of quantum mechanics by where they come down on the “putting-yourself-in-coherent-superposition” issue. On the one side, we’ve got the interpretations that enthusiastically sweep the issue under the rug: Copenhagen and its Bayesian and epistemic grandchildren. In these interpretations, you’ve got your quantum system, you’ve got your measuring device, and there’s a line between them. Sure, the line can shift from one experiment to the next, but for any given experiment, it’s gotta be somewhere. In principle, you can even imagine putting other people on the quantum side, but you yourself are always on the classical side. Why? Because a quantum state is just a representation of your knowledge – and you, by definition, are a classical being.
But what if you want to apply quantum mechanics to the whole universe, including yourself? The answer, in the epistemic-type interpretations, is simply that you don’t ask that sort of question! Incidentally, that was Bohr’s all-time favorite philosophical move, his WWF piledriver: “You’re not allowed to ask such a question!”
On the other side, we’ve got the interpretations that do try in different ways to make sense of putting yourself in superposition: many-worlds, Bohmian mechanics, etc.
Now, to hardheaded problem-solvers like ourselves, this might seem like a big dispute over words – why bother? I actually agree with that: if it were just a dispute over words, then we shouldn’t bother! But as David Deutsch pointed out in the late 1970s, we can conceive of experiments that would differentiate the first type of interpretation from the second type. The simplest experiment would just be to put yourself in coherent superposition and see what happens! Or if that’s too dangerous, put someone else in coherent superposition. The point being that, if human beings were regularly put into superposition, then the whole business of drawing a line between “classical observers” and the rest of the universe would become untenable.
But alright – human brains are wet, goopy, sloppy things, and maybe we won’t be able to maintain them in coherent superposition for 500 million years. So what’s the next best thing? Well, we could try to put a computer in superposition. The more sophisticated the computer was – the more it resembled something like a brain, like ourselves – the further up we would have pushed the “line” between quantum and classical. You can see how it’s only a minuscule step from here to the idea of quantum computing.
rules of first-order logic:
The rules all concern how to construct sentences that are valid – which, informally, means “tautologically true” (true for all possible settings of the variables), but which for now we can just think of as a combinatorial property of certain strings of symbols. I’ll write logical sentences in a typewriter font in order to distinguish them from the surrounding English.
Propositional tautologies: A or not A, not(A and not A), etc., are valid.
Modus ponens: If A is valid and A implies B is valid, then B is valid.
Equality rules: x=x, x=y implies y=x, x=y and y=z implies x=z, and x=y implies f(x)=f(y) are all valid.
Change of variables: Changing variable names leaves a statement valid.
Quantifier elimination: If For all x, A(x) is valid, then A(y) is valid for any y.
Quantifier addition: If A(y) is valid where y is an unrestricted variable, then For all x, A(x) is valid.
Quantifier rules: If not(For all x, A(x)) is valid, then There exists an x such that not(A(x)) is valid. Etc.
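The informal gloss above – a propositional sentence is valid when it comes out true for all possible settings of its variables – can be checked mechanically for the propositional fragment. Here is a minimal sketch (my own illustration, not from the book; the function name `is_valid` and the lambda encoding are assumptions) that brute-forces every truth assignment:

```python
# Brute-force validity check for propositional formulas:
# a formula is valid (a tautology) iff it evaluates to True
# under every assignment of truth values to its variables.
from itertools import product

def is_valid(formula, num_vars):
    """formula: a function taking num_vars booleans, returning a boolean."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# "A or not A" and "not(A and not A)" are valid, as the rules state.
print(is_valid(lambda a: a or not a, 1))          # True
print(is_valid(lambda a: not (a and not a), 1))   # True

# "A implies B" alone is not valid: it fails when A=True, B=False.
print(is_valid(lambda a, b: (not a) or b, 2))     # False
```

Of course this exhaustive search only works for the propositional rules; once quantifiers range over infinite domains, validity can no longer be decided by enumerating cases.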
peano axioms for the nonnegative integers:
Zero exists: There exists a z such that for all x, S(x) is not equal to z. (This z is taken to be 0.)
Every integer has at most one predecessor: For all x,y, if S(x)=S(y) then x=y.
The nonnegative integers themselves are called a model for the axioms: in logic, the word “model” just means any collection of objects and functions of those objects that satisfies the axioms. Interestingly, though, just as the axioms of group theory can be satisfied by many different groups, so too the nonnegative integers are not the only model of the Peano axioms. For example, you should check that you can get another valid model by adding extra, made-up integers that aren’t reachable from 0 – integers ‘beyond infinity,’ so to speak. Though once you add one such integer, you need to add infinitely many of them, since every integer needs a successor.
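To make the two axioms concrete, here is a toy model of them (my own sketch, not from the book): 0 is the empty tuple, and the successor S wraps its argument in a one-element tuple. With this encoding, S(x) is never 0 and S is injective, so both axioms hold by construction:

```python
# A toy model of the Peano axioms: Z (zero) is the empty tuple,
# and the successor S(x) wraps x in a one-element tuple.
Z = ()

def S(x):
    return (x,)

# "Zero exists": S(x) is a nonempty tuple, so it never equals Z.
assert all(S(n) != Z for n in [Z, S(Z), S(S(Z))])

# "At most one predecessor": S is injective, so S(x) = S(y) forces x = y.
two, three = S(S(Z)), S(S(S(Z)))
assert S(two) == S(two) and S(two) != S(three)

def to_int(x):
    """Decode a numeral back to a Python int by counting nestings."""
    n = 0
    while x != Z:
        x, n = x[0], n + 1
    return n

print(to_int(three))  # 3
```

This encoding only produces the standard model – every element is reachable from Z by finitely many applications of S – which is exactly why the nonstandard models described above have to be added by hand, as whole infinite chains of extra elements.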
You can look at any of these examples – Deep Blue, the Robbins conjecture, Google, most recently Watson – and say, that’s not really AI. That’s just massive search, helped along by clever programming. Now, this kind of talk drives AI researchers up a wall. They say: if you told someone in the 1960s that in 30 years we’d be able to beat the world grandmaster at chess, and asked if that would count as AI, they’d say, of course it’s AI! But now that we know how to do it, it’s no longer AI – it’s just search. (Philosophers have a similar complaint: as soon as a branch of philosophy leads to anything concrete, it’s no longer called philosophy! It’s called math or science.)
There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.
Look, obviously the physicists had their reasons for teaching quantum mechanics that way, and it works great for a certain kind of student. But the “historical” approach also has disadvantages, which in the quantum information age are becoming increasingly apparent. For example, I’ve had experts in quantum field theory – people who’ve spent years calculating path integrals of mind-boggling complexity – ask me to explain the Bell inequality to them, or other simple conceptual things like Grover’s algorithm. I felt as if Andrew Wiles had asked me to explain the Pythagorean Theorem.
As a direct result of what I think of as the “QWERTY” approach to explaining quantum mechanics – which you can see reflected in almost every popular book and article, down to the present – the subject acquired an unnecessary reputation for being complicated and hard. Educated people memorized the slogans – “light is both a wave and a particle,” “the cat is neither dead nor alive until you look,” “you can ask about the position or the momentum, but not both,” “one particle instantly learns the spin of the other through spooky action-at-a-distance,” etc. But they also learned that they shouldn’t even try to understand such things without years of painstaking work.
The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want. This second approach is the one I’ll be following here.
So, what is quantum mechanics? Even though it was discovered by physicists, it’s not a physical theory in the same sense as electromagnetism or general relativity. In the usual “hierarchy of sciences” – with biology at the top, then chemistry, then physics, then math – quantum mechanics sits at a level between math and physics that I don’t know a good name for. Basically, quantum mechanics is the operating system that other physical theories run on as application software (with the exception of general relativity, which hasn’t yet been successfully ported to this particular OS). There’s even a word for taking a physical theory and porting it to this OS: “to quantize.”
But if quantum mechanics isn’t physics in the usual sense – if it’s not about matter, or energy, or waves, or particles – then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.
My contention in this chapter is the following: Quantum mechanics is what you would inevitably come up with if you started from probability theory, and then said, let’s try to generalize it so that the numbers we used to call “probabilities” can be negative numbers. As such, the theory could have been invented by mathematicians in the nineteenth century without any input from experiment. It wasn’t, but it could have been.