http://www.b-list.org/weblog/2008/dec/05/python-3000/
Let’s talk about Python 3.0
There’s an old joke, so old that I don’t even know for certain where it originated, that’s often used to explain why big corporations do things the way they do. It involves some monkeys, a cage, a banana and a fire hose.
You build a nice big room-sized cage, and in one end of it you put five monkeys. In the other end you put the banana. Then you stand by with the fire hose. Sooner or later one of the monkeys is going to go after the banana, and when it does you turn on the fire hose and spray the other monkeys with it. Replace the banana if needed, then repeat the process. Monkeys are pretty smart, so they’ll figure this out pretty quickly: “If anybody goes for the banana, the rest of us get the hose.” Soon they’ll attack any member of their group who tries to go to the banana.
Once this happens, you take one monkey out of the cage and bring in a new one. The new monkey will come in, try to make friends, then probably go for the banana. And the other monkeys, knowing what this means, will attack him to stop you from using the hose on them. Eventually the new monkey will get the message, and will even start joining in on the attack if somebody else goes for the banana. Once this happens, take another of the original monkeys out of the cage and bring in another new monkey.
After repeating this a few times, there will come a moment when none of the monkeys in the cage have ever been sprayed by the fire hose; in fact, they’ll never even have seen the hose. But they’ll attack any monkey who goes to get the banana. If the monkeys could speak English, and if you could ask them why they attack anyone who goes for the banana, their answer would almost certainly be: “Well, I don’t really know, but that’s how we’ve always done things around here.”
This is a startlingly good analogy for the way lots of corporations do things: once a particular process is entrenched (and especially after a couple rounds of employee turnover), there’s nobody left who remembers why the company does things this way. There’s nobody who stops to think about whether this is still a good way to do things, or whether it was even a good idea way back at the beginning. The process continues through nothing more than inertia, and anyone who suggests a change is likely to end up viciously attacked by monkeys.
But this is also a really good analogy for the way a lot of software works: a function or a class or a library was written, once upon a time, and maybe at the time it was a good idea. Maybe now it’s not such a good idea, and actually causes more problems than it solves, but hey, that’s the way we’ve always done things around here, and who are you to suggest a change? Should I go get the fire hose?
It’s rare that any large/established software project manages to overcome this inertia and actually take stock, figure out whether “the way we’ve always done it” is still a good way to do it, and then make changes in response. This week Python 3.0 was released, and it represents one of those rare instances: Python 3.0 was designed to clear up a lot of now-inertial legacy issues with the Python language and figure out good ways to do things now instead of unquestioningly sticking with what seemed like good ways (or, more often, the least painful ways) to do things five or ten years ago.
Of course, this is causing some people to ask whether it was a good idea; all other things being equal, it’s better to maintain compatibility than to break it, and if the break doesn’t seem to offer anything really major or impressive over the previous compatible version, then it’s natural to ask what, exactly, made this necessary. Jens Alfke has rather notably posted some thoughts along those lines, and this post is an attempt to respond and explain, as clearly as I can, why I think Python 3.0 is and will be a good thing even though it’ll create a staggering amount of work for me, my co-workers and my friends and colleagues (since I deal with two large Python 2.x codebases on a daily basis, the migration is not going to be simple or short for me).
Death by a thousand cuts
I really like Python. It’s my language of choice for new projects, my language of choice for hacking up quick things to play with and the language I get to work with every day at my job. Python fits my brain in ways that no other programming language ever has, and I agree with pretty much all of the basic design philosophy behind it. And by and large, I think writing (and reading — something just as important, which too many other languages have neglected) Python is one of the more pleasant ways to code for a living.
But.
For as long as I’ve been using Python there have been little moments of pain. None of them in isolation is enough to make Python itself painful, but taken together and occasionally stumbled over, they definitely have an impact on the experience. There’s a passage in Good Omens that does a great job of approximating the effect this sort of thing has on a programmer, when compounded over a period of years. Some demons are meeting and discussing the evil things they’ve done — tempting a priest, corrupting a politician, etc. — and one of them proudly declares that he tied up a phone system for most of an hour:
What could he tell them? That twenty thousand people got bloody furious? That you could hear the arteries clanging shut all across the city? And that then they went back and took it out on their secretaries or traffic wardens or whatever, and they took it out on other people? In all kinds of vindictive little ways which, and here was the good bit, they thought up themselves? For the rest of the day. The pass-along effects were incalculable. Thousands and thousands of souls all got a faint patina of tarnish, and you hardly had to lift a finger.
Running into the warts in Python, from time to time, has much the same result: it diminishes the joy of programming in slight and subtle ways.
Working with Unicode, for example, has most likely taken years off the collective lives of Python programmers everywhere. The Universal Feed Parser, which is arguably the best feed-parsing library on the planet, pithily glosses over some of the pain in its documentation, but Beautiful Soup, which is almost certainly the best screen-scraping library on the planet, expresses the agony of Unicode handling in a way that may not comply with corporate coding style guides:
Beautiful Soup uses a class called UnicodeDammit to detect the encodings of documents you give it and convert them to Unicode, no matter what. If you need to do this for other documents (without using Beautiful Soup to parse them), you can use UnicodeDammit by itself.
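For the curious, standalone use looks something like the sketch below. This is based on my reading of the Beautiful Soup 3 documentation, so treat the attribute names (unicode, originalEncoding) as that library's API as I understand it rather than gospel; it's Python 2.x code, since that's the world Beautiful Soup 3 lives in.

```python
# A rough sketch of using UnicodeDammit on its own (Beautiful Soup 3,
# Python 2.x). Attribute names are from the BS3 docs as I recall them.
from BeautifulSoup import UnicodeDammit

raw = "Sacr\xc3\xa9 bleu!"        # UTF-8 bytes masquerading as a str
dammit = UnicodeDammit(raw)

print(dammit.unicode)             # u'Sacr\xe9 bleu!' -- real Unicode at last
print(dammit.originalEncoding)    # 'utf-8', the encoding it sniffed out
```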
Meanwhile in the Django world, Malcolm, I’m sure, is now slightly crazier than he was before his heroic effort to make Unicode handling as painless as possible for Django applications.
Now, Unicode and character encoding in general constitute a genuinely brain-bustingly hard problem, but Python — that is, Python 2.x — did almost nothing to help with this. It had two types of strings: one for Unicode and one for strings in some particular encoding (remember: Unicode is not an encoding), and so lots of Python software which worked with text had to develop all sorts of heuristics and libraries and helpers to work out what, exactly, any given string really was and how to work with it. Even more Python software simply never bothered, which meant that you could quite easily pass a string into some module you were using and find a UnicodeDecodeError staring back at you out of the abyss.
Python 3.0 fixes this, or at least insofar as it’s possible to “fix” character encoding and Unicode handling at the language level: there’s one string type and only one string type, and that type is Unicode. No longer must you guess whether a string is either of multiple types or in any of a bewildering number of encodings (some of which may not even be supported by your copy of Python); strings are strings are strings, and that’s that. If you need to interact with systems which want sequences of bytes in some particular encoding, there’s a separate type — bytes — which lets you do that, and to go from str to bytes you have to encode the string, and to get weird non-default encodings you have to say that you’re going to use a weird non-default encoding.
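To make that concrete, here's a minimal sketch (the strings are made up, but the types and methods are stock Python 3.0):

```python
# In Python 3 there is exactly one string type, and it is Unicode.
s = "naïve café"            # str: a sequence of Unicode characters
b = s.encode("utf-8")       # bytes: produced by an explicit encoding step

print(type(s))              # <class 'str'>
print(type(b))              # <class 'bytes'>

# Going back the other way means naming the encoding the bytes are in:
assert b.decode("utf-8") == s

# Want a weird non-default encoding? You have to say so explicitly.
latin1 = s.encode("latin-1")

# And mixing the two types is an immediate TypeError, not a latent
# UnicodeDecodeError waiting for the first non-ASCII input to arrive:
try:
    s + b
except TypeError:
    print("str and bytes do not silently mix")
```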
If Python 3.0 introduced the new Unicode handling and no other changes whatsoever, I’d be willing to call it a win.
But wait, there’s more!
One of the great strengths of Python, perhaps one of its greatest strengths, is its philosophy of “batteries included”; though the Python core language is rather small, and the number of things in the built-in namespace is puny compared to some other languages, it comes with a large standard library of modules covering all sorts of things that Python programmers will need to do. But there’s a downside to this: to stay useful, good modules need to evolve over time, and sometimes need to be replaced outright, and modules which ship in the standard library can’t easily do this because of the need to preserve backwards compatibility.
As a result, the standard library for Python 2.x grew to include a strange mish-mash of oddball historical corner cases. For example:
- There were two modules for working with URLs: urllib (which came first) and urllib2 (which, obviously, came second).
- There were multiple cases of modules which started out written in pure Python but then switched to, or were supplemented by, versions written in C for speed, leading to situations like pickle versus cPickle, StringIO versus cStringIO, and so on.
- There were modules which, though related in function, were developed or added at different times or in different ways, and so never could go into logical, topical package namespaces without breaking compatibility (see: six different top-level modules related to HTTP).
- There were modules for specific tasks which had to stay, for compatibility, even though they’d been superseded by more general or more generic implementations (for example, md5 and sha, both of which are now better handled by hashlib).
Even though all these great modules were available, the fact that their organization was a bit haphazard and redundant introduced another layer of low-grade tarnish on the soul. Python 3.0 reorganized the standard library to make more sense, and even renamed a few things to fall more in line with common conventions. No longer do you first try to import a C-based version of a module and then fall back to a Python-based version. No longer do you try to import hashlib and then fall back to md5. No longer do you hunt through multiple separate but related top-level modules to find the one with the bits you need. Though small, these sorts of changes have a major long-term impact on the clarity and maintainability of any kind of significant Python codebase. And although this will involve a fair amount of migration work for large projects which use lots of libraries, it’s still a big win going forward.
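To illustrate with a sketch (not an exhaustive migration guide), here's the sort of boilerplate that simply evaporates under 3.0, along with a few of the renamed, repackaged modules:

```python
# Python 2: pick the fast C implementation if it exists, otherwise fall
# back to the pure-Python one (shown here as comments):
#
#   try:
#       import cPickle as pickle
#   except ImportError:
#       import pickle
#
#   try:
#       import hashlib              # new in 2.5
#   except ImportError:
#       import md5, sha             # the older single-algorithm modules
#
# Python 3.0: one name per job; the C speedups are used automatically.
import pickle
import hashlib
import io                           # io.StringIO / io.BytesIO replace
                                    # StringIO and cStringIO

# The scattered HTTP- and URL-related modules now live in topical packages:
from urllib.request import urlopen        # was urllib / urllib2
from urllib.parse import urlparse         # was urlparse
from http.client import HTTPConnection    # was httplib

print(hashlib.md5(b"hello").hexdigest())
```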
Similarly, less-obvious bits are changing in significant ways; for example, Python has long had two types of classes, largely for historical reasons, and the older of the two is nowhere near as useful or flexible as the newer. For the curious, a new-style class inherits directly from object, or inherits from some other class which ultimately derives from object, while a “classic” class doesn’t. The Python 2.6 documentation covers this (and goes on, elsewhere, to list some very useful things which only work with “new-style” classes), and Python 3.0 does away with the last remaining backwards-compatible support for the old-style classes, bringing with it a logical unification and general cleanup which will be extremely welcome to this Python programmer.
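A quick sketch of the unification under 3.0 (the class name here is made up); the comments describe what Python 2 used to do:

```python
# Under Python 2, a bare "class Whatever:" produced an old-style (classic)
# class, and only "class Whatever(object):" got new-style behavior such as
# properties, super(), and a sane method resolution order.
#
# Under Python 3.0 the distinction is gone: every class is new-style,
# whether or not it names object explicitly.
class Whatever:
    pass

assert issubclass(Whatever, object)   # always true in 3.0
print(type(Whatever))                 # <class 'type'>, not Python 2's
                                      # <type 'classobj'>
```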
Just the beginning
Although at the start I paid only the most cursory attention to the Python 3.0 development process, every time I looked into it in any detail I was struck by the amount of time and thought which went into the changes which have been made to the language. There are, so far as I can tell, no frivolous “we just did this because we liked it” differences between Python 2.x and Python 3.x; every breaking change seems to have been discussed to death, justified based on real-world problems and even then carefully considered and reconsidered just to see if a backwards-compatible way could prevail. Python 3.0 came out of a years-long process of development by people who were simultaneously actually using Python and taking notes on how it could be better, and it shows.
Jens states in the comments to his article that he feels like the changes in Python 3.0 are “academic”, by which I assume he means “irrelevant to real developers and practical concerns”. Based on some of the examples above (and plenty more which can be found by browsing the new documentation), I hope it’s now clear that this is simply incorrect. Python has, for as long as I’ve been using it, come under continual fire from people who felt it didn’t embody some theoretical notion of purity that they cared about — Python doesn’t make threads the One True Way to do concurrency, Python doesn’t force everything to be an explicit invocation of a method on a class, Python isn’t a pure functional programming language, etc. — and over that entire time Python has steadfastly resisted the idea of purity for purity’s sake (or, more derisively, an “academic” notion of purity). As the Zen of Python makes clear, “practicality beats purity”.
Python 3.0 is, and was developed to be, a practical language. The changes which break compatibility with Python 2.x are in many ways small and may seem insignificant at first glance, but small changes have big effects, and a cleaned-up, less-soul-tarnishing Python is, in my opinion, a goal which justifies a few breaks between major versions.
Besides, evolving the language over time in response to practical concerns is the way they’ve always done things around here.