原文出处:http://www.amk.ca/python/howto/regex/
原文作者:A.M. Kuchling ([email protected])
Abstract(摘要)
This document is an introductory tutorial to using regular expressions in Python with the re module. It provides a gentler introduction than the corresponding section in the Library Reference.
本文是通过Python的 re 模块来使用正则表达式的一个入门教程,和库参考手册的对应章节相比,更为浅显易懂、循序渐进。
This document is available from http://www.amk.ca/python/howto .
本文可以从 http://www.amk.ca/python/howto 获取。
Contents(目录)
The re module was added in Python 1.5, and provides Perl-style regular expression patterns. Earlier versions of Python came with the regex module, which provides Emacs-style patterns. Emacs-style patterns are slightly less readable and don't provide as many features, so there's not much reason to use the regex module when writing new code, though you might encounter old code that uses it.
Python 自 1.5 版本起增加了 re 模块,它提供 Perl 风格的正则表达式模式。Python 1.5 之前的版本则是通过 regex 模块提供 Emacs 风格的模式。Emacs 风格模式可读性稍差些,而且功能也不强,因此编写新代码时尽量不要再使用 regex 模块,当然偶尔你还是可能在老代码里发现其踪影。
Regular expressions (or REs) are essentially a tiny, highly specialized programming language embedded inside Python and made available through the re module. Using this little language, you specify the rules for the set of possible strings that you want to match; this set might contain English sentences, or e-mail addresses, or TeX commands, or anything you like. You can then ask questions such as "Does this string match the pattern?", or "Is there a match for the pattern anywhere in this string?". You can also use REs to modify a string or to split it apart in various ways.
就其本质而言,正则表达式(或 RE)是一种小型的、高度专业化的编程语言,它内嵌在 Python 中,并通过 re 模块提供使用。使用这个小型语言,你可以为想要匹配的相应字符串集指定规则;该字符串集可能包含英文语句、e-mail 地址、TeX 命令或任何你想处理的东西。然后你可以问诸如“这个字符串匹配该模式吗?”或“在这个字符串中是否有部分匹配该模式呢?”之类的问题。你也可以使用 RE 以各种方式来修改或分割字符串。
Regular expression patterns are compiled into a series of bytecodes which are then executed by a matching engine written in C. For advanced use, it may be necessary to pay careful attention to how the engine will execute a given RE, and write the RE in a certain way in order to produce bytecode that runs faster. Optimization isn't covered in this document, because it requires that you have a good understanding of the matching engine's internals.
正则表达式模式被编译成一系列的字节码,然后由用 C 编写的匹配引擎执行。在高级用法中,也许还要仔细留意引擎是如何执行给定 RE 的,并以特定方式编写 RE,以便生成的字节码运行得更快。本文并不涉及优化,因为那要求你已充分掌握了匹配引擎的内部机制。
The regular expression language is relatively small and restricted, so not all possible string processing tasks can be done using regular expressions. There are also tasks that can be done with regular expressions, but the expressions turn out to be very complicated. In these cases, you may be better off writing Python code to do the processing; while Python code will be slower than an elaborate regular expression, it will also probably be more understandable.
正则表达式语言相对小型和受限(功能有限),因此并非所有字符串处理都能用正则表达式完成。当然也有些任务可以用正则表达式完成,不过最终表达式会变得异常复杂。碰到这些情形时,编写 Python 代码进行处理可能反而更好;尽管 Python 代码比一个精巧的正则表达式要慢些,但它更易理解。
We'll start by learning about the simplest possible regular expressions. Since regular expressions are used to operate on strings, we'll begin with the most common task: matching characters.
我们将从最简单的正则表达式学习开始。由于正则表达式常用于字符串操作,那我们就从最常见的任务:字符匹配下手。
For a detailed explanation of the computer science underlying regular expressions (deterministic and non-deterministic finite automata), you can refer to almost any textbook on writing compilers.
有关正则表达式底层的计算机科学上的详细解释(确定性和非确定性有限自动机),你可以查阅编写编译器相关的任何教科书。
Most letters and characters will simply match themselves. For example, the regular expression test will match the string "test" exactly. (You can enable a case-insensitive mode that would let this RE match "Test" or "TEST" as well; more about this later.)
大多数字母和字符一般都会和自身匹配。例如,正则表达式 test 会和字符串“test”完全匹配。(你也可以使用大小写不敏感模式,它还能让这个 RE 匹配“Test”或“TEST”;稍后会有更多解释。)
There are exceptions to this rule; some characters are special, and don't match themselves. Instead, they signal that some out-of-the-ordinary thing should be matched, or they affect other portions of the RE by repeating them. Much of this document is devoted to discussing various metacharacters and what they do.
这个规则当然会有例外;有些字符比较特殊,它们和自身并不匹配,而是会表明应和一些特殊的东西匹配,或者它们会影响到 RE 其它部分的重复次数。本文很大篇幅专门讨论了各种元字符及其作用。
Here's a complete list of the metacharacters; their meanings will be discussed in the rest of this HOWTO.
这里有一个元字符的完整列表;其含义会在本指南余下部分进行讨论。
. ^ $ * + ? { } [ ] \ | ( )
The first metacharacters we'll look at are "[" and "]". They're used for specifying a character class, which is a set of characters that you wish to match. Characters can be listed individually, or a range of characters can be indicated by giving two characters and separating them by a "-". For example, [abc] will match any of the characters "a", "b", or "c"; this is the same as [a-c], which uses a range to express the same set of characters. If you wanted to match only lowercase letters, your RE would be [a-z].
我们首先考察的元字符是 "[" 和 "]"。它们常用来指定一个字符类别,所谓字符类别就是你想匹配的一个字符集。字符可以单个列出,也可以用“-”号分隔的两个给定字符来表示一个字符区间。例如,[abc] 将匹配"a", "b", 或 "c"中的任意一个字符;也可以用区间[a-c]来表示同一字符集,和前者效果一致。如果你只想匹配小写字母,那么 RE 应写成 [a-z]。
Metacharacters are not active inside classes. For example, [akm$] will match any of the characters "a", "k", "m", or "$"; "$" is usually a metacharacter, but inside a character class it's stripped of its special nature.
元字符在类别里并不起作用。例如,[akm$]将匹配字符"a", "k", "m", 或 "$" 中的任意一个;"$"通常用作元字符,但在字符类别里,其特性被除去,恢复成普通字符。
You can match the characters not within a range by complementing the set. This is indicated by including a "^" as the first character of the class; "^" elsewhere will simply match the "^" character. For example, [^5] will match any character except "5".
你可以用补集来匹配不在区间范围内的字符。其做法是把"^"作为类别的首个字符;其它地方的"^"只会简单匹配 "^" 字符本身。例如,[^5] 将匹配除 "5" 之外的任意字符。
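To make the character-class rules above concrete, here is a minimal interactive sketch (not from the original HOWTO; it assumes Python 2 and the standard re module, like the later examples in this document):

>>> import re
>>> print re.match('[abc]', 'cat').group()
c
>>> print re.match('[a-c]', 'cat').group()
c
>>> print re.match('[^5]', '456').group()
4
>>> print re.match('[^5]', '5')
None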
Perhaps the most important metacharacter is the backslash, "\". As in Python string literals, the backslash can be followed by various characters to signal various special sequences. It's also used to escape all the metacharacters so you can still match them in patterns; for example, if you need to match a "[" or "\", you can precede them with a backslash to remove their special meaning: \[ or \\.
也许最重要的元字符是反斜杠 "\"。和 Python 的字符串字面值一样,反斜杠后面可以加不同的字符来表示各种特殊序列。它也用于转义所有的元字符,这样你就可以在模式中匹配它们本身了。举个例子,如果你需要匹配字符 "[" 或 "\",你可以在它们之前加一个反斜杠来去掉它们的特殊意义:\[ 或 \\。
Some of the special sequences beginning with "\" represent predefined sets of characters that are often useful, such as the set of digits, the set of letters, or the set of anything that isn't whitespace. The following predefined special sequences are available:
一些用 "\" 开始的特殊字符所表示的预定义字符集通常是很有用的,象数字集,字母集,或其它非空字符集。下列是可用的预设特殊字符:
\d
Matches any decimal digit; this is equivalent to the class [0-9].
匹配任何十进制数;它相当于类 [0-9]。
\D
Matches any non-digit character; this is equivalent to the class [^0-9].
匹配任何非数字字符;它相当于类 [^0-9]。
\s
Matches any whitespace character; this is equivalent to the class [ \t\n\r\f\v].
匹配任何空白字符;它相当于类 [ \t\n\r\f\v]。
\S
Matches any non-whitespace character; this is equivalent to the class [^ \t\n\r\f\v].
匹配任何非空白字符;它相当于类 [^ \t\n\r\f\v]。
\w
Matches any alphanumeric character; this is equivalent to the class [a-zA-Z0-9_].
匹配任何字母数字字符;它相当于类 [a-zA-Z0-9_]。
\W
Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_].
匹配任何非字母数字字符;它相当于类 [^a-zA-Z0-9_]。
These sequences can be included inside a character class. For example, [\s,.] is a character class that will match any whitespace character, or "," or ".".
这样特殊字符都可以包含在一个字符类中。如,[\s,.]字符类将匹配任何空白字符或","或"."。
The final metacharacter in this section is .. It matches anything except a newline character, and there's an alternate mode (re.DOTALL) where it will match even a newline. "." is often used where you want to match any character.
本节最后一个元字符是 . 。它匹配除了换行符外的任何字符;在另一种模式(re.DOTALL)下,它甚至可以匹配换行符。"." 通常用在你想匹配“任何字符”的地方。
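As an illustration (a hedged sketch added here, not part of the original text), "." normally refuses to match a newline unless re.DOTALL is passed:

>>> import re
>>> print re.match('a.c', 'abc').group()
abc
>>> print re.match('a.c', 'a\nc')
None
>>> m = re.match('a.c', 'a\nc', re.DOTALL)
>>> m.span()
(0, 3)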
Being able to match varying sets of characters is the first thing regular expressions can do that isn't already possible with the methods available on strings. However, if that was the only additional capability of regexes, they wouldn't be much of an advance. Another capability is that you can specify that portions of the RE must be repeated a certain number of times.
正则表达式第一件能做的事是能够匹配不定长的字符集,而这是其它能作用在字符串上的方法所不能做到的。不过,如果那是正则表达式唯一的附加功能的话,那么它们也就不那么优秀了。它们的另一个功能就是你可以指定正则表达式的一部分的重复次数。
The first metacharacter for repeating things that we'll look at is *. * doesn't match the literal character "*"; instead, it specifies that the previous character can be matched zero or more times, instead of exactly once.
我们讨论的第一个重复功能的元字符是 *。* 并不匹配字母字符 "*";相反,它指定前一个字符可以被匹配零次或更多次,而不是只有一次。
For example, ca*t will match "ct" (0 "a"characters), "cat" (1 "a"), "caaat" (3 "a"characters), and so forth. The RE engine has various internal limitations stemming from the size of C's int type, that will prevent it from matching over 2 billion "a" characters; you probably don't have enough memory to construct a string that large, so you shouldn't run into that limit.
举个例子,ca*t 将匹配 "ct"(0 个 "a" 字符)、"cat"(1 个 "a")、"caaat"(3 个 "a" 字符)等等。RE 引擎有各种源自 C 的 int 类型大小的内部限制,这会阻止它匹配超过 20 亿个 "a" 字符;不过你大概没有足够的内存去构造那么大的字符串,所以不会碰到那个限制。
Repetitions such as * are greedy; when repeating a RE, the matching engine will try to repeat it as many times as possible. If later portions of the pattern don't match, the matching engine will then back up and try again with few repetitions.
象 * 这样地重复是“贪婪的”;当重复一个 RE 时,匹配引擎会试着重复尽可能多的次数。如果模式的后面部分没有被匹配,匹配引擎将退回并再次尝试更小的重复。
A step-by-step example will make this more obvious. Let's consider the expression a[bcd]*b. This matches the letter "a", zero or more letters from the class [bcd], and finally ends with a "b". Now imagine matching this RE against the string "abcbd".
一步步的示例可以使它更加清晰。让我们考虑表达式 a[bcd]*b。它匹配字母 "a",零个或更多个来自类 [bcd]中的字母,最后以 "b" 结尾。现在想一想该 RE 对字符串 "abcbd" 的匹配。
Step |
Matched |
Explanation |
1 |
a |
The a in the RE matches. |
2 |
abcbd |
The engine matches [bcd]*, going as far as it can, which is to the end of the string. |
3 |
Failure |
The engine tries to match b, but the current position is at the end of the string, so it fails. |
4 |
abcb |
Back up, so that [bcd]* matches one less character. |
5 |
Failure |
Try b again, but the current position is at the last character, which is a "d". |
6 |
abc |
Back up again, so that [bcd]* is only matching "bc". |
7 |
abcb |
Try b again. This time but the character at the current position is "b", so it succeeds. |
The end of the RE has now been reached, and it has matched "abcb". This demonstrates how the matching engine goes as far as it can at first, and if no match is found it will then progressively back up and retry the rest of the RE again and again. It will back up until it has tried zero matches for [bcd]*, and if that subsequently fails, the engine will conclude that the string doesn't match the RE at all.
现在已经到达 RE 的末尾,它匹配到了 "abcb"。这说明匹配引擎一开始会尽其所能进行匹配,如果没有找到匹配,它就会逐步回退并一次次重试 RE 剩下的部分。它会一直回退到尝试让 [bcd]* 匹配零次为止,如果随后仍然失败,引擎就会断定该字符串根本无法与这个 RE 匹配。
Another repeating metacharacter is +, which matches one or more times. Pay careful attention to the difference between * and +; * matches zero or more times, so whatever's being repeated may not be present at all, while + requires at least one occurrence. To use a similar example, ca+t will match "cat" (1 "a"), "caaat" (3 "a"'s), but won't match "ct".
另一个重复元字符是 +,表示匹配一或更多次。请注意 * 和 + 之间的不同;* 匹配零或更多次,所以根本就可以不出现,而 + 则要求至少出现一次。用同一个例子,ca+t 就可以匹配 "cat" (1 个 "a"), "caaat" (3 个 "a"), 但不能匹配 "ct"。
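The difference between * and + can be checked directly in the interpreter; this short sketch is an addition to the original text:

>>> import re
>>> print re.match('ca*t', 'ct').group()
ct
>>> print re.match('ca+t', 'ct')
None
>>> print re.match('ca+t', 'caaat').group()
caaat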
There are two more repeating qualifiers. The question mark character, ?, matches either once or zero times; you can think of it as marking something as being optional. For example, home-?brew matches either "homebrew" or "home-brew".
还有另外两个重复限定符。问号 ? 匹配一次或零次;你可以认为它把某个东西标记成可选的。例如:home-?brew 匹配 "homebrew" 或 "home-brew"。
The most complicated repeated qualifier is {m,n}, where m and n are decimal integers. This qualifier means there must be at least m repetitions, and at most n. For example, a/{1,3}b will match "a/b", "a//b", and "a///b". It won't match "ab", which has no slashes, or "ab", which has four.
最复杂的重复限定符是 {m,n},其中 m 和 n 是十进制整数。该限定符的意思是至少有 m 个重复,至多到 n 个重复。举个例子,a/{1,3}b 将匹配 "a/b","a//b" 和 "a///b"。它不能匹配 "ab" 因为没有斜杠,也不能匹配 "ab" ,因为有四个。
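A quick check of the {m,n} qualifier in the interpreter (an added sketch, not from the original):

>>> import re
>>> print re.match('a/{1,3}b', 'a//b').group()
a//b
>>> print re.match('a/{1,3}b', 'ab')
None
>>> print re.match('a/{1,3}b', 'a////b')
None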
You can omit either m or n; in that case, a reasonable value is assumed for the missing value. Omitting m is interpreted as a lower limit of 0, while omitting n results in an upper bound of infinity -- actually, the 2 billion limit mentioned earlier, but that might as well be infinity.
你可以省略 m 或 n;这时会为缺失的值假定一个合理的默认值。省略 m 表示下界为 0,而省略 n 则表示上界为无穷大,实际上就是前面提到的 20 亿,但这和无穷大也没什么区别。
Readers of a reductionist bent may notice that the three other qualifiers can all be expressed using this notation. {0,} is the same as *, {1,} is equivalent to +, and {0,1} is the same as ?. It's better to use *, +, or ? when you can, simply because they're shorter and easier to read.
喜欢化繁为简的读者也许会注意到,其它三个限定符都可以用这种方式来表示:{0,} 等同于 *,{1,} 等同于 +,而 {0,1} 则与 ? 相同。不过如果可以的话,最好还是使用 *、+ 或 ?,原因很简单,它们更短也更容易读懂。
Now that we've looked at some simple regular expressions, how do we actually use them in Python? The re module provides an interface to the regular expression engine, allowing you to compile REs into objects and then perform matches with them.
现在我们已经看了一些简单的正则表达式,那么我们实际在 Python 中是如何使用它们的呢? re 模块提供了一个正则表达式引擎的接口,可以让你将 REs 编译成对象并用它们来进行匹配。
Regular expressions are compiled into RegexObject instances, which have methods for various operations such as searching for pattern matches or performing string substitutions.
正则表达式被编译成 RegexObject 实例,可以为不同的操作提供方法,如模式匹配搜索或字符串替换。
>>> import re
>>> p = re.compile('ab*')
>>> print p
<re.RegexObject instance at 80b4150>
re.compile() also accepts an optional flags argument, used to enable various special features and syntax variations. We'll go over the available settings later, but for now a single example will do:
re.compile() 也接受可选的标志参数,常用来实现不同的特殊功能和语法变更。我们稍后将查看所有可用的设置,但现在只举一个例子:
>>> p = re.compile('ab*', re.IGNORECASE)
The RE is passed to re.compile() as a string. REs are handled as strings because regular expressions aren't part of the core Python language, and no special syntax was created for expressing them. (There are applications that don't need REs at all, so there's no need to bloat the language specification by including them.) Instead, the re module is simply a C extension module included with Python, just like the socket or zlib module.
RE 被做为一个字符串发送给 re.compile()。REs 被处理成字符串是因为正则表达式不是 Python 语言的核心部分,也没有为它创建特定的语法。(应用程序根本就不需要 REs,因此没必要包含它们去使语言说明变得臃肿不堪。)而 re 模块则只是以一个 C 扩展模块的形式来被 Python 包含,就象 socket 或 zlib 模块一样。
Putting REs in strings keeps the Python language simpler, but has one disadvantage which is the topic of the next section.
把 RE 放在字符串里保持了 Python 语言的简洁,但也带来了一个缺点,这正是下一节的主题。
As stated earlier, regular expressions use the backslash character ("\") to indicate special forms or to allow special characters to be used without invoking their special meaning. This conflicts with Python's usage of the same character for the same purpose in string literals.
如前所述,正则表达式使用反斜杠字符("\")来表示特殊形式,或是允许使用特殊字符而不触发其特殊含义。这与 Python 在字符串字面值中把同一个字符用于同样目的的做法产生了冲突。
Let's say you want to write a RE that matches the string "\section", which might be found in a LATEX file. To figure out what to write in the program code, start with the desired string to be matched. Next, you must escape any backslashes and other metacharacters by preceding them with a backslash, resulting in the string "\\section". The resulting string that must be passed to re.compile() must be \\section. However, to express this as a Python string literal, both backslashes must be escaped again.
让我们举例说明:假设你想写一个 RE 来匹配字符串 "\section",它可能出现在一个 LATEX 文件中。为了确定在程序代码里该写什么,先从想要匹配的字符串开始。接下来,你必须在所有反斜杠和其它元字符前加上反斜杠进行转义,得到字符串 "\\section"。必须传给 re.compile() 的正是 \\section 这个字符串。然而,要把它表示成 Python 的字符串字面值,这两个反斜杠都还必须再转义一次。
| Characters(字符) | Stage(阶段) |
|---|---|
| \section | Text string to be matched(要匹配的字符串) |
| \\section | Escaped backslash for re.compile()(为 re.compile 转义反斜杠) |
| "\\\\section" | Escaped backslashes for a string literal(为字符串字面值转义反斜杠) |
In short, to match a literal backslash, one has to write '\\\\' as the RE string, because the regular expression must be "\\", and each backslash must be expressed as "\\" inside a regular Python string literal. In REs that feature backslashes repeatedly, this leads to lots of repeated backslashes and makes the resulting strings difficult to understand.
简单地说,为了匹配一个反斜杠字面值,必须把 '\\\\' 写成 RE 字符串,因为正则表达式必须是 "\\",而每个反斜杠在常规的 Python 字符串字面值里又必须写成 "\\"。在反复用到反斜杠的 RE 中,这会导致大量重复的反斜杠,使得到的字符串非常难懂。
The solution is to use Python's raw string notation for regular expressions; backslashes are not handled in any special way in a string literal prefixed with "r", so r"\n" is a two-character string containing "\" and "n", while "\n" is a one-character string containing a newline. Frequently regular expressions will be expressed in Python code using this raw string notation.
解决的办法就是对正则表达式使用 Python 的 raw 字符串表示法;在以 "r" 为前缀的字符串字面值中,反斜杠不会被做任何特殊处理,所以 r"\n" 是包含 "\" 和 "n" 的两个字符,而 "\n" 则是一个字符,表示一个换行。正则表达式在 Python 代码中通常都用这种 raw 字符串来书写。
| Regular String(常规字符串) | Raw string(Raw 字符串) |
|---|---|
| "ab*" | r"ab*" |
| "\\\\section" | r"\\section" |
| "\\w+\\s+\\1" | r"\w+\s+\1" |
Once you have an object representing a compiled regular expression, what do you do with it? RegexObject instances have several methods and attributes. Only the most significant ones will be covered here; consult the Library Reference for a complete listing.
一旦你有了一个表示已编译正则表达式的对象,要用它做什么呢?RegexObject 实例有一些方法和属性。这里只介绍最重要的几个;完整的列表请查阅 Library Reference。
| Method/Attribute(方法/属性) | Purpose(作用) |
|---|---|
| match() | Determine if the RE matches at the beginning of the string. |
| search() | Scan through a string, looking for any location where this RE matches. |
| findall() | Find all substrings where the RE matches, and returns them as a list. |
| finditer() | Find all substrings where the RE matches, and returns them as an iterator. |
match() and search() return None if no match can be found. If they're successful, a MatchObject instance is returned, containing information about the match: where it starts and ends, the substring it matched, and more.
如果没有匹配到的话,match() 和 search() 将返回 None。如果成功的话,就会返回一个 MatchObject 实例,其中有这次匹配的信息:它是从哪里开始和结束,它所匹配的子串等等。
You can learn about this by interactively experimenting with the re module. If you have Tkinter available, you may also want to look at Tools/scripts/redemo.py, a demonstration program included with the Python distribution. It allows you to enter REs and strings, and displays whether the RE matches or fails. redemo.py can be quite useful when trying to debug a complicated RE. Phil Schwartz's Kodos is also an interactive tool for developing and testing RE patterns. This HOWTO will use the standard Python interpreter for its examples.
你可以通过交互方式试验 re 模块来学习它。如果你装有 Tkinter,还可以看看 Tools/scripts/redemo.py,这是 Python 发行版里自带的一个演示程序。它允许你输入 RE 和字符串,并显示 RE 匹配成功还是失败。在调试复杂的 RE 时,redemo.py 会非常有用。Phil Schwartz 的 Kodos 也是一个用于开发和测试 RE 模式的交互式工具。本指南的示例将使用标准的 Python 解释器。
First, run the Python interpreter, import the re module, and compile a RE:
首先,运行 Python 解释器,导入 re 模块并编译一个 RE:
Python 2.2.2 (#1, Feb 10 2003, 12:57:01)
>>> import re
>>> p = re.compile('[a-z]+')
>>> p
<_sre.SRE_Pattern object at 80c3c28>
Now, you can try matching various strings against the RE [a-z]+. An empty string shouldn't match at all, since + means 'one or more repetitions'. match() should return None in this case, which will cause the interpreter to print no output. You can explicitly print the result of match() to make this clear.
现在,你可以试着用 RE [a-z]+ 去匹配不同的字符串。空字符串完全不能匹配,因为 + 的意思是“一次或多次重复”。在这种情况下 match() 将返回 None,这会使解释器没有任何输出。你可以显式地打印 match() 的结果来弄清这一点。
>>> p.match("")
>>> print p.match("")
None
Now, let's try it on a string that it should match, such as "tempo". In this case, match() will return a MatchObject, so you should store the result in a variable for later use.
现在,让我们试着用它来匹配一个字符串,如 "tempo"。这时,match() 将返回一个 MatchObject。因此你可以将结果保存在变量里以便后面使用。
>>> m = p.match('tempo')
>>> print m
<_sre.SRE_Match object at 80c4f68>
Now you can query the MatchObject for information about the matching string. MatchObject instances also have several methods and attributes; the most important ones are:
现在你可以查询 MatchObject 关于匹配字符串的相关信息了。MatchObject 实例也有几个方法和属性;最重要的那些如下所示:
| Method/Attribute(方法/属性) | Purpose(作用) |
|---|---|
| group() | Return the string matched by the RE |
| start() | Return the starting position of the match |
| end() | Return the ending position of the match |
| span() | Return a tuple containing the (start, end) positions of the match |
Trying these methods will soon clarify their meaning:
试试这些方法不久就会清楚它们的作用了:
>>> m.group()
'tempo'
>>> m.start(), m.end()
(0, 5)
>>> m.span()
(0, 5)
group() returns the substring that was matched by the RE. start() and end() return the starting and ending index of the match. span() returns both start and end indexes in a single tuple. Since the match method only checks if the RE matches at the start of a string, start() will always be zero. However, the search method of RegexObject instances scans through the string, so the match may not start at zero in that case.
group() 返回 RE 所匹配的子串。start() 和 end() 返回匹配的起始和结束索引。span() 则用一个元组同时返回起始和结束索引。由于 match 方法只检查 RE 是否在字符串的起始处匹配,所以 start() 总是为零。然而,RegexObject 实例的 search 方法会扫描整个字符串,这种情况下匹配的起始位置就不一定是零了。
>>> print p.match('::: message')
None
>>> m = p.search('::: message') ; print m
<re.MatchObject instance at 80c9650>
>>> m.group()
'message'
>>> m.span()
(4, 11)
In actual programs, the most common style is to store the MatchObject in a variable, and then check if it was None. This usually looks like:
在实际程序中,最常见的作法是将 MatchObject 保存在一个变量里,然后检查它是否为 None,通常如下所示:
p = re.compile( ... )
m = p.match( 'string goes here' )
if m:
    print 'Match found: ', m.group()
else:
    print 'No match'
Two RegexObject methods return all of the matches for a pattern. findall() returns a list of matching strings:
两个 RegexObject 方法返回所有匹配模式的子串。findall()返回一个匹配字符串列表:
>>> p = re.compile('\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']
findall() has to create the entire list before it can be returned as the result. In Python 2.2, the finditer() method is also available, returning a sequence of MatchObject instances as an iterator.
findall() 必须先创建出完整的列表,才能把它作为结果返回。在 Python 2.2 中,还可以使用 finditer() 方法,它以迭代器的形式返回一系列 MatchObject 实例。
>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
>>> iterator
<callable-iterator object at 0x401833ac>
>>> for match in iterator:
...     print match.span()
...
(0, 2)
(22, 24)
(29, 31)
You don't have to produce a RegexObject and call its methods; the re module also provides top-level functions called match(), search(), sub(), and so forth. These functions take the same arguments as the corresponding RegexObject method, with the RE string added as the first argument, and still return either None or a MatchObject instance.
你不一定要产生一个 RegexObject 对象然后再调用它的方法;re 模块也提供了顶级函数调用如 match()、search()、sub() 等等。这些函数使用 RE 字符串作为第一个参数,而后面的参数则与相应 RegexObject 的方法参数相同,返回则要么是 None 要么就是一个 MatchObject 的实例。
>>> print re.match(r'From\s+', 'Fromage amk')
None
>>> re.match(r'From\s+', 'From amk Thu May 14 19:12:10 1998')
<re.MatchObject instance at 80c5978>
Under the hood, these functions simply produce a RegexObject for you and call the appropriate method on it. They also store the compiled object in a cache, so future calls using the same RE are faster.
在底层实现上,这些函数只是为你生成一个 RegexObject 并在其上调用相应的方法。它们还会把编译好的对象保存在缓存里,因此以后再用到同一个 RE 时就会更快。
Should you use these module-level functions, or should you get the RegexObject and call its methods yourself? That choice depends on how frequently the RE will be used, and on your personal coding style. If a RE is being used at only one point in the code, then the module functions are probably more convenient. If a program contains a lot of regular expressions, or re-uses the same ones in several locations, then it might be worthwhile to collect all the definitions in one place, in a section of code that compiles all the REs ahead of time. To take an example from the standard library, here's an extract from xmllib.py:
你是该使用这些模块级函数,还是先得到一个 RegexObject 再自己调用它的方法呢?这取决于 RE 的使用频率以及你个人的编码风格。如果一个 RE 在代码中只使用一次,那么模块级函数也许更方便。如果程序包含很多正则表达式,或在多处复用同一个,那么把所有定义集中放在一处、在一段代码中提前编译好所有 RE 可能更值得。下面是从标准库的 xmllib.py 中提取的一个例子:
ref = re.compile( ... )
entityref = re.compile( ... )
charref = re.compile( ... )
starttagopen = re.compile( ... )
I generally prefer to work with the compiled object, even for one-time uses, but few people will be as much of a purist about this as I am.
我通常更喜欢使用编译好的对象,即使只用一次,不过很少有人会像我这样在这一点上如此较真。
Compilation flags let you modify some aspects of how regular expressions work. Flags are available in the re module under two names, a long name such as IGNORECASE, and a short, one-letter form such as I. (If you're familiar with Perl's pattern modifiers, the one-letter forms use the same letters; the short form of re.VERBOSE is re.X, for example.) Multiple flags can be specified by bitwise OR-ing them; re.I | re.M sets both the I and M flags, for example.
编译标志让你可以修改正则表达式工作方式的某些方面。在 re 模块中,标志有两种名字:全名,如 IGNORECASE;以及简短的单字母形式,如 I。(如果你熟悉 Perl 的模式修饰符,单字母形式使用同样的字母;例如 re.VERBOSE 的简写形式是 re.X。)多个标志可以通过按位或(OR)来同时指定;例如 re.I | re.M 同时设置了 I 和 M 两个标志。
Here's a table of the available flags, followed by a more detailed explanation of each one.
这有个可用标志表,对每个标志后面都有详细的说明。
| Flag(标志) | Meaning(含义) |
|---|---|
| DOTALL, S | Make . match any character, including newlines |
| IGNORECASE, I | Do case-insensitive matches |
| LOCALE, L | Do a locale-aware match |
| MULTILINE, M | Multi-line matching, affecting ^ and $(多行匹配,影响 ^ 和 $) |
| VERBOSE, X | Enable verbose REs, which can be organized more cleanly and understandably. |
I
IGNORECASE
Perform case-insensitive matching; character class and literal strings will match letters by ignoring case. For example, [A-Z] will match lowercase letters, too, and Spam will match "Spam", "spam", or "spAM". This lowercasing doesn't take the current locale into account; it will if you also set the LOCALE flag.
执行大小写不敏感的匹配;字符类和字符串字面值在匹配字母时会忽略大小写。举个例子,[A-Z] 也能匹配小写字母,Spam 可以匹配 "Spam"、"spam" 或 "spAM"。这种小写化处理并不考虑当前的 locale;如果同时设置了 LOCALE 标志,则会考虑。
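For instance (an added sketch assuming the standard re module), the same pattern compiled with and without re.IGNORECASE behaves differently on upper-case input:

>>> import re
>>> p = re.compile('[a-z]+', re.IGNORECASE)
>>> p.match('SPAM').group()
'SPAM'
>>> print re.compile('[a-z]+').match('SPAM')
None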
L
LOCALE
Make \w, \W, \b, and \B, dependent on the current locale.
影响 \w, \W, \b, 和 \B,这取决于当前的本地化设置。
Locales are a feature of the C library intended to help in writing programs that take account of language differences. For example, if you're processing French text, you'd want to be able to write \w+ to match words, but \w only matches the character class [A-Za-z]; it won't match "é" or "ç". If your system is configured properly and a French locale is selected, certain C functions will tell the program that "é" should also be considered a letter. Setting the LOCALE flag when compiling a regular expression will cause the resulting compiled object to use these C functions for \w; this is slower, but also enables \w+ to match French words as you'd expect.
locales 是 C 语言库中的一项功能,是用来为需要考虑不同语言的编程提供帮助的。举个例子,如果你正在处理法文文本,你想用 \w+ 来匹配文字,但 \w 只匹配字符类 [A-Za-z];它并不能匹配 "é" 或 "ç"。如果你的系统配置适当且本地化设置为法语,那么内部的 C 函数将告诉程序 "é" 也应该被认为是一个字母。当在编译正则表达式时使用 LOCALE 标志会得到用这些 C 函数来处理 \w 后的编译对象;这会更慢,但也会象你希望的那样可以用 \w+ 来匹配法文文本。
M
MULTILINE
(^ and $ haven't been explained yet; they'll be introduced in section 4.1.)
(^ 和 $ 尚未讲到;它们将在 4.1 节介绍。)
Usually ^ matches only at the beginning of the string, and $ matches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified, ^ matches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. Similarly, the $ metacharacter matches either at the end of the string and at the end of each line (immediately preceding each newline).
通常 ^ 只匹配字符串的开头,而 $ 只匹配字符串的结尾以及字符串末尾换行符(如果有的话)之前的位置。指定本标志后,^ 既匹配字符串的开头,也匹配字符串中每一行的开头,即每个换行符之后的位置。同样,$ 元字符既匹配字符串的结尾,也匹配每一行的结尾(即每个换行符之前)。
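A small illustrative sketch (not in the original text) of how MULTILINE changes ^:

>>> import re
>>> re.compile('^\w+').findall('one\ntwo\nthree')
['one']
>>> re.compile('^\w+', re.MULTILINE).findall('one\ntwo\nthree')
['one', 'two', 'three']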
S
DOTALL
Makes the "." special character match any character at all, including a newline; without this flag, "." will match anything except a newline.
使 "." 特殊字符完全匹配任何字符,包括换行;没有这个标志, "." 匹配除了换行外的任何字符。
X
VERBOSE
This flag allows you to write regular expressions that are more readable by granting you more flexibility in how you can format them. When this flag has been specified, whitespace within the RE string is ignored, except when the whitespace is in a character class or preceded by an unescaped backslash; this lets you organize and indent the RE more clearly. It also enables you to put comments within a RE that will be ignored by the engine; comments are marked by a "#" that's neither in a character class nor preceded by an unescaped backslash.
该标志让你在书写格式上有更大的灵活性,从而可以把正则表达式写得更易读。指定该标志后,RE 字符串中的空白符会被忽略,除非该空白符位于字符类中或紧跟在未转义的反斜杠之后;这可以让你更清晰地组织和缩进 RE。它还允许你在 RE 中写入会被引擎忽略的注释;注释用 "#" 标识,但该 "#" 不能位于字符类中,也不能紧跟在未转义的反斜杠之后。
For example, here's a RE that uses re.VERBOSE; see how much easier it is to read?
举个例子,这里有一个使用 re.VERBOSE 的 RE;看看读它轻松了多少?
charref = re.compile(r"""
 &[#]                          # Start of a numeric entity reference
 (
     [0-9]+[^0-9]              # Decimal form
   | 0[0-7]+[^0-7]             # Octal form
   | x[0-9a-fA-F]+[^0-9a-fA-F] # Hexadecimal form
 )
""", re.VERBOSE)
Without the verbose setting, the RE would look like this:
没有 verbose 设置, RE 会看起来象这样:
charref = re.compile("&#([0-9]+[^0-9]"
                     "|0[0-7]+[^0-7]"
                     "|x[0-9a-fA-F]+[^0-9a-fA-F])")
In the above example, Python's automatic concatenation of string literals has been used to break up the RE into smaller pieces, but it's still more difficult to understand than the version using re.VERBOSE.
在上面的例子里,Python 的字符串自动连接可以用来将 RE 分成更小的部分,但它比用 re.VERBOSE 标志时更难懂。
So far we've only covered a part of the features of regular expressions. In this section, we'll cover some new metacharacters, and how to use groups to retrieve portions of the text that was matched.
到目前为止,我们只展示了正则表达式的一部分功能。在本节,我们将展示一些新的元字符和如何使用组来检索被匹配的文本部分。
There are some metacharacters that we haven't covered yet. Most of them will be covered in this section.
还有一些我们还没展示的元字符,其中的大部分将在本节展示。
Some of the remaining metacharacters to be discussed are zero-width assertions. They don't cause the engine to advance through the string; instead, they consume no characters at all, and simply succeed or fail. For example, \b is an assertion that the current position is located at a word boundary; the position isn't changed by the \b at all. This means that zero-width assertions should never be repeated, because if they match once at a given location, they can obviously be matched an infinite number of times.
剩下要讨论的一部分元字符是零宽断言(zero-width assertions)。它们不会使引擎在字符串中向前推进;相反,它们根本不消耗任何字符,只是简单地表示成功或失败。举个例子,\b 是一个断言,表示当前位置位于单词边界;这个位置完全不会被 \b 改变。这意味着零宽断言永远不应该被重复,因为如果它们在给定位置匹配一次,那么显然它们可以在该位置匹配无数次。
|
Alternation, or the "or" operator. If A and B are regular expressions, A|B will match any string that matches either "A" or "B". | has very low precedence in order to make it work reasonably when you're alternating multi-character strings. Crow|Servo will match either "Crow" or "Servo", not "Cro", a "w" or an "S", and "ervo".
可选项,或者说 "or" 操作符。如果 A 和 B 是正则表达式,A|B 将匹配任何与 "A" 或 "B" 匹配的字符串。| 的优先级非常低,这是为了在多字符的字符串之间做选择时能合理工作。Crow|Servo 将匹配 "Crow" 或 "Servo",而不是 "Cro"、一个 "w" 或一个 "S" 再加上 "ervo"。
To match a literal "|", use \|, or enclose it inside a character class, as in [|].
为了匹配字母 "|",可以用 \|,或将其包含在字符类中,如[|]。
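A brief added example of alternation: the pattern matches either complete name, not fragments of them.

>>> import re
>>> p = re.compile('Crow|Servo')
>>> p.match('Servo').group()
'Servo'
>>> print p.match('Cro')
None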
^
Matches at the beginning of lines. Unless the MULTILINE flag has been set, this will only match at the beginning of the string. In MULTILINE mode, this also matches immediately after each newline within the string.
匹配行首。除非设置了 MULTILINE 标志,它只匹配字符串的开头。在 MULTILINE 模式下,它还会在字符串内部每个换行符之后的位置立即匹配。
For example, if you wish to match the word "From" only at the beginning of a line, the RE to use is ^From.
例如,如果你只希望匹配在行首单词 "From",那么 RE 将用 ^From。
>>> print re.search('^From', 'From Here to Eternity')
<re.MatchObject instance at 80c1520>
>>> print re.search('^From', 'Reciting From Memory')
None
$
Matches at the end of a line, which is defined as either the end of the string, or any location followed by a newline character.
匹配行尾,行尾被定义为字符串的结尾,或者任何后面紧跟着换行符的位置。
>>> print re.search('}$', '{block}')
<re.MatchObject instance at 80adfa8>
>>> print re.search('}$', '{block} ')
None
>>> print re.search('}$', '{block}\n')
<re.MatchObject instance at 80adfa8>
To match a literal "$", use \$ or enclose it inside a character class, as in [$].
匹配一个 "$",使用 \$ 或将其包含在字符类中,如[$]。
\A
Matches only at the start of the string. When not in MULTILINE mode, \A and ^ are effectively the same. In MULTILINE mode, however, they're different; \A still matches only at the beginning of the string, but ^ may match at any location inside the string that follows a newline character.
只匹配字符串的开头。不在 MULTILINE 模式时,\A 和 ^ 实际上是一样的。然而,在 MULTILINE 模式下它们是不同的;\A 仍然只匹配字符串的开头,而 ^ 还可以匹配字符串内部任何紧跟在换行符之后的位置。
\Z
Matches only at the end of the string.
只匹配字符串尾。
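The difference between \A and ^ only shows up in MULTILINE mode; this added sketch assumes the standard re module:

>>> import re
>>> re.compile(r'^two', re.MULTILINE).search('one\ntwo').group()
'two'
>>> print re.compile(r'\Atwo', re.MULTILINE).search('one\ntwo')
None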
\b
Word boundary. This is a zero-width assertion that matches only at the beginning or end of a word. A word is defined as a sequence of alphanumeric characters, so the end of a word is indicated by whitespace or a non-alphanumeric character.
单词边界。这是个零宽界定符(zero-width assertions)只用以匹配单词的词首和词尾。单词被定义为一个字母数字序列,因此词尾就是用空白符或非字母数字符来标示的。
The following example matches "class" only when it's a complete word; it won't match when it's contained inside another word.
下面的例子只匹配 "class" 整个单词;而当它被包含在其他单词中时不匹配。
>>> p = re.compile(r'\bclass\b')
>>> print p.search('no class at all')
<re.MatchObject instance at 80c8f28>
>>> print p.search('the declassified algorithm')
None
>>> print p.search('one subclass is')
None
There are two subtleties you should remember when using this special sequence. First, this is the worst collision between Python's string literals and regular expression sequences. In Python's string literals, "\b" is the backspace character, ASCII value 8. If you're not using raw strings, then Python will convert the "\b" to a backspace, and your RE won't match as you expect it to. The following example looks the same as our previous RE, but omits the "r" in front of the RE string.
当使用这个特殊序列时,你应该记住两个微妙之处。第一,这是 Python 字符串字面值和正则表达式序列之间最糟糕的冲突。在 Python 字符串字面值里,"\b" 是退格符,ASCII 值是 8。如果你没有使用 raw 字符串,那么 Python 会把 "\b" 转换成一个退格符,你的 RE 就无法像你期望的那样匹配了。下面的例子看起来和前面的 RE 一样,只是在 RE 字符串前少了一个 "r"。
>>> p = re.compile('\bclass\b')
>>> print p.search('no class at all')
None
>>> print p.search('\b' + 'class' + '\b')
<re.MatchObject instance at 80c3ee0>
Second, inside a character class, where there's no use for this assertion, \b represents the backspace character, for compatibility with Python's string literals.
第二,在字符类中这个断言没有用处,此时 \b 表示退格符,以便与 Python 的字符串字面值兼容。
\B
Another zero-width assertion, this is the opposite of \b, only matching when the current position is not at a word boundary.
另一个零宽界定符(zero-width assertions),它正好同 \b 相反,只在当前位置不在单词边界时匹配。
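As a short added illustration, \B succeeds only where \b would fail:

>>> import re
>>> p = re.compile(r'\Bclass\B')
>>> p.search('the declassified algorithm').group()
'class'
>>> print p.search('no class at all')
None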
Frequently you need to obtain more information than just whether the RE matched or not. Regular expressions are often used to dissect strings by writing a RE divided into several subgroups which match different components of interest. For example, an RFC-822 header line is divided into a header name and a value, separated by a ":". This can be handled by writing a regular expression which matches an entire header line, and has one group which matches the header name, and another group which matches the header's value.
你经常需要得到比 RE 是否匹配还要多的信息。正则表达式常常用来分析字符串,编写一个 RE 匹配感兴趣的部分并将其分成几个小组。举个例子,一个 RFC-822 的头部用 ":" 隔成一个头部名和一个值,这就可以通过编写一个正则表达式匹配整个头部,用一组匹配头部名,另一组匹配头部值的方式来处理。
Groups are marked by the "(", ")" metacharacters. "(" and ")" have much the same meaning as they do in mathematical expressions; they group together the expressions contained inside them. For example, you can repeat the contents of a group with a repeating qualifier, such as *, +, ?, or {m,n}. For example, (ab)* will match zero or more repetitions of "ab".
组是用 "(" 和 ")" 元字符来标识的。"(" 和 ")" 的含义与它们在数学表达式中的含义大致相同;它们把包含在其中的表达式组合成一个组。举个例子,你可以用重复限定符,如 *、+、? 和 {m,n},来重复一个组的内容,例如 (ab)* 将匹配零个或多个重复的 "ab"。
>>> p = re.compile('(ab)*')
>>> print p.match('ababababab').span()
(0, 10)
Groups indicated with "(", ")" also capture the starting and ending index of the text that they match; this can be retrieved by passing an argument to group(), start(), end(), and span(). Groups are numbered starting with 0. Group 0 is always present; it's the whole RE, so MatchObject methods all have group 0 as their default argument. Later we'll see how to express groups that don't capture the span of text that they match.
用 "(" 和 ")" 标示的组还会捕获它们所匹配文本的起始和结束索引;这可以通过给 group()、start()、end() 和 span() 传入一个参数来检索。组是从 0 开始编号的。第 0 组总是存在;它就是整个 RE,所以 MatchObject 的这些方法都把第 0 组作为缺省参数。稍后我们会看到如何表示不捕获其所匹配文本范围的组。
>>> p = re.compile('(a)b')
>>> m = p.match('ab')
>>> m.group()
'ab'
>>> m.group(0)
'ab'
Subgroups are numbered from left to right, from 1 upward. Groups can be nested; to determine the number, just count the opening parenthesis characters, going from left to right.
子组是从左向右编号的,从 1 开始。组可以嵌套;要确定编号,只需从左到右数左括号即可。
>>> p = re.compile('(a(b)c)d')
>>> m = p.match('abcd')
>>> m.group(0)
'abcd'
>>> m.group(1)
'abc'
>>> m.group(2)
'b'
group() can be passed multiple group numbers at a time, in which case it will return a tuple containing the corresponding values for those groups.
group() 可以一次输入多个组号,在这种情况下它将返回一个包含那些组所对应值的元组。
>>> m.group(2,1,2)
('b', 'abc', 'b')
The groups() method returns a tuple containing the strings for all the subgroups, from 1 up to however many there are.
>>> m.groups()
('abc', 'b')
Backreferences in a pattern allow you to specify that the contents of an earlier capturing group must also be found at the current location in the string. For example, \1 will succeed if the exact contents of group 1 can be found at the current position, and fails otherwise. Remember that Python's string literals also use a backslash followed by numbers to allow including arbitrary characters in a string, so be sure to use a raw string when incorporating backreferences in a RE.
For example, the following RE detects doubled words in a string.
>>> p = re.compile(r'(\b\w+)\s+\1')
>>> p.search('Paris in the the spring').group()
'the the'
Backreferences like this aren't often useful for just searching through a string -- there are few text formats which repeat data in this way -- but you'll soon find out that they're very useful when performing string substitutions.
Elaborate REs may use many groups, both to capture substrings of interest, and to group and structure the RE itself. In complex REs, it becomes difficult to keep track of the group numbers. There are two features which help with this problem. Both of them use a common syntax for regular expression extensions, so we'll look at that first.
Perl 5 added several additional features to standard regular expressions, and the Python re module supports most of them. It would have been difficult to choose new single-keystroke metacharacters or new special sequences beginning with "\" to represent the new features without making Perl's regular expressions confusingly different from standard REs. If you chose "&" as a new metacharacter, for example, old expressions would be assuming that "&" was a regular character and wouldn't have escaped it by writing \& or [&].
The solution chosen by the Perl developers was to use (?...) as the extension syntax. "?" immediately after a parenthesis was a syntax error because the "?" would have nothing to repeat, so this didn't introduce any compatibility problems. The characters immediately after the "?" indicate what extension is being used, so (?=foo) is one thing (a positive lookahead assertion) and (?:foo) is something else (a non-capturing group containing the subexpression foo).
Python adds an extension syntax to Perl's extension syntax. If the first character after the question mark is a "P", you know that it's an extension that's specific to Python. Currently there are two such extensions: (?P<name>...) defines a named group, and (?P=name) is a backreference to a named group.
Now that we've looked at the general extension syntax, we can return to the features that simplify working with groups in complex REs. Since groups are numbered from left to right and a complex expression may use many groups, it can become difficult to keep track of the correct numbering, and modifying such a complex RE is annoying. Insert a new group near the beginning, and you change the numbers of everything that follows it.
First, sometimes you'll want to use a group to collect a part of a regular expression, but aren't interested in retrieving the group's contents. You can make this fact explicit by using a non-capturing group: (?:...), where you can put any other regular expression inside the parentheses.
>>> m = re.match("([abc])+", "abc")
>>> m.groups()
('c',)
>>> m = re.match("(?:[abc])+", "abc")
>>> m.groups()
()
Except for the fact that you can't retrieve the contents of what the group matched, a non-capturing group behaves exactly the same as a capturing group; you can put anything inside it, repeat it with a repetition metacharacter such as "*", and nest it within other groups (capturing or non-capturing). (?:...) is particularly useful when modifying an existing group, since you can add new groups without changing how all the other groups are numbered. It should be mentioned that there's no performance difference in searching between capturing and non-capturing groups; neither form is any faster than the other.
The second, and more significant, feature is named groups; instead of referring to them by numbers, groups can be referenced by a name.
The syntax for a named group is one of the Python-specific extensions: (?P<name>...). name is, obviously, the name of the group. Named groups behave exactly like capturing groups, and additionally associate a name with a group; you can retrieve information about the group either by its number or by its name.
>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
>>> m.group(1)
'Lots'
Named groups are handy because they let you use easily-remembered names, instead of having to remember numbers. Here's an example RE from the imaplib module:
InternalDate = re.compile(r'INTERNALDATE "'
        r'(?P<day>[ 123][0-9])-(?P<mon>[A-Z][a-z][a-z])-'
        r'(?P<year>[0-9][0-9][0-9][0-9])'
        r' (?P<hour>[0-9][0-9]):(?P<min>[0-9][0-9]):(?P<sec>[0-9][0-9])'
        r' (?P<zonen>[-+])(?P<zoneh>[0-9][0-9])(?P<zonem>[0-9][0-9])'
        r'"')
It's obviously much easier to retrieve m.group('zonem'), instead of having to remember to retrieve group 9.
Since the syntax for backreferences, in an expression like (...)\1, refers to the number of the group, there's naturally a variant that uses the group name instead of the number. This is also a Python extension: (?P=name) indicates that the contents of the group called name should again be found at the current point. The regular expression for finding doubled words, (\b\w+)\s+\1 can also be written as (?P<word>\b\w+)\s+(?P=word):
>>> p = re.compile(r'(?P<word>\b\w+)\s+(?P=word)')
>>> p.search('Paris in the the spring').group()
'the the'
Another zero-width assertion is the lookahead assertion. Lookahead assertions are available in both positive and negative form, and look like this:
(?=...)
Positive lookahead assertion. This succeeds if the contained regular expression, represented here by ..., successfully matches at the current location, and fails otherwise. But, once the contained expression has been tried, the matching engine doesn't advance at all; the rest of the pattern is tried right where the assertion started.
(?!...)
Negative lookahead assertion. This is the opposite of the positive assertion; it succeeds if the contained expression doesn't match at the current position in the string.
An example will help make this concrete by demonstrating a case where a lookahead is useful. Consider a simple pattern to match a filename and split it apart into a base name and an extension, separated by a ".". For example, in "news.rc", "news" is the base name, and "rc" is the filename's extension.
The pattern to match this is quite simple:
.*[.].*$
Notice that the "." needs to be treated specially because it's a metacharacter; I've put it inside a character class. Also notice the trailing $; this is added to ensure that all the rest of the string must be included in the extension. This regular expression matches "foo.bar" and "autoexec.bat" and "sendmail.cf" and "printers.conf".
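Trying the pattern interactively (an added sketch, not part of the original text) shows that it requires a "." somewhere in the name:

>>> import re
>>> p = re.compile(r'.*[.].*$')
>>> p.match('autoexec.bat').group()
'autoexec.bat'
>>> print p.match('README')
None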
Now, consider complicating the problem a bit; what if you want to match filenames where the extension is not "bat"? Some incorrect attempts:
.*[.][^b].*$
The first attempt above tries to exclude "bat" by requiring that the first character of the extension is not a "b". This is wrong, because the pattern also doesn't match "foo.bar".
.*[.]([^b]..|.[^a].|..[^t])$
The expression gets messier when you try to patch up the first solution by requiring one of the following cases to match: the first character of the extension isn't "b"; the second character isn't "a"; or the third character isn't "t". This accepts "foo.bar" and rejects "autoexec.bat", but it requires a three-letter extension and won't accept a filename with a two-letter extension such as "sendmail.cf". We'll complicate the pattern again in an effort to fix it.
.*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$
In the third attempt, the second and third letters are all made optional in order to allow matching extensions shorter than three characters, such as "sendmail.cf".
The pattern's getting really complicated now, which makes it hard to read and understand. Worse, if the problem changes and you want to exclude both "bat" and "exe" as extensions, the pattern would get even more complicated and confusing.
A negative lookahead cuts through all this:
.*[.](?!bat$).*$
The lookahead means: if the expression bat doesn't match at this point, try the rest of the pattern; if bat$ does match, the whole pattern will fail. The trailing $ is required to ensure that something like "sample.batch", where the extension only starts with "bat", will be allowed.
Excluding another filename extension is now easy; simply add it as an alternative inside the assertion. The following pattern excludes filenames that end in either "bat" or "exe":
.*[.](?!bat$|exe$).*$
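Here is a short added check of the final pattern in the interpreter; filenames ending in "bat" or "exe" are rejected while other extensions are accepted:

>>> import re
>>> p = re.compile(r'.*[.](?!bat$|exe$).*$')
>>> p.match('sendmail.cf').group()
'sendmail.cf'
>>> print p.match('autoexec.bat')
None
>>> print p.match('setup.exe')
None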
Up to this point, we've simply performed searches against a static string. Regular expressions are also commonly used to modify a string in various ways, using the following RegexObject methods:
| Method/Attribute | Purpose |
|---|---|
| split() | Split the string into a list, splitting it wherever the RE matches |
| sub() | Find all substrings where the RE matches, and replace them with a different string |
| subn() | Does the same thing as sub(), but returns the new string and the number of replacements |
The split() method of a RegexObject splits a string apart wherever the RE matches, returning a list of the pieces. It's similar to the split() method of strings but provides much more generality in the delimiters that you can split by; split() only supports splitting by whitespace or by a fixed string. As you'd expect, there's a module-level re.split() function, too.
split(string[, maxsplit=0])
You can limit the number of splits made, by passing a value for maxsplit. When maxsplit is nonzero, at most maxsplit splits will be made, and the remainder of the string is returned as the final element of the list. In the following example, the delimiter is any sequence of non-alphanumeric characters.
>>> p = re.compile(r'\W+')
>>> p.split('This is a test, short and sweet, of split().')
['This', 'is', 'a', 'test', 'short', 'and', 'sweet', 'of', 'split', '']
>>> p.split('This is a test, short and sweet, of split().', 3)
['This', 'is', 'a', 'test, short and sweet, of split().']
Sometimes you're not only interested in what the text between delimiters is, but also need to know what the delimiter was. If capturing parentheses are used in the RE, then their values are also returned as part of the list. Compare the following calls:
>>> p = re.compile(r'\W+')
>>> p2 = re.compile(r'(\W+)')
>>> p.split('This... is a test.')
['This', 'is', 'a', 'test', '']
>>> p2.split('This... is a test.')
['This', '... ', 'is', ' ', 'a', ' ', 'test', '.', '']
The module-level function re.split() adds the RE to be used as the first argument, but is otherwise the same.
>>> re.split('[\W]+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split('([\W]+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split('[\W]+', 'Words, words, words.', 1)
['Words', 'words, words.']
Another common task is to find all the matches for a pattern, and replace them with a different string. The sub() method takes a replacement value, which can be either a string or a function, and the string to be processed.
sub(replacement, string[, count=0])
The optional argument count is the maximum number of pattern occurrences to be replaced; count must be a non-negative integer. The default value of 0 means to replace all occurrences.
Here's a simple example of using the sub() method. It replaces colour names with the word "colour":
>>> p = re.compile('(blue|white|red)')
>>> p.sub('colour', 'blue socks and red shoes')
'colour socks and colour shoes'
>>> p.sub('colour', 'blue socks and red shoes', count=1)
'colour socks and red shoes'
The subn() method does the same work, but returns a 2-tuple containing the new string value and the number of replacements that were performed:
>>> p = re.compile('(blue|white|red)')
>>> p.subn('colour', 'blue socks and red shoes')
('colour socks and colour shoes', 2)
>>> p.subn('colour', 'no colours at all')
('no colours at all', 0)
Empty matches are replaced only when they're not adjacent to a previous match.
>>> p = re.compile('x*')
>>> p.sub('-', 'abxd')
'-a-b-d-'
If replacement is a string, any backslash escapes in it are processed. That is, "\n" is converted to a single newline character, "\r" is converted to a carriage return, and so forth. Unknown escapes such as "\j" are left alone. Backreferences, such as "\6", are replaced with the substring matched by the corresponding group in the RE. This lets you incorporate portions of the original text in the resulting replacement string.
This example matches the word "section" followed by a string enclosed in "{", "}", and changes "section" to "subsection":
>>> p = re.compile('section{ ( [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First} section{second}')
'subsection{First} subsection{second}'
There's also a syntax for referring to named groups as defined by the (?P<name>...) syntax. \g<name> will use the substring matched by the group named name, and \g<number> uses the corresponding group number; \g<2> is therefore equivalent to \2. The following substitutions are all equivalent, but use different variations of the replacement string:
>>> p = re.compile('section{ (?P<name>[^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<1>}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<name>}','section{First}')
'subsection{First}'
replacement can also be a function, which gives you even more control. If replacement is a function, the function is called for every non-overlapping occurrence of pattern. On each call, the function is passed a MatchObject argument for the match and can use this information to compute the desired replacement string and return it.
In the following example, the replacement function translates decimals into hexadecimal:
>>> def hexrepl( match ):
...     "Return the hex string for a decimal number"
...     value = int( match.group() )
...     return hex(value)
...
>>> p = re.compile(r'\d+')
>>> p.sub(hexrepl, 'Call 65490 for printing, 49152 for user code.')
'Call 0xffd2 for printing, 0xc000 for user code.'
When using the module-level re.sub() function, the pattern is passed as the first argument. The pattern may be a string or a RegexObject; if you need to specify regular expression flags, you must either use a RegexObject as the first parameter, or use embedded modifiers in the pattern, e.g. sub("(?i)b+", "x", "bbbb BBBB") returns 'x x'.
Regular expressions are a powerful tool for some applications, but in some ways their behaviour isn't intuitive and at times they don't behave the way you may expect them to. This section will point out some of the most common pitfalls.
Sometimes using the re module is a mistake. If you're matching a fixed string, or a single character class, and you're not using any re features such as the IGNORECASE flag, then the full power of regular expressions may not be required. Strings have several methods for performing operations with fixed strings and they're usually much faster, because the implementation is a single small C loop that's been optimized for the purpose, instead of the large, more generalized regular expression engine.
One example might be replacing a single fixed string with another one; for example, you might replace "word" with "deed". re.sub() seems like the function to use for this, but consider the replace() method. Note that replace() will also replace "word" inside words, turning "swordfish" into "sdeedfish", but the naïve RE word would have done that, too. (To avoid performing the substitution on parts of words, the pattern would have to be \bword\b, in order to require that "word" have a word boundary on either side. This takes the job beyond replace's abilities.)
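The difference is easy to see interactively (an added sketch, not from the original text):

>>> import re
>>> 'swordfish and word'.replace('word', 'deed')
'sdeedfish and deed'
>>> re.sub(r'\bword\b', 'deed', 'swordfish and word')
'swordfish and deed'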
Another common task is deleting every occurrence of a single character from a string or replacing it with another single character. You might do this with something like re.sub('\n', ' ', S), but translate() is capable of doing both tasks and will be faster than any regular expression operation can be.
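For example (an added Python 2 sketch; string.maketrans builds the 256-character table that str.translate() expects), both the replacement and the deletion can be done without the re module:

>>> import string
>>> 'one\ntwo\n'.replace('\n', ' ')
'one two '
>>> 'one\ntwo\n'.translate(string.maketrans('\n', ' '))
'one two '
>>> 'one\ntwo\n'.translate(string.maketrans('', ''), '\n')
'onetwo'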
In short, before turning to the re module, consider whether your problem can be solved with a faster and simpler string method.
The match() function only checks if the RE matches at the beginning of the string while search() will scan forward through the string for a match. It's important to keep this distinction in mind. Remember, match() will only report a successful match which will start at 0; if the match wouldn't start at zero, match() will not report it.
>>> print re.match('super', 'superstition').span()
(0, 5)
>>> print re.match('super', 'insuperable')
None
On the other hand, search() will scan forward through the string, reporting the first match it finds.
>>> print re.search('super', 'superstition').span()
(0, 5)
>>> print re.search('super', 'insuperable').span()
(2, 7)
Sometimes you'll be tempted to keep using re.match(), and just add .* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a "C". The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a "C" is found.
Adding .* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead.
When repeating a regular expression, as in a*, the resulting action is to consume as much of the pattern as possible. This fact often bites you when you're trying to match a pair of balanced delimiters, such as the angle brackets surrounding an HTML tag. The naïve pattern for matching a single HTML tag doesn't work because of the greedy nature of .*.
>>> s = '<html><head><title>Title</title>'
>>> len(s)
32
>>> print re.match('<.*>', s).span()
(0, 32)
>>> print re.match('<.*>', s).group()
<html><head><title>Title</title>
The RE matches the "<" in "<html>", and the .* consumes the rest of the string. There's still more left in the RE, though, and the > can't match at the end of the string, so the regular expression engine has to backtrack character by character until it finds a match for the >. The final match extends from the "<" in "<html>" to the ">" in "</title>", which isn't what you want.
In this case, the solution is to use the non-greedy qualifiers *?, +?, ??, or {m,n}?, which match as little text as possible. In the above example, the ">" is tried immediately after the first "<" matches, and when it fails, the engine advances a character at a time, retrying the ">" at every step. This produces just the right result:
>>> print re.match('<.*?>', s).group()
<html>
(Note that parsing HTML or XML with regular expressions is painful. Quick-and-dirty patterns will handle common cases, but HTML and XML have special cases that will break the obvious regular expression; by the time you've written a regular expression that handles all of the possible cases, the patterns will be very complicated. Use an HTML or XML parser module for such tasks.)
By now you've probably noticed that regular expressions are a very compact notation, but they're not terribly readable. REs of moderate complexity can become lengthy collections of backslashes, parentheses, and metacharacters, making them difficult to read and understand.
For such REs, specifying the re.VERBOSE
flag when compiling the regular expression can be helpful, because it allows you to format the regular expression more clearly.
The re.VERBOSE
flag has several effects. Whitespace in the regular expression that isn't inside a character class is ignored. This means that an expression such as dog | cat is equivalent to the less readable dog|cat, but [a b] will still match the characters "a", "b", or a space. In addition, you can also put comments inside a RE; comments extend from a "#" character to the next newline. When used with triple-quoted strings, this enables REs to be formatted more neatly:
pat = re.compile(r"""
 \s*                 # Skip leading whitespace
 (?P<header>[^:]+)   # Header name
 \s* :               # Whitespace, and a colon
 (?P<value>.*?)      # The header's value -- *? used to
                     # lose the following trailing whitespace
 \s*$                # Trailing whitespace to end-of-line
""", re.VERBOSE)
This is far more readable than:
pat = re.compile(r"\s*(?P<header>[^:]+)\s*:(?P<value>.*?)\s*$")
Regular expressions are a complicated topic. Did this document help you understand them? Were there parts that were unclear, or Problems you encountered that weren't covered here? If so, please send suggestions for improvements to the author.
The most complete book on regular expressions is almost certainly Jeffrey Friedl's Mastering Regular Expressions, published by O'Reilly. Unfortunately, it exclusively concentrates on Perl and Java's flavours of regular expressions, and doesn't contain any Python material at all, so it won't be useful as a reference for programming in Python. (The first edition covered Python's now-obsolete regex module, which won't help you much.) Consider checking it out from your library.