BK-Trees

BK-Trees, or Burkhard-Keller Trees, are a tree-based data structure engineered for quickly finding near-matches to a string, for example, as used by a spelling checker, or when doing a 'fuzzy' search for a term. The aim is to return, for example, "seek" and "peek" if I search for "aeek". What makes BK-Trees so cool is that they take a problem which has no obvious solution besides brute-force search, and present a simple and elegant method for speeding up searches substantially.

BK-Trees were first proposed by Burkhard and Keller in 1973, in their paper "Some approaches to best match file searching". The only copy of this online seems to be in the ACM archive, which is subscription only. Further details, however, are provided in the excellent paper "Fast Approximate String Matching in a Dictionary". 

Before we can define BK-Trees, we need to define a couple of preliminaries. In order to index and search our dictionary, we need a way to compare strings. The canonical method for this is the Levenshtein Distance, which takes two strings, and returns a number representing the minimum number of insertions, deletions and replacements required to translate one string into the other. Other string distance functions are also acceptable (for example, one incorporating the concept of transpositions as an atomic operation could be used), as long as they meet the criteria defined below.
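As a concrete illustration, the Levenshtein Distance can be computed with the standard dynamic-programming recurrence. This is a minimal sketch, not an optimized implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and replacements
    needed to turn string a into string b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # replace ca with cb (free if equal)
            ))
        prev = curr
    return prev[-1]
```

For instance, `levenshtein("aeek", "seek")` is 1 (one replacement), matching the "aeek"/"seek" example from the introduction.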

Now we can make a particularly useful observation about the Levenshtein Distance: It forms a Metric Space. Put simply, a metric space is any set together with a distance function d that adheres to three basic criteria:

d(x,y) = 0 <-> x = y (The distance between x and y is 0 if and only if x = y)

d(x,y) = d(y,x) (The distance from x to y is the same as the distance from y to x)

d(x,y) + d(y,z) >= d(x,z) (The distance from x to z is at most the distance from x to y plus the distance from y to z)

The last of these criteria is called the Triangle Inequality: the direct path from x to z must be no longer than any path that goes through an intermediate point (the path from x to y to z). Look at a triangle, for example: it's not possible to draw a triangle such that it's quicker to get from one point to another by going along two sides than it is by going along the other side.

These three criteria, basic as they are, are all that's required for something such as the Levenshtein Distance to qualify as a Metric Space. Note that this is far more general than, for example, a Euclidean Space - a Euclidean Space is metric, but many Metric Spaces (such as the one induced by the Levenshtein Distance) are not Euclidean. Now that we know that the Levenshtein Distance (and other similar string distance functions) forms a Metric Space, we come to the key observation of Burkhard and Keller.

Assume for a moment we have two parameters: query, the string we are using in our search, and n, the maximum distance a string can be from query and still be returned. Say we take an arbitrary string, test, and compare it to query. Call the resultant distance d. Because we know the triangle inequality holds, all our results must have at most distance d+n and at least distance d-n from test.
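That bound is easy to check numerically. The sketch below reuses a `levenshtein` function like the one above, with "aeek" as the query and "book" as an arbitrary test string (the choice of probe string is mine, purely for illustration):

```python
def levenshtein(a, b):
    # Dynamic-programming edit distance (see earlier sketch).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

query, n = "aeek", 1          # search term and maximum allowed distance
test = "book"                 # an arbitrary string we compared against
d = levenshtein(query, test)  # here d == 3

for match in ("seek", "peek"):         # both are within n of the query
    dist = levenshtein(test, match)
    # the triangle inequality guarantees d - n <= dist <= d + n
    assert d - n <= dist <= d + n
```

So when searching, any string whose distance from test falls outside [d-n, d+n] can be discarded without ever comparing it to the query.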

From here, the construction of a BK-Tree is simple: Each node has an arbitrary number of children, and each edge has a number corresponding to a Levenshtein distance. All the subnodes on the edge numbered n have a Levenshtein distance of exactly n to the parent node. So, for example, if we have a tree with parent node "book" and two child nodes "rook" and "nooks", the edge from "book" to "rook" is numbered 1, and the edge from "book" to "nooks" is numbered 2.

To build the tree from a dictionary, take an arbitrary word and make it the root of your tree. Whenever you want to insert a word, take the Levenshtein distance between your word and the root of the tree, and find the edge with number d(newword,root). Recurse, comparing your new word with the child node on that edge, and so on, until there is no child node on the matching edge, at which point you create a new child node and store your new word there. For example, to insert "boon" into the example tree above, we would examine the root, find that d("book", "boon") = 1, and so examine the child on the edge numbered 1, which is the word "rook". We would then calculate the distance d("rook", "boon"), which is 2, and so insert the new word under "rook", with an edge numbered 2.
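The insertion procedure might be sketched like this (the class and method names are my own, and `levenshtein` is the edit-distance function from earlier; the paper doesn't prescribe any particular representation):

```python
def levenshtein(a, b):
    # Dynamic-programming edit distance (see earlier sketch).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

class BKNode:
    """A tree node: a word plus children keyed by edit distance."""
    def __init__(self, word):
        self.word = word
        self.children = {}  # edge number -> BKNode

    def insert(self, word):
        d = levenshtein(word, self.word)
        if d == 0:
            return  # word is already in the tree
        if d in self.children:
            self.children[d].insert(word)  # recurse down the matching edge
        else:
            self.children[d] = BKNode(word)

# The worked example from the text:
root = BKNode("book")
for w in ("rook", "nooks", "boon"):
    root.insert(w)
# "boon" ends up under "rook", on the edge numbered 2
```

Note that the dictionary of children means an edge for any distance can be added lazily, exactly as the construction above describes.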

To query the tree, take the Levenshtein distance from your term to the root, and recursively query every child node on an edge numbered between d-n and d+n (inclusive). If the node you are examining is within n of your search term, return it and continue your query.

For example, if we have the following list of words: ["cat", "cut", "hat", "man", "hit"], we will start by creating a “cat” node with no children. To add “cut” we calculate the Levenshtein distance to be one and insert it under the “cat” node.


1) “cut” is inserted under “cat” with a branch of length one


2) The Levenshtein distance between “cat” and “man” is two, so “man” is connected with a branch of length two


3) d(“cat”,“hat”) = 1, so insertion recurses into the “cut” node, and “hat” is connected to “cut” with a branch of length two (d(“cut”,“hat”) = 2)


4) d(“cat”,“hit”) = 2, so insertion recurses into the “man” node, and “hit” is connected to “man” with a branch of length three (d(“man”,“hit”) = 3)

The lookup algorithm is also quite simple: We find the distance from the query word to the root node and then follow all the child edges numbered in the range [(d-n),(d+n)], where d is the distance and n is the maximum distance we will allow. We then recursively query the nodes that matched. The result is a list of words satisfying d(query_word, word) ≤ n.
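Putting both operations together over the example word list, a search might be sketched as follows (again, the names are illustrative rather than taken from the original paper):

```python
def levenshtein(a, b):
    # Dynamic-programming edit distance (see earlier sketch).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

class BKNode:
    def __init__(self, word):
        self.word = word
        self.children = {}  # edge number -> BKNode

    def insert(self, word):
        d = levenshtein(word, self.word)
        if d == 0:
            return
        if d in self.children:
            self.children[d].insert(word)
        else:
            self.children[d] = BKNode(word)

    def search(self, query, n):
        """All words w in this subtree with d(query, w) <= n."""
        d = levenshtein(query, self.word)
        results = [self.word] if d <= n else []
        # only edges numbered in [d - n, d + n] can lead to matches
        for edge, child in self.children.items():
            if d - n <= edge <= d + n:
                results.extend(child.search(query, n))
        return results

# Build the tree from the example word list and run a fuzzy lookup.
root = BKNode("cat")
for w in ("cut", "hat", "man", "hit"):
    root.insert(w)
print(sorted(root.search("cot", 1)))  # -> ['cat', 'cut']
```

The search for "cot" with n = 1 visits only a handful of nodes: whole subtrees (such as the one under "man") are pruned as soon as their edge number falls outside [d-n, d+n].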

From: 

http://blog.notdot.net/2007/4/Damn-Cool-Algorithms-Part-1-BK-Trees

http://hamberg.no/erlend/posts/2012-01-17-BK-trees.html
