Suppose you have a list in Python that looks something like this:
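['a', 'b', 'c', 'b', 'd', 'm', 'n', 'n']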
and you want to remove all duplicates so you get this result:
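['a', 'b', 'c', 'd', 'm', 'n']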
How do you do that? And, more importantly, what's the fastest way? I wrote a couple of alternative implementations and ran a quick benchmark loop over them to find out which was fastest. (I haven't looked at memory usage.) The slowest function turned out to be 78 times slower than the fastest one.
However, there's one very important difference between the various functions: some are order preserving and some are not. An order preserving function guarantees that, apart from the removed duplicates, the elements come out in the same order they went in, e.g. uniqify([1,2,2,3]) == [1,2,3].
Here are the functions:
from sets import Set  # needed for f6 (old-style sets module)

def f1(seq):
    # Not order preserving
    set = {}
    map(set.__setitem__, seq, [])
    return set.keys()

def f2(seq):
    # Order preserving
    checked = []
    for e in seq:
        if e not in checked:
            checked.append(e)
    return checked

def f3(seq):
    # Not order preserving
    keys = {}
    for e in seq:
        keys[e] = 1
    return keys.keys()

def f4(seq):
    # Order preserving
    noDupes = []
    [noDupes.append(i) for i in seq if not noDupes.count(i)]
    return noDupes

def f5(seq, idfun=None):
    # Order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        # in old Python versions:
        # if seen.has_key(marker)
        # but in new ones:
        if marker in seen:
            continue
        seen[marker] = 1
        result.append(item)
    return result

def f6(seq):
    # Not order preserving
    set = Set(seq)
    return list(set)
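A minimal sketch of such a benchmark loop, using timeit and assuming a small test list full of duplicates (the example list from above, repeated to make timings measurable), might look something like this:

import timeit

# Sample data: an assumed test list, not necessarily the one used for the numbers below.
testdata = ['a', 'b', 'c', 'b', 'd', 'm', 'n', 'n'] * 10

# Time each candidate on the same input; assumes f1..f6 are defined
# in the current module (here: __main__).
for fname in ('f1', 'f2', 'f3', 'f4', 'f5', 'f6'):
    timer = timeit.Timer('%s(testdata)' % fname,
                         'from __main__ import %s, testdata' % fname)
    print('%s: %.3f seconds' % (fname, timer.timeit(number=10000)))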
And what you've all been waiting for (if you're still reading). Here are the results:
Clearly f5 is the "best" solution. Not only is it really, really fast; it's also order preserving and supports an optional transform function, which makes it possible to do this:
>>> a = list('ABeeE')
>>> f5(a)
['A', 'B', 'e', 'E']
>>> f5(a, lambda x: x.lower())
['A', 'B', 'e']
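Note that f5 keeps the first item it sees for each key, so with the lower() transform the lowercase 'e' survives and the later uppercase 'E' is dropped.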