Matching features across different images is a common problem in computer vision. When all images are similar in nature (same scale, orientation, etc.), simple corner detectors can work. But when you have images of different scales and rotations, you need to use the Scale Invariant Feature Transform.
SIFT isn’t just scale invariant. You can change the following, and still get good results:
Here’s an example. We’re looking for these:
And we want to find these objects in this scene:
Here’s the result:
Now that's some really robust image matching going on. The big rectangles mark matched images. The smaller squares are for individual features in those regions. Note how the big rectangles are skewed: they follow the orientation and perspective of the object in the scene.
SIFT is quite an involved algorithm. It has a lot going on and can get confusing, so I've split up the entire algorithm into multiple parts. Here's an outline of what happens in SIFT.
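As a quick reference, the standard stages of the SIFT pipeline (per Lowe's 2004 paper; a summary, not this post's own outline) can be listed as:

```python
# High-level stages of SIFT, as described in Lowe's 2004 paper.
SIFT_STAGES = [
    "1. Construct the scale space (repeatedly blur and downsample the image)",
    "2. Approximate the Laplacian of Gaussian using Differences of Gaussians",
    "3. Find keypoints as extrema across scale and space",
    "4. Discard low-contrast keypoints and edge responses",
    "5. Assign an orientation to each keypoint",
    "6. Generate a 128-dimensional descriptor for each keypoint",
]
for stage in SIFT_STAGES:
    print(stage)
```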
That was an overview of the entire algorithm. Over the next few days, I’ll go through each step in detail. Finally, I’ll show you how to implement SIFT!
Download the open-source implementation of SIFT
After you run through the algorithm, you’ll have SIFT features for your image. Once you have these, you can do whatever you want.
Track images, detect and identify objects (which can be partly hidden as well), or whatever you can think of. We’ll get into this later as well.
But the catch is, this algorithm is patented.
So, it's good enough for academic purposes. But if you're looking to make something commercial, look for something else.