Search engines such as Google and Baidu have rolled out search-by-image features, and a quick test shows the results are quite good. So how does this technology work? How does a computer know that two images are similar?
According to Dr. Neal Krawetz, the principle is simple and easy to understand. A fast algorithm is enough to achieve a basic version of the effect.
The key technique is called a perceptual hash algorithm. It generates a fingerprint string for each image and then compares the fingerprints of different images: the closer the fingerprints, the more similar the images.
Here is the simplest implementation:
Step 1: Shrink the image.
Scale the image down to 8x8, 64 pixels in total. This step strips away the image's details, keeping only basic information such as structure and light/dark distribution, and removes the differences introduced by size and aspect ratio.
Step 2: Simplify the colors.
Convert the shrunken image to 64 levels of gray. In other words, all pixels together use only 64 shades.
Step 3: Compute the average.
Compute the mean gray value of all 64 pixels.
Step 4: Compare each pixel against the average.
Compare the gray value of each pixel with the average: record 1 if it is greater than or equal to the average, 0 if it is below.
Step 5: Compute the hash.
Combine the results of the previous step into a 64-bit integer; this is the image's fingerprint. The order in which the bits are combined does not matter, as long as every image uses the same order.
[Example images omitted; in the original example, the sample photo reduces to the fingerprint 8f373714acfcf4d0.]
Once you have the fingerprints, you can compare two images and see how many of the 64 bits differ. In theory, this is equivalent to computing the Hamming distance. If no more than 5 bits differ, the two images are very similar; if more than 10 bits differ, they are two different images.
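The five steps map almost line-for-line onto code. Below is a minimal Java sketch of this average-hash approach, included here for illustration: the AverageHash class name, the RGB-averaging grayscale conversion, and the command-line main are assumptions of this sketch rather than part of the original article, but the 8x8 resize, 64-level gray quantization, mean threshold, and Hamming distance follow the steps above.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class AverageHash {

    // Steps 1-5: shrink to 8x8, reduce to 64 gray levels, threshold each pixel against the mean.
    public static long hash(BufferedImage source) {
        // Step 1: reduce size to 8x8 (64 pixels).
        BufferedImage small = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = small.createGraphics();
        g.drawImage(source, 0, 0, 8, 8, null);
        g.dispose();

        // Step 2: reduce color; the >> 2 quantizes 0-255 gray down to 64 levels.
        int[] gray = new int[64];
        int total = 0;
        for (int y = 0; y < 8; y++) {
            for (int x = 0; x < 8; x++) {
                int rgb = small.getRGB(x, y);
                int r = (rgb >> 16) & 0xff, green = (rgb >> 8) & 0xff, b = rgb & 0xff;
                gray[y * 8 + x] = ((r + green + b) / 3) >> 2;
                total += gray[y * 8 + x];
            }
        }

        // Step 3: mean gray value of the 64 pixels.
        int avg = total / 64;

        // Steps 4-5: one bit per pixel (1 if >= mean), packed into a 64-bit fingerprint.
        long bits = 0L;
        for (int i = 0; i < 64; i++) {
            bits = (bits << 1) | (gray[i] >= avg ? 1L : 0L);
        }
        return bits;
    }

    // Number of differing bits between two fingerprints (Hamming distance).
    public static int hammingDistance(long h1, long h2) {
        return Long.bitCount(h1 ^ h2);
    }

    public static void main(String[] args) throws Exception {
        long h1 = hash(ImageIO.read(new File(args[0])));
        long h2 = hash(ImageIO.read(new File(args[1])));
        System.out.println("Hamming distance: " + hammingDistance(h1, h2));
    }
}

Packing the bits into a long keeps the comparison cheap: the Hamming distance is a single XOR followed by a bit count.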
For another concrete implementation, see imgHash.py, written in Python by Wote. The code is only 53 lines. The first argument is the reference image, the second is the directory containing the images to compare against, and the return value is the number of differing bits (the Hamming distance) between the two images.
The advantage of this algorithm is that it is simple and fast, and it is unaffected by scaling; the drawback is that the image content must not change. Add a few words of text to the image and the algorithm no longer recognizes it. Its best use is therefore finding the original image from a thumbnail.
In practice, the more powerful pHash and SIFT algorithms are usually used, because they can handle distorted images: as long as the distortion does not exceed 25%, they can still match the original. Although these algorithms are more complex, the principle is the same as the simple algorithm above: first convert the image into a hash string, then compare.
Below is a demo implementation of the above in Java:
package reyo.sdk.utils.ai.pic;
import java.awt.Graphics2D;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import javax.imageio.ImageIO;
/*
 *
 * The larger the Hamming distance, the more the two images differ. If no more than 5 bits differ, the two images are very similar; if more than 10 bits differ, they are two different images.
 *
 * pHash-like image hash.
 * Author: Elliot Shepherd ([email protected])
 * Based On: http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html
 */
public class ImagePHash {

    // Project root directory path
    public static final String path = System.getProperty("user.dir");

    private int size = 32;
    private int smallerSize = 8;

    public ImagePHash() {
        initCoefficients();
    }

    public ImagePHash(int size, int smallerSize) {
        this.size = size;
        this.smallerSize = smallerSize;
        initCoefficients();
    }

    public int distance(String s1, String s2) {
        int counter = 0;
        for (int k = 0; k < s1.length(); k++) {
            if (s1.charAt(k) != s2.charAt(k)) {
                counter++;
            }
        }
        return counter;
    }
    // Returns a 'binary string' (e.g. 001010111011100010) which is easy to do
    // a Hamming distance on.
    public String getHash(InputStream is) throws Exception {
        BufferedImage img = ImageIO.read(is);
        /*
         * 1. Reduce size. Like Average Hash, pHash starts with a small image.
         * However, the image is larger than 8x8; 32x32 is a good size. This is
         * really done to simplify the DCT computation and not because it is
         * needed to reduce the high frequencies.
         */
        img = resize(img, size, size);
        /*
         * 2. Reduce color. The image is reduced to a grayscale just to further
         * simplify the number of computations.
         */
        img = grayscale(img);
        double[][] vals = new double[size][size];
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                vals[x][y] = getBlue(img, x, y);
            }
        }
        /*
         * 3. Compute the DCT. The DCT separates the image into a collection of
         * frequencies and scalars. While JPEG uses an 8x8 DCT, this algorithm
         * uses a 32x32 DCT.
         */
        long start = System.currentTimeMillis();
        double[][] dctVals = applyDCT(vals);
        System.out.println("DCT: " + (System.currentTimeMillis() - start));
        /*
         * 4. Reduce the DCT. This is the magic step. While the DCT is 32x32,
         * just keep the top-left 8x8. Those represent the lowest frequencies in
         * the picture.
         */
        /*
         * 5. Compute the average value. Like the Average Hash, compute the mean
         * DCT value (using only the 8x8 DCT low-frequency values and excluding
         * the first term since the DC coefficient can be significantly
         * different from the other values and will throw off the average).
         */
        double total = 0;
        for (int x = 0; x < smallerSize; x++) {
            for (int y = 0; y < smallerSize; y++) {
                total += dctVals[x][y];
            }
        }
        total -= dctVals[0][0];
        double avg = total / (double) ((smallerSize * smallerSize) - 1);
        /*
         * 6. Further reduce the DCT. This is the magic step. Set the 64 hash
         * bits to 0 or 1 depending on whether each of the 64 DCT values is
         * above or below the average value. The result doesn't tell us the
         * actual low frequencies; it just tells us the very-rough relative
         * scale of the frequencies to the mean. The result will not vary as
         * long as the overall structure of the image remains the same; this can
         * survive gamma and color histogram adjustments without a problem.
         */
        String hash = "";
        for (int x = 0; x < smallerSize; x++) {
            for (int y = 0; y < smallerSize; y++) {
                // Skip only the DC coefficient at (0, 0); every other low-frequency
                // coefficient contributes one bit. (The original condition
                // x != 0 && y != 0 wrongly dropped the whole first row and column.)
                if (x != 0 || y != 0) {
                    hash += (dctVals[x][y] > avg ? "1" : "0");
                }
            }
        }
        return hash;
    }
    private BufferedImage resize(BufferedImage image, int width, int height) {
        BufferedImage resizedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = resizedImage.createGraphics();
        g.drawImage(image, 0, 0, width, height, null);
        g.dispose();
        return resizedImage;
    }

    private ColorConvertOp colorConvert = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);

    private BufferedImage grayscale(BufferedImage img) {
        colorConvert.filter(img, img);
        return img;
    }

    private static int getBlue(BufferedImage img, int x, int y) {
        return (img.getRGB(x, y)) & 0xff;
    }

    // DCT function stolen from
    // http://stackoverflow.com/questions/4240490/problems-with-dct-and-idct-algorithm-in-java
    private double[] c;

    private void initCoefficients() {
        c = new double[size];
        for (int i = 1; i < size; i++) {
            c[i] = 1;
        }
        c[0] = 1 / Math.sqrt(2.0);
    }
    private double[][] applyDCT(double[][] f) {
        int N = size;
        double[][] F = new double[N][N];
        for (int u = 0; u < N; u++) {
            for (int v = 0; v < N; v++) {
                double sum = 0.0;
                for (int i = 0; i < N; i++) {
                    for (int j = 0; j < N; j++) {
                        sum += Math.cos(((2 * i + 1) / (2.0 * N)) * u * Math.PI)
                                * Math.cos(((2 * j + 1) / (2.0 * N)) * v * Math.PI) * (f[i][j]);
                    }
                }
                sum *= ((c[u] * c[v]) / 4.0);
                F[u][v] = sum;
            }
        }
        return F;
    }
    public static void main(String[] args) {
        // Directory of test images under the project root
        String filename = ImagePHash.path + "\\images\\";
        ImagePHash p = new ImagePHash();
        String image1;
        String image2;
        try {
            for (int i = 0; i < 10; i++) {
                image1 = p.getHash(new FileInputStream(new File(filename + "example" + (i + 1) + ".jpg")));
                image2 = p.getHash(new FileInputStream(new File(filename + "source.jpg")));
                System.out.println("example" + (i + 1) + ".jpg:source.jpg Score is " + p.distance(image1, image2));
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The output of a sample run:
DCT: 249
DCT: 237
example1.jpg:source.jpg Score is 25
DCT: 102
DCT: 103
example2.jpg:source.jpg Score is 16
DCT: 103
DCT: 104
example3.jpg:source.jpg Score is 17
DCT: 104
DCT: 103
example4.jpg:source.jpg Score is 2
DCT: 103
DCT: 103
example5.jpg:source.jpg Score is 0
DCT: 104
DCT: 104
example6.jpg:source.jpg Score is 10
DCT: 105
DCT: 104
example7.jpg:source.jpg Score is 25
DCT: 103
DCT: 103
example8.jpg:source.jpg Score is 28
DCT: 102
DCT: 103
example9.jpg:source.jpg Score is 25
DCT: 102
DCT: 103
example10.jpg:source.jpg Score is 31
If no more than 5 bits differ, the two images are very similar; if more than 10 bits differ, they are two different images.
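Expressed in code, a caller of the demo class above might interpret the score along these lines. The interpret helper is illustrative only and not part of the original listing; note that the article does not classify distances between 6 and 10.

    // Illustrative helper: interpret the Hamming distance returned by ImagePHash.distance().
    static String interpret(int score) {
        if (score <= 5) {
            return "very similar";      // at most 5 differing bits
        }
        if (score > 10) {
            return "different images";  // more than 10 differing bits
        }
        return "borderline";            // 6-10: not classified by the article
    }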
Code reference: http://pastebin.com/Pj9d8jt5
Principle reference: http://www.ruanyifeng.com/blog/2011/07/principle_of_similar_image_search.html
Hamming distance: http://baike.baidu.com/view/725269.htm