OpenCV: Image Feature-Point Matching

I've recently been looking into image quality assessment and similarity measurement. These are my notes.

Image Quality Assessment

Many methods are statistical: they characterize means and variances over a distribution of images, which doesn't reflect the subjective impression of a single image.

IQA still seems to lack an industry standard; it remains an open research problem.

In 2017 Google published NIMA, built on the AVA and TID datasets. A deep network extracts image features, and a fully connected layer produces a 10-dimensional output representing the scores 1 through 10 (for AVA). Every image in AVA was rated by close to a hundred people and the average goes into the network, so the result is fairly consistent with human scoring.
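From such a 10-way output, the final score is usually summarized by its mean. A minimal sketch in plain NumPy (the `nima_mean_score` helper is mine, not part of NIMA; the real model predicts the distribution with a CNN):

```python
import numpy as np

def nima_mean_score(probs):
    """Mean rating of a NIMA-style 10-way distribution over scores 1..10."""
    probs = np.asarray(probs, dtype=float)
    scores = np.arange(1, len(probs) + 1)  # ratings 1, 2, ..., 10
    return float(np.sum(probs * scores))

# A distribution peaked around 7 gives a mean score near 7.
p = [0.0, 0.0, 0.05, 0.05, 0.1, 0.2, 0.3, 0.2, 0.05, 0.05]
print(nima_mean_score(p))  # mean rating, roughly 6.7
```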

Image Similarity

  • MSE: pixel-by-pixel comparison; unreliable.
  • PSNR: built on top of MSE, still pixel-wise; unreliable.
  • Histogram: a color histogram only captures overall tonal agreement; unreliable.
  • SSIM: compares luminance, contrast, and structure; said to match subjective perception, and seems somewhat better in practice.
  • Perceptual hash: shrink the image to an 8×8 block, then encode it into a hash value; in my small-scale tests it did not match subjective impressions.
  • SIFT feature matching
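The perceptual-hash idea above can be sketched in a few lines. This is a minimal average-hash (aHash) variant in plain NumPy; it shrinks by block averaging instead of a proper resize, and the helper names are mine, not a library API:

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """aHash sketch: shrink to hash_size x hash_size by block averaging,
    then set each bit to 1 where the cell is brighter than the mean."""
    h, w = gray.shape
    gray = gray[: h - h % hash_size, : w - w % hash_size]  # crop to a multiple
    bh, bw = gray.shape[0] // hash_size, gray.shape[1] // hash_size
    small = gray.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.sum(h1 != h2))

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
print(hamming(average_hash(img), average_hash(img)))  # identical images -> 0
```

Since the bits only depend on each cell relative to the mean, the hash is unchanged by a uniform brightness shift, which is part of why it is so coarse a similarity measure.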

Let's implement SIFT feature matching with OpenCV 4.

  • Python
import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('../UGATIT-pytorch/results/selfie2anime/test_ff/21_fake_A2A.png',0)          # queryImage
img2 = cv2.imread('../UGATIT-pytorch/results/selfie2anime/test_ff/21_fake_A2B2A.png',0) # trainImage

# Initiate SIFT detector
sift = cv2.SIFT_create()  # SIFT moved into the main module in OpenCV 4.4+; older builds use cv2.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1   # FLANN's kd-tree index (0 is the linear index)
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)   # or pass empty dictionary

flann = cv2.FlannBasedMatcher(index_params,search_params)

matches = flann.knnMatch(des1,des2,k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]

draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)

img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)

plt.imshow(img3), plt.show()
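If we want a single similarity number rather than a picture, one simple (homemade, not an OpenCV API) choice is the fraction of knn matches that survive the ratio test, read straight off the `matchesMask` built above:

```python
def ratio_test_similarity(matches_mask):
    """Fraction of knn matches that passed Lowe's ratio test,
    given the list of [1,0]/[0,0] entries built for drawMatchesKnn."""
    if not matches_mask:
        return 0.0
    good = sum(flag for flag, _ in matches_mask)
    return good / len(matches_mask)

print(ratio_test_similarity([[1, 0], [0, 0], [1, 0], [1, 0]]))  # -> 0.75
```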

(Figure 1: SIFT + FLANN matching result)

  • C++
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <iostream>

using namespace cv;
using namespace std;
using namespace xfeatures2d;

int main(int argc, char *argv[])
{
    char *A2A = argv[1];
    char *A2B2A = argv[2];
    Mat object = imread(A2A, 0);
    Mat object_scene = imread(A2B2A, 0);

    // cout<< "object row: "<< object.rows <<", col: "<
    //extract feature
    int minHessian = 400;
    Ptr<SURF> surfPtr = SURF::create(minHessian);
    vector<KeyPoint> keypoint_object, keypoint_scene;
    Mat descriptor_object, descriptor_scene;
    surfPtr->detectAndCompute(object, Mat(), keypoint_object, descriptor_object);
    surfPtr->detectAndCompute(object_scene, Mat(), keypoint_scene, descriptor_scene);

    //matching
    FlannBasedMatcher matcher;
    vector<DMatch> matches;
    matcher.match(descriptor_object, descriptor_scene, matches);

    //find good matchers
    double minDist = 1000;
    double maxDist = 0;

    for (size_t i = 0; i < matches.size(); ++i)
    {
        if (matches[i].distance > maxDist)
        {
            maxDist = matches[i].distance;
        }

        if (matches[i].distance < minDist)
        {
            minDist = matches[i].distance;
        }
    }

    vector<DMatch> goodMatches;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        if (matches[i].distance < max(3 * minDist, 0.02))
        {
            goodMatches.push_back(matches[i]);
        }
    }
    // cout<<"The num of good match point is "<< goodMatches.size() <
    //draw mathes
    Mat matchImage;
    drawMatches(object, keypoint_object, object_scene, keypoint_scene, goodMatches, matchImage, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    vector<Point2f> obj;
    vector<Point2f> obj_scene;
    for (size_t t = 0; t < goodMatches.size(); ++t)
    {
        obj.push_back(keypoint_object[goodMatches[t].queryIdx].pt);
        obj_scene.push_back(keypoint_scene[goodMatches[t].trainIdx].pt);
    }

    Mat H = findHomography(obj, obj_scene, RANSAC);

    vector<Point2f> obj_corner(4);
    vector<Point2f> scene_corner(4);
    obj_corner[0] = Point(0, 0);
    obj_corner[1] = Point(object.cols,0);
    obj_corner[2] = Point(object.cols, object.rows);
    obj_corner[3] = Point(0, object.rows);
    perspectiveTransform(obj_corner, scene_corner, H);

    //draw line
    line(matchImage, scene_corner[0] + Point2f(object.cols, 0), scene_corner[1] + Point2f(object.cols, 0),Scalar(0,0,255),2,8,0);
    line(matchImage, scene_corner[1] + Point2f(object.cols, 0), scene_corner[2] + Point2f(object.cols, 0), Scalar(0, 0, 255), 2, 8, 0);
    line(matchImage, scene_corner[2] + Point2f(object.cols, 0), scene_corner[3] + Point2f(object.cols, 0), Scalar(0, 0, 255), 2, 8, 0);
    line(matchImage, scene_corner[3] + Point2f(object.cols, 0), scene_corner[0] + Point2f(object.cols, 0), Scalar(0, 0, 255), 2, 8, 0);

    imshow("matchImage", matchImage);

    waitKey();
    return 0;
}
  • CMakeLists.txt
cmake_minimum_required(VERSION 3.2.0)

project(similarity)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)

find_package(OpenCV REQUIRED)

message(STATUS "OpenCV library status:")
message(STATUS "    version: ${OpenCV_VERSION}")

include_directories(${OpenCV_INCLUDE_DIRS})

add_executable(sift sift_match.cpp)
target_link_libraries(sift ${OpenCV_LIBS})

(Figure 2: SURF matching result with the homography outline drawn in red)

Neither result is very good; a lot of spurious matches still need to be filtered out.
The algorithms are classics, though. Someday I'll understand them properly.
