When analyzing two-view geometry, we can always simplify the relative pose between the cameras with a homography. Here the homography performs only a rigid transformation, i.e. it merely changes the coordinate system. Aligning the origin and coordinate axes with the first camera gives:

    P1 = K1 [I | 0],    P2 = K2 [R | t]

where K1 and K2 are the calibration matrices, R is the rotation matrix of the second camera, and t is its translation vector. With these we obtain the projections x1 and x2 of a scene point X (under the projection matrices P1 and P2 respectively); conversely, starting from image correspondences we can recover the camera matrices.
The projected points satisfy:

    x1 = P1 X,    x2 = P2 X

and the fundamental matrix F satisfies:

    F = K2^{-T} S_t R K1^{-1}

where S_t is the skew-symmetric matrix of the translation t (i.e. S_t a = t × a for any vector a).
The fundamental matrix F encodes the constraint between corresponding points in the two images: any matching pixel pair p and p' satisfies p'^T F p = 0. Given a pixel p in one image, F p is the epipolar line in the other image on which the corresponding point p' must lie (F alone does not pin down p' itself).
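To make the constraint concrete, here is a small numerical check (a sketch with made-up calibration and pose values, not taken from the experiment): build P1, P2 and F exactly as defined above and verify that a projected point pair satisfies x2^T F x1 = 0.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix S_t such that skew(t) @ a == np.cross(t, a)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# hypothetical calibration matrices and relative pose (illustration only)
K1 = np.diag([800.0, 800.0, 1.0])
K2 = np.diag([820.0, 820.0, 1.0])
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])

# projection matrices P1 = K1 [I | 0], P2 = K2 [R | t]
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, t.reshape(3, 1)])

# fundamental matrix F = K2^{-T} S_t R K1^{-1}
F = np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

# project a 3D point into both views and check the epipolar constraint
X = np.array([0.5, -0.3, 4.0, 1.0])
x1 = P1 @ X; x1 = x1 / x1[2]
x2 = P2 @ X; x2 = x2 / x2[2]
print(x2 @ F @ x1)  # close to 0 up to floating-point error
```

The residual x2^T F x1 vanishes identically for noise-free data because (RX + t)^T S_t (RX + t) = 0; with real matches it is only approximately zero, which is what RANSAC thresholds below.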
The experiments use both indoor and outdoor image pairs.
Code:
from PIL import Image
from numpy import *
from pylab import *
import numpy as np
from PCV.geometry import homography, camera, sfm
from PCV.localdescriptors import sift
from importlib import reload  # the imp module is removed in Python 3.12+
camera = reload(camera)
homography = reload(homography)
sfm = reload(sfm)
sift = reload(sift)
# load both images and compute their SIFT features
im1 = array(Image.open('data/55.jpg'))
sift.process_image('data/55.jpg', 'im1.sift')
im2 = array(Image.open('data/56.jpg'))
sift.process_image('data/56.jpg', 'im2.sift')
l1, d1 = sift.read_features_from_file('im1.sift')
l2, d2 = sift.read_features_from_file('im2.sift')

# symmetric (two-sided) descriptor matching
matches = sift.match_twosided(d1, d2)
# keep matched points and convert them to homogeneous coordinates
ndx = matches.nonzero()[0]
x1 = homography.make_homog(l1[ndx, :2].T)
ndx2 = [int(matches[i]) for i in ndx]
x2 = homography.make_homog(l2[ndx2, :2].T)
d1n = d1[ndx]
d2n = d2[ndx2]
x1n = x1.copy()
x2n = x2.copy()
figure(figsize=(16,16))
sift.plot_matches(im1, im2, l1, l2, matches, True)
show()
def F_from_ransac(x1, x2, model, maxiter=5000, match_threshold=1e-6):
    """ Robust estimation of a fundamental matrix F from point
        correspondences using RANSAC (ransac.py from
        http://www.scipy.org/Cookbook/RANSAC).

        input: x1, x2 (3*n arrays) points in hom. coordinates. """
    from PCV.tools import ransac
    data = np.vstack((x1, x2))
    d = 10  # 20 is the original
    # compute F and return with inlier index
    F, ransac_data = ransac.ransac(data.T, model,
                                   8, maxiter, match_threshold, d, return_all=True)
    return F, ransac_data['inliers']
# estimate F with RANSAC and recover the projection matrices
model = sfm.RansacModel()
F, inliers = F_from_ransac(x1n, x2n, model, maxiter=5000, match_threshold=1e-5)
print(F)

P1 = array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
P2 = sfm.compute_P_from_fundamental(F)
print(P2)

# triangulate the inlier matches and reproject into both views
X = sfm.triangulate(x1n[:, inliers], x2n[:, inliers], P1, P2)
cam1 = camera.Camera(P1)
cam2 = camera.Camera(P2)
x1p = cam1.project(X)
x2p = cam2.project(X)
# plot the reprojected points on a side-by-side composite image
figure(figsize=(16, 16))
imj = sift.appendimages(im1, im2)
imj = vstack((imj, imj))
imshow(imj)

cols1 = im1.shape[1]
rows1 = im1.shape[0]
for i in range(len(x1p[0])):
    if (0 <= x1p[0][i] < cols1) and (0 <= x2p[0][i] < cols1) and \
       (0 <= x1p[1][i] < rows1) and (0 <= x2p[1][i] < rows1):
        plot([x1p[0][i], x2p[0][i] + cols1], [x1p[1][i], x2p[1][i]], 'c')
axis('off')
show()
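The listing above delegates the actual fitting of F to PCV's sfm module. For reference, here is a minimal sketch of the normalized eight-point algorithm it is based on (the helper names below are my own, not PCV's API):

```python
import numpy as np

def normalize(pts):
    """Translate/scale homogeneous 2D points (3xn) so the centroid is the
    origin and the mean distance from it is sqrt(2)."""
    pts = pts / pts[2]
    mean = pts[:2].mean(axis=1)
    dist = np.sqrt(((pts[:2] - mean[:, None]) ** 2).sum(axis=0)).mean()
    s = np.sqrt(2) / dist
    T = np.array([[s, 0, -s * mean[0]],
                  [0, s, -s * mean[1]],
                  [0, 0, 1.0]])
    return T @ pts, T

def eight_point_F(x1, x2):
    """Estimate F from >= 8 correspondences (3xn homogeneous arrays),
    so that x2.T @ F @ x1 == 0 for each matched pair."""
    x1n, T1 = normalize(x1)
    x2n, T2 = normalize(x2)
    n = x1.shape[1]
    # each correspondence contributes one row of the linear system A f = 0
    A = np.array([[x2n[0, i] * x1n[0, i], x2n[0, i] * x1n[1, i], x2n[0, i],
                   x2n[1, i] * x1n[0, i], x2n[1, i] * x1n[1, i], x2n[1, i],
                   x1n[0, i], x1n[1, i], 1.0] for i in range(n)])
    # f is the right singular vector for the smallest singular value
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # enforce rank 2 by zeroing the smallest singular value of F
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # undo the normalization and fix the scale
    F = T2.T @ F @ T1
    return F / F[2, 2]
```

Inside RANSAC this fit is run on random 8-point samples, and `match_threshold` decides how large an epipolar residual a match may have while still counting as an inlier.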
Fundamental matrix F:
[[ 1.98887001e-06 8.27249369e-05 -8.35515278e-03]
[-8.34002841e-05 2.80437530e-06 8.98960638e-03]
[ 7.03633085e-03 -1.81199306e-02 1.00000000e+00]]
Projection matrices
P1:
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]]
P2:
[[-7.48030043e-01 8.04740348e-01 8.95374068e+01 2.16003609e+02]
[ 1.80474514e+00 -1.94187082e+00 -2.15996573e+02 8.95192868e+01]
[ 1.76908427e-02 8.07168914e-03 -4.54385772e+00 1.00000000e+00]]
P3:
[[-4.20861259e-03 -3.27420222e-03 7.09821355e-01 -5.95395303e-01]
[-3.12067197e-03 1.39965895e-03 3.49699535e-01 -1.38695457e-01]
[-3.31985560e-05 -1.05590747e-05 5.35848261e-03 -2.63669445e-03]]
Fundamental matrix F:
[[ 2.79581382e-06 -7.78641696e-06 5.09004126e-03]
[ 1.21599362e-05 -8.33557647e-07 3.61811421e-03]
[-7.47814732e-03 -5.23955075e-03 1.00000000e+00]]
Projection matrices
P1:
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]]
P2:
[[ 4.01666559e+00 2.85513028e+00 7.89126113e+02 -7.57386679e+02]
[ 3.85513224e+00 2.74032367e+00 7.57379201e+02 7.89120873e+02]
[ 3.69109345e-03 -8.96433401e-03 9.86952808e+00 1.00000000e+00]]
P3:
[[-1.62633361e-01 -7.16853237e-02 -1.39525288e-03 -3.70038874e-01]
[ 5.24528951e-02 -2.45617573e-01 -1.45947400e-03 3.68897123e-05]
[ 5.77787038e-04 -2.25668468e-03 -1.36116962e-05 -9.03004971e-06]]
During the experiments the indoor image pairs yielded far fewer feature matches than the outdoor pairs (the results in section 2.1 above were deliberately chosen from a run with a relatively large number of matches). A likely reason is that indoor objects are finely detailed, and the uneven artificial lighting at night changes strongly with viewing angle, making descriptors harder to match. Increasing the RANSAC threshold used when estimating the fundamental matrix noticeably improves the results.
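One way to automate that threshold adjustment (a hedged sketch; the helper name and the threshold schedule are my own) is to retry the estimation with progressively looser thresholds until RANSAC finds a consensus set. `F_from_ransac` from the listing above would be passed in via the `estimate` callable:

```python
def estimate_with_relaxing_threshold(estimate, thresholds=(1e-6, 1e-5, 1e-4, 1e-3)):
    """Try a RANSAC estimate at each threshold in turn and return the first
    result that succeeds together with the threshold that worked, e.g.
        estimate = lambda th: F_from_ransac(x1n, x2n, model, match_threshold=th)
    The scipy-cookbook ransac used by PCV raises ValueError when no model
    meets the acceptance criteria, which is what we catch here."""
    last_err = None
    for th in thresholds:
        try:
            return estimate(th), th
        except ValueError as err:  # no consensus set at this threshold
            last_err = err
    raise last_err  # every threshold failed; surface the last error
```

Note the trade-off: a looser threshold admits more inliers on hard pairs such as the indoor images, but it also lets in worse matches, so the recovered F becomes less precise.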