Oculus VR SDK: Implementing the Left/Right Eye View Offset

The user's head has a pose in the scene, which we call the head pose (HeadPose).
Since the eyes are offset from the head, we call the pose of the midpoint between the two eyes the center eye pose (centerEyePose).
The computation proceeds in two steps:
1. Compute the center eye view matrix, centerEyeViewMatrix.
2. Derive the left and right eye view matrices from the center eye view matrix.

Step 1: Compute the center eye view matrix (centerEyeViewMatrix)
As the head turns, the tracker's sensors report its orientation as a quaternion, for example:
ovrQuatf (w, x, y, z)
To use this quaternion in the renderer, we first convert it to matrix form so it can be combined with other transforms using ordinary linear algebra:
// Returns the 4x4 rotation matrix for the given quaternion.
static inline ovrMatrix4f ovrMatrix4f_CreateFromQuaternion( const ovrQuatf * q )
{
    const float ww = q->w * q->w;
    const float xx = q->x * q->x;
    const float yy = q->y * q->y;
    const float zz = q->z * q->z;

    ovrMatrix4f out;
    out.M[0][0] = ww + xx - yy - zz;
    out.M[0][1] = 2 * ( q->x * q->y - q->w * q->z );
    out.M[0][2] = 2 * ( q->x * q->z + q->w * q->y );
    out.M[0][3] = 0;

    out.M[1][0] = 2 * ( q->x * q->y + q->w * q->z );
    out.M[1][1] = ww - xx + yy - zz;
    out.M[1][2] = 2 * ( q->y * q->z - q->w * q->x );
    out.M[1][3] = 0;

    out.M[2][0] = 2 * ( q->x * q->z - q->w * q->y );
    out.M[2][1] = 2 * ( q->y * q->z + q->w * q->x );
    out.M[2][2] = ww - xx - yy + zz;
    out.M[2][3] = 0;

    out.M[3][0] = 0;
    out.M[3][1] = 0;
    out.M[3][2] = 0;
    out.M[3][3] = 1;
    return out;
}
Since the eyes turn together with the head, the eye pose and the head pose share the same orientation quaternion. Converting the head pose quaternion as above gives the center eye rotation matrix, centerEyeRotation:
const ovrMatrix4f centerEyeRotation = ovrMatrix4f_CreateFromQuaternion( &tracking->HeadPose.Pose.Orientation );

From the position of the eye center in world coordinates, we then build the center eye translation matrix, centerEyeTranslation:
const ovrMatrix4f centerEyeTranslation = ovrMatrix4f_CreateTranslation( centerEyeOffset.x, centerEyeOffset.y, centerEyeOffset.z );
Multiplying the translation matrix by the rotation matrix gives the center eye transform, centerEyeTransform:
const ovrMatrix4f centerEyeTransform = ovrMatrix4f_Multiply( &centerEyeTranslation, &centerEyeRotation );
Inverting this transform yields the center eye view matrix (centerEyeViewMatrix):
const ovrMatrix4f centerEyeViewMatrix = ovrMatrix4f_Inverse( &centerEyeTransform );
Step 2: Derive the left and right eye view matrices from the center eye view matrix
For the same VR scene, the left and right eyes actually see slightly different images.
This difference is produced at draw time by shifting the camera position, so on top of the center eye view matrix we still need a per-eye camera view matrix.
The key concept here is the interpupillary distance (InterpupillaryDistance):

For example, if InterpupillaryDistance is 0.0640f (in meters), the camera is shifted by half the IPD in one direction to form the left eye camera position, and by half the IPD in the other direction to form the right eye camera position:

const float eyeOffset = ( eye ? -0.5f : 0.5f ) * headModelParms->InterpupillaryDistance;

With the expression above, eye = 0 gives eyeOffset = 0.0320f and eye = 1 gives eyeOffset = -0.0320f, i.e. half the IPD to each side. (Note that the offset is applied to the view matrix, which is the inverse of the camera transform, so a positive x offset in the view matrix corresponds to a camera shifted toward negative x.)
How do we express this offset as a matrix?
1. First, build a homogeneous translation matrix. To move an object by a displacement (x, y, z), multiply by the following homogeneous transformation matrix:

// Returns a 4x4 homogeneous translation matrix.
static inline ovrMatrix4f ovrMatrix4f_CreateTranslation( const float x, const float y, const float z )
{
    ovrMatrix4f out;
    out.M[0][0] = 1.0f; out.M[0][1] = 0.0f; out.M[0][2] = 0.0f; out.M[0][3] = x;
    out.M[1][0] = 0.0f; out.M[1][1] = 1.0f; out.M[1][2] = 0.0f; out.M[1][3] = y;
    out.M[2][0] = 0.0f; out.M[2][1] = 0.0f; out.M[2][2] = 1.0f; out.M[2][3] = z;
    out.M[3][0] = 0.0f; out.M[3][1] = 0.0f; out.M[3][2] = 0.0f; out.M[3][3] = 1.0f;
    return out;
}

So to offset the camera for each eye, we multiply the camera's matrix by a per-eye offset matrix.
The offset matrix is built with:
const ovrMatrix4f eyeOffsetMatrix = ovrMatrix4f_CreateTranslation( eyeOffset, 0.0f, 0.0f );
Multiplying this eyeOffsetMatrix with the camera matrix (the centerEyeViewMatrix from step 1) then yields the per-eye view matrix.
In an Android renderer, the same camera-to-eye multiplication looks like this:

// Apply the eye transformation to the camera.
Matrix.multiplyMM(view, 0, eye.getEyeView(), 0, camera, 0);
The declaration of multiplyMM is:
/**
 * Multiplies two 4x4 matrices together and stores the result in a third 4x4
 * matrix. In matrix notation: result = lhs x rhs. Due to the way
 * matrix multiplication works, the result matrix will have the same
 * effect as first multiplying by the rhs matrix, then multiplying by
 * the lhs matrix. This is the opposite of what you might expect.
 *
 * The same float array may be passed for result, lhs, and/or rhs. However,
 * the result element values are undefined if the result elements overlap
 * either the lhs or rhs elements.
 *
 * @param result The float array that holds the result.
 * @param resultOffset The offset into the result array where the result is stored.
 * @param lhs The float array that holds the left-hand-side matrix.
 * @param lhsOffset The offset into the lhs array where the lhs is stored.
 * @param rhs The float array that holds the right-hand-side matrix.
 * @param rhsOffset The offset into the rhs array where the rhs is stored.
 *
 * @throws IllegalArgumentException if result, lhs, or rhs are null, or if
 * resultOffset + 16 > result.length or lhsOffset + 16 > lhs.length or
 * rhsOffset + 16 > rhs.length.
 */
public static native void multiplyMM(float[] result, int resultOffset,
        float[] lhs, int lhsOffset, float[] rhs, int rhsOffset);


It simply multiplies two 4x4 matrices and stores the result in a third. The call above multiplies eye.getEyeView() by the camera matrix and stores the result in view.
Let's look at some actual per-eye data (the eye view matrix and FOV for each eye):
eyeinfo = 1
0.7840466 0.17531556 -0.5954287 0.0
-0.21760693 0.9760361 8.4029883E-4 0.0
0.5813073 0.1289106 0.80340767 0.0
0.015629478 -0.0017972889 0.080063015 1.0
{
   left: 35.19728,
   right: 50.898663,
   bottom: 50.099197,
   top: 48.315865,
} false

eyeinfo = 2
 0.7840466 0.17531556 -0.5954287 0.0
 -0.21760693 0.9760361 8.4029883E-4 0.0
 0.5813073 0.1289106 0.80340767 0.0
 -0.048270524 -0.0017972889 0.080063015 1.0
{
   left: 50.89864,
   right: 35.19729,
   bottom: 50.09919,
   top: 48.31586,
} false

Note that in the dumps above the two matrices differ only in the x component of their translation (0.0156 vs. -0.0483, a difference of about 0.064 m, i.e. one IPD). Once the left and right eye view matrices have been computed, they determine where everything in each eye's image is drawn:
// Set the position of the light
Matrix.multiplyMV(lightPosInEyeSpace, 0, view, 0, LIGHT_POS_IN_WORLD_SPACE, 0);

// Build the ModelView and ModelViewProjection matrices
// for calculating cube position and light.
float[] perspective = eye.getPerspective(Z_NEAR, Z_FAR);
Matrix.multiplyMM(modelView, 0, view, 0, modelCube, 0);
Matrix.multiplyMM(modelViewProjection, 0, perspective, 0, modelView, 0);
drawCube();

// Set modelView for the floor, so we draw floor in the correct location
Matrix.multiplyMM(modelView, 0, view, 0, modelFloor, 0);
Matrix.multiplyMM(modelViewProjection, 0, perspective, 0, modelView, 0);
drawFloor();

In the code above, multiplying by view is what places the light, the cube, and the floor at their correct positions in eye space for each eye.
