Relocalization in ORB-SLAM: Explanation and Code Trace

Let us take relocalization in ORB-SLAM as a worked example. When tracking fails, the system triggers relocalization: it retrieves a set of candidate keyframes, estimates a pose for each candidate with EPnP inside a RANSAC loop, updates the current frame's map-point matches, and optimizes the pose. If too few inliers survive, previously unmatched points are matched by projection and the pose is optimized again, until the pose is supported by enough inliers and relocalization is declared successful.
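Condensed to its skeleton, the whole procedure is a loop over candidate keyframes that keeps going until one pose is supported by enough inliers. The sketch below illustrates only that outer loop; Candidate and relocalizeSketch are hypothetical stand-ins, not the real ORB-SLAM classes (the actual function is listed at the end of this post).

#include <functional>
#include <optional>
#include <vector>

// Hypothetical stand-ins used only for this sketch (not ORB-SLAM types):
// a candidate keyframe is reduced to three callables reporting how many
// matches/inliers each stage of the pipeline produces.
struct Candidate
{
    std::function<int()>  matchByBoW;      // BoW matching against the current frame
    std::function<bool()> solvePnPRansac;  // EPnP inside RANSAC; true if a pose was found
    std::function<int()>  refinePose;      // motion-only pose optimization, returns inlier count
};

// Try each candidate until one pose is supported by enough inliers.
std::optional<int> relocalizeSketch(std::vector<Candidate>& candidates)
{
    for (Candidate& c : candidates)
    {
        if (c.matchByBoW() < 15)     // too few BoW matches: discard this keyframe
            continue;
        if (!c.solvePnPRansac())     // RANSAC/EPnP could not produce a pose
            continue;
        int nInliers = c.refinePose();
        if (nInliers >= 50)          // enough support: relocalization succeeds
            return nInliers;
    }
    return std::nullopt;             // every candidate failed; tracking stays lost
}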

 

 

Here is a brief walkthrough of the relocalization process:

  • Use the keyframe database's DetectRelocalizationCandidates function to find candidate keyframes for the current frame
  • For each candidate keyframe, use SearchByBoW to match the current frame against it (i.e., update the map points associated with the current frame's feature points, stored in mvpMapPoints), and initialize a PnPsolver from the current frame and these matched map points
  • For each candidate keyframe, estimate the pose with EPnP
    • If the estimate succeeds, update the current frame's feature matches with the resulting inliers
    • Call PoseOptimization to optimize the current frame's pose
    • After optimization, discard the outlier matches (i.e., remove wrongly matched map points)
    • If too few inliers remain after optimization, use SearchByProjection to match the current frame's still-unmatched feature points against the keyframe
    • If the total number of matches (inliers plus newly added matches) reaches 50, call PoseOptimization again
    • If the inlier count after this optimization is still modest (between 30 and 50), call SearchByProjection once more with a narrower search window
    • If the total number of matches again reaches 50, call PoseOptimization one last time and discard the outliers
    • If the final inlier count reaches 50, stop: relocalization has succeeded, and the current frame's ID is recorded as the last relocalization frame ID (a condensed sketch of this accept/retry cascade follows the list)
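The thresholds in the last few bullets form a small accept/retry cascade (10, 50, 30-50, 50). The following is a minimal sketch of just that decision logic; optimizePose and searchByProjection are stand-ins for Optimizer::PoseOptimization and ORBmatcher::SearchByProjection rather than the real APIs, and the window values 10 and 3 match the arguments passed in the real code below.

#include <functional>

// Minimal sketch of the per-candidate inlier-threshold cascade.
// optimizePose() returns the number of inliers after motion-only BA;
// searchByProjection(window) returns the number of additional matches found.
bool acceptPose(const std::function<int()>& optimizePose,
                const std::function<int(int)>& searchByProjection)
{
    int nGood = optimizePose();            // first optimization after EPnP
    if (nGood < 10)
        return false;                      // hopeless: move on to the next candidate

    if (nGood < 50)
    {
        int nAdd = searchByProjection(10); // coarse search window
        if (nGood + nAdd >= 50)
        {
            nGood = optimizePose();
            if (nGood > 30 && nGood < 50)  // close, but not enough: retry with a narrower window
            {
                nAdd = searchByProjection(3);
                if (nGood + nAdd >= 50)
                    nGood = optimizePose(); // final optimization
            }
        }
    }
    return nGood >= 50;                    // accept the pose only with at least 50 inliers
}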

The following diagrams are taken from the web. They should not be trusted blindly, but they can serve as a reference; for the ground truth, read the ORB-SLAM source code:

[Figure 1]

[Figure 2]

[Figure 3]


The relocalization function is implemented in the Tracking.cpp source file:

bool Tracking::Relocalization()
{
    // Compute Bag of Words Vector
    // Step 1: compute the BoW representation of the current frame's features
    mCurrentFrame.ComputeBoW();

    // Relocalization is performed when tracking is lost
    // Track Lost: Query KeyFrame Database for keyframe candidates for relocalisation
    // Step 2: find candidate keyframes similar to the current frame
    vector<KeyFrame*> vpCandidateKFs = mpKeyFrameDB->DetectRelocalizationCandidates(&mCurrentFrame);  // retrieve the candidate keyframes first

    if(vpCandidateKFs.empty())
        return false;

    const int nKFs = vpCandidateKFs.size();

    // We perform first an ORB matching with each candidate
    // If enough matches are found we setup a PnP solver
    ORBmatcher matcher(0.75,true);

    vector<PnPsolver*> vpPnPsolvers;
    vpPnPsolvers.resize(nKFs);

    vector<vector<MapPoint*> > vvpMapPointMatches;
    vvpMapPointMatches.resize(nKFs);

    vector<bool> vbDiscarded;
    vbDiscarded.resize(nKFs);

    int nCandidates=0;

    for(int i=0; i<nKFs; i++)
    {
        KeyFrame* pKF = vpCandidateKFs[i];
        if(pKF->isBad())
            vbDiscarded[i] = true;  // this candidate keyframe is skipped
        else
        {
            // Step 3: match against the candidate keyframe via BoW
            int nmatches = matcher.SearchByBoW(pKF,mCurrentFrame,vvpMapPointMatches[i]);  // the third argument is a vector<MapPoint*> holding the map points matched to the current frame's features
            if(nmatches<15)
            {
                vbDiscarded[i] = true;
                continue;
            }
            else
            {
                // Initialize the PnP solver with the current frame and its matched map points
                PnPsolver* pSolver = new PnPsolver(mCurrentFrame,vvpMapPointMatches[i]);
                pSolver->SetRansacParameters(0.99,10,300,4,0.5,5.991);
                vpPnPsolvers[i] = pSolver;
                nCandidates++;
            }
        }
    }

    // Alternatively perform some iterations of P4P RANSAC
    // Until we found a camera pose supported by enough inliers
    bool bMatch = false;
    ORBmatcher matcher2(0.9,true);

    while(nCandidates>0 && !bMatch)
    {
        for(int i=0; i<nKFs; i++)
        {
            if(vbDiscarded[i])
                continue;

            // Perform 5 Ransac Iterations
            vector<bool> vbInliers;
            int nInliers;
            bool bNoMore;

            // Step 4: estimate the pose with the EPnP algorithm (inside RANSAC)
            PnPsolver* pSolver = vpPnPsolvers[i];
            cv::Mat Tcw = pSolver->iterate(5,bNoMore,vbInliers,nInliers);

            // If Ransac reaches max. iterations discard keyframe
            if(bNoMore)
            {
                vbDiscarded[i]=true;
                nCandidates--;
            }

            // If a Camera Pose is computed, optimize
            if(!Tcw.empty())
            {
                Tcw.copyTo(mCurrentFrame.mTcw);

                set<MapPoint*> sFound;

                const int np = vbInliers.size();

                for(int j=0; j<np; j++)
                {
                    if(vbInliers[j])
                    {
                        mCurrentFrame.mvpMapPoints[j]=vvpMapPointMatches[i][j];
                        sFound.insert(vvpMapPointMatches[i][j]);
                    }
                    else
                        mCurrentFrame.mvpMapPoints[j]=NULL;
                }

                // Step 5: optimize the current frame's pose (motion-only BA)
                int nGood = Optimizer::PoseOptimization(&mCurrentFrame);

                if(nGood<10)
                    continue;

                // Discard the outliers flagged by the optimization (i.e., remove mismatched map points)
                for(int io =0; io<mCurrentFrame.N; io++)
                    if(mCurrentFrame.mvbOutlier[io])
                        mCurrentFrame.mvpMapPoints[io]=static_cast<MapPoint*>(NULL);

                // If few inliers, search by projection in a coarse window and optimize again
                // Step 6: if there are too few inliers, match previously unmatched points by projection and optimize again
                if(nGood<50)
                {
                    int nadditional =matcher2.SearchByProjection(mCurrentFrame,vpCandidateKFs[i],sFound,10,100);  // this is the third overload of SearchByProjection

                    if(nadditional+nGood>=50)
                    {
                        nGood = Optimizer::PoseOptimization(&mCurrentFrame);

                        // If many inliers but still not enough, search by projection again in a narrower window
                        // the camera has been already optimized with many points
                        if(nGood>30 && nGood<50)
                        {
                            sFound.clear();
                            for(int ip =0; ip<mCurrentFrame.N; ip++)
                                if(mCurrentFrame.mvpMapPoints[ip])
                                    sFound.insert(mCurrentFrame.mvpMapPoints[ip]);
                            nadditional =matcher2.SearchByProjection(mCurrentFrame,vpCandidateKFs[i],sFound,3,64);

                            // Final optimization
                            if(nGood+nadditional>=50)
                            {
                                nGood = Optimizer::PoseOptimization(&mCurrentFrame);

                                for(int io =0; io<mCurrentFrame.N; io++)
                                    if(mCurrentFrame.mvbOutlier[io])
                                        mCurrentFrame.mvpMapPoints[io]=NULL;
                            }
                        }
                    }
                }

                // If the pose is supported by enough inliers stop ransacs and continue
                if(nGood>=50)
                {
                    bMatch = true;
                    break;
                }
            }
        }
    }

    if(!bMatch)
    {
        return false;
    }
    else
    {
        mnLastRelocFrameId = mCurrentFrame.mnId;
        return true;
    }

}
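For context, Relocalization() is called from Tracking::Track() once the tracker has entered the LOST state. The fragment below is a simplified excerpt from memory of the upstream code, not the complete logic; mState is a Tracking member and bOK a local of Track().

// Inside Tracking::Track(), simplified:
if (mState == LOST)
{
    // Tracking has been lost, so the only way to continue is to relocalize.
    bOK = Relocalization();
}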

 
