Problems Running HiFT on Ubuntu 18.04

  • 0 Official HiFT code link
  • 1 ModuleNotFoundError: No module named 'pysot'
  • 2 Adjusting the UAV123_10fps dataset
  • 3 cv2.error: OpenCV(4.5.5) /io/opencv/modules/highgui/src/window.cpp:1262: error: (-2:Unspecified error)
  • 4 Modifying eval.py
  • 5 Modifying draw_success_precision.py
  • 6 Installing LaTeX on Ubuntu 20
  • 7 Converting TXT result files to .mat
  • 8 Ubuntu 20.04: No such file or directory: 'latex'
  • 9 AssertionError: OTB100/Human4-2/img/0001.jpg
  • 10 RuntimeWarning: More than 20 figures…
  • 11 load_text_numpy(path, delimiter, dtype) Exception: Could not read file ../OTB100/Basketball/groundtruth_rect.txt
  • 12 Result figures

0 Official HiFT code link

ICCV 2021 HiFT: Hierarchical Feature Transformer for Aerial Tracking
https://github.com/vision4robotics/HiFT

1 ModuleNotFoundError: No module named 'pysot'

Solution:
test.py and eval.py only depend on the toolkit and pysot packages, so you can simply copy those two folders into ~/anaconda3/envs/pysot/lib/python3.7/site-packages/ and run the scripts directly.
My virtual environment is named HiFT and uses Python 3.8, so I copied the two folders with the following commands:

cd ~/miniconda3/envs/HiFT/lib/python3.8/site-packages/
sudo cp -r /home/ubuntu/Code/HiFT/pysot ./
sudo cp -r /home/ubuntu/Code/HiFT/toolkit ./
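
If you would rather not copy the packages into site-packages, a .pth file pointing at the repo achieves the same thing. Below is a minimal sketch, assuming the repo lives at /home/ubuntu/Code/HiFT and the conda environment path used above; adjust both paths and make sure you have write permission to site-packages.

# Minimal sketch: register the HiFT repo on the import path via a .pth file
# instead of copying pysot/toolkit. Both paths are assumptions -- adjust them.
import pathlib

site_packages = pathlib.Path(
    "~/miniconda3/envs/HiFT/lib/python3.8/site-packages").expanduser()
(site_packages / "hift.pth").write_text("/home/ubuntu/Code/HiFT\n")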

Check that it works:

cd /home/ubuntu/Code/HiFT/tools
python test.py --dataset /home/ubuntu/Share/QY/UAV10fps --snapshot snapshot/first.pth

2 Adjusting the UAV123_10fps dataset

The dataset needs some adjustment. configSeqs.m records the start and end frame of every sequence, so the labels in anno can be adjusted according to that file. Some sequences are not continuous: after merging their label files in anno, you also have to delete the images in data_seq that carry no annotation.
For example, bird1: after merging the labels of bird1_1, bird1_2 and bird1_3 into a single bird1 folder, you still need to delete frames 86~258 and 494~524 of bird1 in data_seq, because no annotations are provided for them.
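
As a concrete illustration, here is a small sketch (not an official script) of that bird1 adjustment. The directory layout, file names and the zero-padded image naming are assumptions; only the unannotated frame ranges (86~258, 494~524) come from the text above.

import os

# Hypothetical paths -- adjust to your own dataset layout.
anno_dir = "anno/UAV123_10fps"
seq_dir = "data_seq/UAV123_10fps/bird1"

# 1. Merge the sub-sequence annotation files into a single bird1.txt.
with open(os.path.join(anno_dir, "bird1.txt"), "w") as out:
    for part in ("bird1_1.txt", "bird1_2.txt", "bird1_3.txt"):
        with open(os.path.join(anno_dir, part)) as f:
            out.write(f.read())

# 2. Delete the frames that have no ground-truth annotation.
unannotated = list(range(86, 259)) + list(range(494, 525))
for idx in unannotated:
    img = os.path.join(seq_dir, "%06d.jpg" % idx)  # assuming zero-padded names
    if os.path.exists(img):
        os.remove(img)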

3 cv2.error: OpenCV(4.5.5) /io/opencv/modules/highgui/src/window.cpp:1262: error: (-2:Unspecified error)

Solution:

pip install opencv-python
pip install opencv-contrib-python 
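
After reinstalling, a quick sanity check (a sketch; it requires a display) confirms that cv2.imshow has a working GUI backend again:

import numpy as np
import cv2

# Open a small black window for half a second; if the highgui error is fixed,
# this runs without raising cv2.error.
cv2.imshow("check", np.zeros((100, 100, 3), dtype=np.uint8))
cv2.waitKey(500)
cv2.destroyAllWindows()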

4 Modifying eval.py

The modified script can draw the success and precision plots; remember to add --vis, i.e. run python eval.py --vis. The changes are as follows:

import os
import sys
import time
import argparse
import functools
sys.path.append("/home/ubuntu/Documents/HiFT") # absolute path of the project, so that toolkit and pysot can be found

from glob import glob
from tqdm import tqdm
from multiprocessing import Pool
from toolkit.datasets import UAV10Dataset,UAV20Dataset,DTBDataset,UAVDataset
from toolkit.evaluation import OPEBenchmark
from toolkit.visualization import draw_success_precision

if __name__ == '__main__':
    # parser = argparse.ArgumentParser(description='Single Object Tracking Evaluation')
    # parser.add_argument('--dataset_dir', default='',type=str, help='dataset root directory')
    # parser.add_argument('--dataset', default='DTB70',type=str, help='dataset name')
    # parser.add_argument('--tracker_result_dir',default='', type=str, help='tracker result root')
    # parser.add_argument('--trackers',default='general_model', nargs='+')
    # parser.add_argument('--vis', default='',dest='vis', action='store_true')
    # parser.add_argument('--show_video_level', default=' ',dest='show_video_level', action='store_true')
    # parser.add_argument('--num', default=1, type=int, help='number of processes to eval')
    # args = parser.parse_args()
    #/home/ubuntu/Share/QY/Dataset_UAV123_10fps/UAV10fps
    parser = argparse.ArgumentParser(description='tracking evaluation')
    parser.add_argument('--tracker_path', '-p', default='/home/ubuntu/Documents/HiFT/tools/results', type=str, help='tracker result path')
    parser.add_argument('--dataset', '-d', default='UAV10fps', type=str, help='dataset name')
    parser.add_argument('--num', '-n', default=1, type=int, help='number of thread to eval')
    parser.add_argument('--tracker_prefix', '-t', default='', type=str, help='tracker name')
    parser.add_argument('--show_video_level', '-s', dest='show_video_level', action='store_true')
    parser.add_argument('--vis', dest='vis', action='store_true')
    parser.set_defaults(show_video_level=True)
    args = parser.parse_args()
    
    tracker_dir = os.path.join(args.tracker_path, args.dataset)
    trackers = glob(os.path.join(args.tracker_path,
                                  args.dataset,
                                  args.tracker_prefix+'*'))
    print(os.path.join(args.tracker_path,
                                  args.dataset,
                                  args.tracker_prefix+'*'))
    # trackers = [x.split('/')[-1] for x in trackers]
    trackers = [os.path.basename(x) for x in trackers]
    print("trackers",len(trackers))
    assert len(trackers) > 0
    args.num = min(args.num, len(trackers))
    
    # root = os.path.realpath(os.path.join(os.path.dirname(__file__),
    #                          '../testing_dataset'))
    root = os.path.join('/home/ubuntu/Share/QY/Dataset_UAV123_10fps', args.dataset,'data_seq')

    # trackers=args.tracker_prefix

    if 'UAV10fps' in args.dataset:
        dataset = UAV10Dataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        
        benchmark = OPEBenchmark(dataset)
        # success
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                trackers), desc='eval success', total=len(trackers), ncols=18):
                success_ret.update(ret)
        # precision
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                trackers), desc='eval precision', total=len(trackers), ncols=18):
                precision_ret.update(ret)
         # norm precision
        norm_precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_norm_precision,
                trackers), desc='eval norm precision', total=len(trackers), ncols=18):
                norm_precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret, norm_precision_ret,
                show_video_level=args.show_video_level)
        if args.vis:
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                            name=dataset.name,
                            videos=videos,
                            attr=attr,
                            precision_ret=precision_ret,
                            norm_precision_ret = norm_precision_ret)
    elif 'UAV20l' in args.dataset:
        dataset = UAV20Dataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                trackers), desc='eval success', total=len(trackers), ncols=18):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                trackers), desc='eval precision', total=len(trackers), ncols=18):
                precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret,
                show_video_level=args.show_video_level)
        if args.vis:
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                            name=dataset.name,
                            videos=videos,
                            attr=attr,
                            precision_ret=precision_ret)
    elif 'DTB70' in args.dataset:
        dataset = DTBDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                trackers), desc='eval success', total=len(trackers), ncols=18):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                trackers), desc='eval precision', total=len(trackers), ncols=18):
                precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret,
                show_video_level=args.show_video_level)
        if args.vis:
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                            name=dataset.name,
                            videos=videos,
                            attr=attr,
                            precision_ret=precision_ret)
    elif 'UAV123' in args.dataset:
        dataset = UAVDataset(args.dataset, root)
        dataset.set_tracker(tracker_dir, trackers)
        benchmark = OPEBenchmark(dataset)
        success_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,
                trackers), desc='eval success', total=len(trackers), ncols=18):
                success_ret.update(ret)
        precision_ret = {}
        with Pool(processes=args.num) as pool:
            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,
                trackers), desc='eval precision', total=len(trackers), ncols=18):
                precision_ret.update(ret)
        benchmark.show_result(success_ret, precision_ret,
                show_video_level=args.show_video_level)
        if args.vis:
            for attr, videos in dataset.attr.items():
                draw_success_precision(success_ret,
                            name=dataset.name,
                            videos=videos,
                            attr=attr,
                            precision_ret=precision_ret)
    else:
        print('dataset error')

5 Modifying draw_success_precision.py

The file lives inside toolkit:
…/toolkit/visualization/draw_success_precision.py
Purpose of the change: save the generated success and precision plots to disk.

import os

import matplotlib.pyplot as plt
import numpy as np

from .draw_utils import COLOR, LINE_STYLE, MARKER_STYLE

figure_dir = '/home/ubuntu/Documents/HiFT/figure/UAV10fps'
os.makedirs(figure_dir, exist_ok=True)  # create the output directory if it does not exist

def draw_success_precision(success_ret, name, videos, attr, precision_ret=None,
        norm_precision_ret=None, bold_name=None, axis=[0, 1]):
    # success plot
    fig, ax = plt.subplots()
    ax.grid(b=True)
    ax.set_aspect(1)
    plt.xlabel('Overlap threshold')
    plt.ylabel('Success rate')
    if attr == 'ALL':
        # plt.title(r'\textbf{Success plots of OPE on %s}' % (name))
        plt.title('Success plots of OPE on %s' % (name))
    else:
        # plt.title(r'\textbf{Success plots of OPE - %s}' % (attr))
        plt.title('Success plots of OPE - %s' % (attr))
    plt.axis([0, 1]+axis)
    success = {}
    thresholds = np.arange(0, 1.05, 0.05)
    for tracker_name in success_ret.keys():
        value = [v for k, v in success_ret[tracker_name].items() if k in videos]
        success[tracker_name] = np.mean(value)
    for idx, (tracker_name, auc) in  \
            enumerate(sorted(success.items(), key=lambda x:x[1], reverse=True)):
        if tracker_name == bold_name:
            # label = r"\textbf{[%.3f] %s}" % (auc, tracker_name)
            label = "[%.3f] %s" % (auc, tracker_name)
        else:
            label = "[%.3f] " % (auc) + tracker_name
        value = [v for k, v in success_ret[tracker_name].items() if k in videos]
        
        plt.plot(thresholds, np.mean(value, axis=0),
                color=COLOR[idx], linestyle=LINE_STYLE[idx], label=label, linewidth=2)
        
    ax.legend(loc='lower left', labelspacing=0.2)
    ax.autoscale(enable=True, axis='both', tight=True)
    xmin, xmax, ymin, ymax = plt.axis()
    ax.autoscale(enable=False)
    ymax += 0.03
    plt.axis([xmin, xmax, ymin, ymax])
    plt.xticks(np.arange(xmin, xmax+0.01, 0.1))
    plt.yticks(np.arange(ymin, ymax, 0.1))
    ax.set_aspect((xmax - xmin)/(ymax-ymin))
    if attr == 'ALL':
        plt.savefig(figure_dir + '/s_' + name, dpi=600, bbox_inches='tight')
    else:
        plt.savefig(figure_dir + '/s_' + name + '_' + attr, dpi=600, bbox_inches='tight')
    plt.show()

    if precision_ret:
        # precision plot
        fig, ax = plt.subplots()
        ax.grid(b=True)
        ax.set_aspect(50)
        plt.xlabel('Location error threshold')
        plt.ylabel('Precision')
        if attr == 'ALL':
            # plt.title(r'\textbf{Precision plots of OPE on %s}' % (name))
            plt.title('Precision plots of OPE on %s' % (name))

        else:
            # plt.title(r'\textbf{Precision plots of OPE - %s}' % (attr))
            plt.title('Precision plots of OPE - %s' % (attr))
        plt.axis([0, 50]+axis)
        precision = {}
        thresholds = np.arange(0, 51, 1)
        for tracker_name in precision_ret.keys():
            value = [v for k, v in precision_ret[tracker_name].items() if k in videos]
            precision[tracker_name] = np.mean(value, axis=0)[20]
        for idx, (tracker_name, pre) in \
                enumerate(sorted(precision.items(), key=lambda x:x[1], reverse=True)):
            if tracker_name == bold_name:
                # label = r"\textbf{[%.3f] %s}" % (pre, tracker_name)
                label = "[%.3f] %s" % (pre, tracker_name)
            else:
                label = "[%.3f] " % (pre) + tracker_name
            value = [v for k, v in precision_ret[tracker_name].items() if k in videos]
            plt.plot(thresholds, np.mean(value, axis=0),
                    color=COLOR[idx], linestyle=LINE_STYLE[idx],label=label, linewidth=2)
        ax.legend(loc='lower right', labelspacing=0.2)
        ax.autoscale(enable=True, axis='both', tight=True)
        xmin, xmax, ymin, ymax = plt.axis()
        ax.autoscale(enable=False)
        ymax += 0.03
        plt.axis([xmin, xmax, ymin, ymax])
        plt.xticks(np.arange(xmin, xmax+0.01, 5))
        plt.yticks(np.arange(ymin, ymax, 0.1))
        ax.set_aspect((xmax - xmin)/(ymax-ymin))
        if attr == 'ALL':
            plt.savefig(figure_dir + '/p_' + name, dpi=600, bbox_inches='tight')
        else:
            plt.savefig(figure_dir + '/p_' + name + '_' + attr, dpi=600, bbox_inches='tight')
        plt.show()

    # norm precision plot
    if norm_precision_ret:
        fig, ax = plt.subplots()
        ax.grid(b=True)
        plt.xlabel('Location error threshold')
        plt.ylabel('Precision')
        if attr == 'ALL':
            # plt.title(r'\textbf{Normalized Precision plots of OPE on %s}' % (name))
            plt.title('Normalized Precision plots of OPE on %s' % (name))
        else:
            # plt.title(r'\textbf{Normalized Precision plots of OPE - %s}' % (attr))
            plt.title('Normalized Precision plots of OPE - %s' % (attr))
        norm_precision = {}
        thresholds = np.arange(0, 51, 1) / 100
        for tracker_name in norm_precision_ret.keys():
            value = [v for k, v in norm_precision_ret[tracker_name].items() if k in videos]
            norm_precision[tracker_name] = np.mean(value, axis=0)[20]
        for idx, (tracker_name, pre) in \
                enumerate(sorted(norm_precision.items(), key=lambda x:x[1], reverse=True)):
            if tracker_name == bold_name:
                # label = r"\textbf{[%.3f] %s}" % (pre, tracker_name)
                label = "[%.3f] %s" % (pre, tracker_name)
            else:
                label = "[%.3f] " % (pre) + tracker_name
            value = [v for k, v in norm_precision_ret[tracker_name].items() if k in videos]
            plt.plot(thresholds, np.mean(value, axis=0),
                    color=COLOR[idx], linestyle=LINE_STYLE[idx],label=label, linewidth=2)
        ax.legend(loc='lower right', labelspacing=0.2)
        ax.autoscale(enable=True, axis='both', tight=True)
        xmin, xmax, ymin, ymax = plt.axis()
        ax.autoscale(enable=False)
        ymax += 0.03
        plt.axis([xmin, xmax, ymin, ymax])
        plt.xticks(np.arange(xmin, xmax+0.01, 0.05))
        plt.yticks(np.arange(ymin, ymax, 0.1))
        ax.set_aspect((xmax - xmin)/(ymax-ymin))
        if attr == 'ALL':
            plt.savefig(figure_dir + '/n_' + name, dpi=600)
        else:
            plt.savefig(figure_dir + '/n_' + name + '_' + attr, dpi=600)
        plt.show()

6 Installing LaTeX on Ubuntu 20

# 1. Download TeX Live
wget https://mirror.ctan.org/systems/texlive/tlnet/install-tl-unx.tar.gz
# 2. Extract
tar -zxvf install-tl-unx.tar.gz
# 3. Install (run from inside the extracted install-tl-* directory)
cd install-tl-*
sudo perl install-tl
# The installation takes quite a while

# Install TeXstudio
sudo add-apt-repository ppa:sunderme/texstudio
sudo apt-get update
sudo apt install texstudio

7 Converting TXT result files to .mat

clear all;clc;
root = 'D:\A_graduate_student\code\pysot-toolkit-master\result\OTB100\SRDCF\';
dst = 'D:\A_graduate_student\code\tracker_benchmark_v1.0\tracker_benchmark_v1.0\SRDCF\';
if ~exist(dst, 'dir')
    mkdir(dst);
end
% Get the list of txt file names in the result directory
fileFolder = fullfile(root);
dirOutput = dir(fullfile(fileFolder, '*.txt'));
fileNames={dirOutput.name}; % 1*100 cell
numFile = length(fileNames);
for idxFile = 1:numFile
    path = [root  fileNames{idxFile}];
    res = load(path);
    [seq_l, xywh] = size(res);
    results{1}.res = res;
    results{1}.type = 'rect';
    results{1}.len = seq_l;
    seq_name = fileNames{idxFile}(1:end-4);
    disp(['now trans ' seq_name ' from txt to mat']);
    % Save the results struct in .mat format
    save([dst seq_name '_SRDCF.mat'], 'results');    
end
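
For reference, the same conversion can be sketched in Python with scipy.io.savemat. The paths and the '_SRDCF' suffix mirror the MATLAB script above and are assumptions; it is worth comparing one output file against a .mat produced by the MATLAB version.

import glob
import os

import numpy as np
from scipy.io import savemat

root = "result/OTB100/SRDCF"          # per-sequence txt results (assumed path)
dst = "tracker_benchmark_v1.0/SRDCF"  # output directory for .mat files
os.makedirs(dst, exist_ok=True)

for path in sorted(glob.glob(os.path.join(root, "*.txt"))):
    # handle comma- or whitespace-separated result files
    with open(path) as f:
        res = np.loadtxt(line.replace(",", " ") for line in f)
    seq_name = os.path.splitext(os.path.basename(path))[0]
    print("now trans", seq_name, "from txt to mat")
    # a one-element list is written as a 1x1 cell array, i.e. results{1}.res in MATLAB
    results = [{"res": res, "type": "rect", "len": int(res.shape[0])}]
    savemat(os.path.join(dst, seq_name + "_SRDCF.mat"), {"results": results})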

8 Ubuntu 20.04: No such file or directory: 'latex'

Solution:

sudo aptitude install texlive-fonts-recommended texlive-fonts-extra
sudo apt-get install dvipng
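
Once the packages are installed, a quick sketch like the following confirms that matplotlib can actually invoke latex (this is the code path that raised the error; the output file name is arbitrary):

import matplotlib
matplotlib.use("Agg")                     # no display needed for this check
matplotlib.rcParams["text.usetex"] = True

import matplotlib.pyplot as plt

# Rendering a TeX title forces matplotlib to call latex/dvipng; if they are
# missing, this raises "No such file or directory: 'latex'".
plt.plot([0, 1], [0, 1])
plt.title(r"\textbf{usetex check}")
plt.savefig("usetex_check.png", dpi=150)
plt.close()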

9 AssertionError: OTB100/Human4-2/img/0001.jpg

In the OTB100.json file, change Human4-2 to Human4, Jogging-1 and Jogging-2 to Jogging, and Skating2-1 and Skating2-2 to Skating2.
Note: in each case the first occurrence (the sequence name itself) stays unchanged; only the path entries are modified.
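
A small sketch of that edit in Python is below. It assumes the usual pysot-style OTB100.json layout, in which each sequence entry has 'video_dir' and 'img_names' fields carrying the folder name; check the key names against your own json before running.

import json

# Folder-name fixes described above (the sequence keys themselves stay unchanged).
renames = {
    "Human4-2": "Human4",
    "Jogging-1": "Jogging",
    "Jogging-2": "Jogging",
    "Skating2-1": "Skating2",
    "Skating2-2": "Skating2",
}

with open("OTB100.json") as f:
    meta = json.load(f)

for seq, info in meta.items():
    old = info.get("video_dir", "")
    if old in renames:
        new = renames[old]
        info["video_dir"] = new
        # image paths look like "Human4-2/img/0001.jpg" in this layout
        info["img_names"] = [p.replace(old + "/", new + "/", 1)
                             for p in info["img_names"]]

with open("OTB100.json", "w") as f:
    json.dump(meta, f, indent=4)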

10 RuntimeWarning: More than 20 figures…

RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory.
Solution: close each figure after it has been shown or saved:

plt.close()
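
Concretely, in the modified draw_success_precision.py above this means something like the following (a sketch, shown for the success plot; the precision and normalized-precision plots are handled the same way):

# ... at the end of each plot block in draw_success_precision.py
plt.savefig(figure_dir + '/s_' + name, dpi=600, bbox_inches='tight')
plt.show()
plt.close(fig)   # release the figure so matplotlib does not keep >20 open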

11 load_text_numpy(path, delimiter, dtype) Exception: Could not read file ../OTB100/Basketball/groundtruth_rect.txt

The ground-truth file is not being parsed with the expected delimiter. Find …/lib/test/utils/load_text.py and change load_text_numpy to the following:

def load_text_numpy(path, delimiter, dtype):
    if isinstance(delimiter, (tuple, list)):
        for d in delimiter:
            try:
                # ground_truth_rect = np.loadtxt(path, delimiter=d, dtype=dtype)

                # to deal with different delimiters, normalize commas to spaces
                import io
                with open(path, 'r') as f:
                    ground_truth_rect = np.loadtxt(
                        io.StringIO(f.read().replace(',', ' ')), dtype=dtype)

                return ground_truth_rect
            except Exception:
                pass
 
        raise Exception('Could not read file {}'.format(path))
    else:
        ground_truth_rect = np.loadtxt(path, delimiter=delimiter, dtype=dtype)
        return ground_truth_rect
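
A minimal check (a sketch; the file path and delimiter tuple are assumptions, and load_text_numpy above is assumed to be in scope) that both comma- and space-separated ground-truth files now load:

import numpy as np

# works whether groundtruth_rect.txt uses "x,y,w,h" or "x y w h" per line
rects = load_text_numpy("../OTB100/Basketball/groundtruth_rect.txt",
                        delimiter=(',', None), dtype=np.float64)
print(rects.shape)   # expected: (number_of_frames, 4)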

12 Result figures

(Result figures not reproduced here.)
