[OTB Tutorial Part 3 (Supplement)] How to add your own algorithm to the Visual Tracker Benchmark v1.0, using ECO as an example

This post supplements Part 3 of the OTB tutorial series. Using ECO, one of the best-performing trackers at the time of writing, as an example, we add ECO to the OTB comparison. This reflects only my own rough understanding; I have not dug deeply into how ECO is invoked internally. Corrections and discussion are welcome.

First, download ECO's MATLAB source code, which the author has open-sourced on GitHub: https://github.com/martin-danelljan/ECO

While you are there, it is worth reading the notes the author has written on the GitHub page.

Downloading the external libraries (the latest ECO source seems to already include the matconvnet and pdollar libraries, but it still ships without the network file).

After downloading, unzip ECO into OTB's trackers folder alongside the other tracking algorithms (see Part 3 of this series).
The figure below shows the two external libraries the author provides for download.

[Figure 1: the two external libraries provided by the author]

Before modifying the source code:
if you do not already have the external libraries, you need to download the matconvnet and pdollar libraries. With git, simply run the following commands from inside the ECO repository (note that these only work in a repository cloned with git, not in an unzipped release archive):

git submodule init
git submodule update
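
Alternatively, if you are starting from scratch, cloning the repository together with its submodules in one step also works (a standard git option):

git clone --recursive https://github.com/martin-danelljan/ECO.git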

The author recommends fetching these with git; downloading them manually from their websites risks getting mismatched versions. Once those two libraries are in place,
one more file is needed, the pretrained network: http://www.vlfeat.org/matconvnet/models/imagenet-vgg-m-2048.mat
This network appears to be used during the feature-extraction stage.
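
If you prefer to fetch the network from inside MATLAB, a small sketch using the built-in websave function looks like this (it assumes you run it from the ECO root folder):

% Download the pretrained network into feature_extraction/networks
url = 'http://www.vlfeat.org/matconvnet/models/imagenet-vgg-m-2048.mat';
dst = fullfile('feature_extraction', 'networks', 'imagenet-vgg-m-2048.mat');
if ~exist(fileparts(dst), 'dir'), mkdir(fileparts(dst)); end  % create the folder if needed
websave(dst, url);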

Once these three downloads are done, unpack and place them as shown below:
[Figure 2: folder layout after placing the downloads]
matconvnet and pdollar_toolbox each go into the external_libs folder,
and imagenet-vgg-m-2048.mat goes into the networks folder under feature_extraction.

With that, all the required external files are in place.
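
To double-check the layout from MATLAB, a quick sanity check run from the ECO root (folder names as used above) might look like:

% Verify that the three downloads landed in the expected places
assert(exist(fullfile('external_libs', 'matconvnet'), 'dir') == 7, 'matconvnet missing');
assert(exist(fullfile('external_libs', 'pdollar_toolbox'), 'dir') == 7, 'pdollar_toolbox missing');
assert(exist(fullfile('feature_extraction', 'networks', 'imagenet-vgg-m-2048.mat'), 'file') == 2, 'network missing');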

A note on naming: ECO_HC uses hand-crafted features (the classic HOG, CN, grayscale, and so on),
while ECO_DEEP uses features extracted by a deep network.
This post uses ECO_HC as the example.

Note: looking through ECO's source, for some reason fhog.m is missing from the feature-extraction folder (feature_extraction), which makes HOG feature extraction fail with an error.

Here is a download link for it: https://pan.baidu.com/s/1q862dn1llgkzP6uBF1zkhg
If that link is dead, use this one instead: https://download.csdn.net/download/devil_08/10388821
After downloading, unzip it into the feature_extraction folder.
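
To confirm MATLAB can now find the helper, run ECO's own setup_paths script from the ECO root (it adds feature_extraction to the path) and check where fhog resolves to:

setup_paths();   % ECO's path setup script
which fhog       % should print a path inside feature_extraction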

The external libraries are now ready; the real work begins.

Step 1

Compile the external libraries by running the command below. Visual Studio must be installed, ideally a recent version, otherwise you may hit compiler-version conflicts:

mex -setup C++

Once that completes, run ECO's install.m script to build everything.
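
If you want to verify that matconvnet compiled correctly, you can optionally run its bundled test suite afterwards (both scripts ship with matconvnet; the tests take a while):

run(fullfile('external_libs', 'matconvnet', 'matlab', 'vl_setupnn.m'));  % put matconvnet on the path
vl_testnn;   % matconvnet's built-in unit tests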

Step 2

As shown below:

[Figure 3: the runfiles folder in the ECO source tree]

In the runfiles folder, find OTB_HC_settings.m, copy it into the ECO root folder, and rename the copy to run_ECO.m.

[Figure 4: run_ECO.m placed in the ECO root]

Then modify the code in run_ECO.m.

Here is my modified version:

function results = run_ECO(seq, res_path, bSaveImage)

setup_paths();

% Feature specific parameters
hog_params.cell_size = 6;
hog_params.compressed_dim = 10;

cn_params.tablename = 'CNnorm';
cn_params.useForGray = false;
cn_params.cell_size = 4;
cn_params.compressed_dim = 3;

ic_params.tablename = 'intensityChannelNorm6';
ic_params.useForColor = false;
ic_params.cell_size = 4;
ic_params.compressed_dim = 3;


% Which features to include
params.t_features = {
    struct('getFeature',@get_fhog,'fparams',hog_params),...
    struct('getFeature',@get_table_feature, 'fparams',cn_params),...
    struct('getFeature',@get_table_feature, 'fparams',ic_params),...
};

% Global feature parameters
params.t_global.normalize_power = 2;    % Lp normalization with this p
params.t_global.normalize_size = true;  % Also normalize with respect to the spatial size of the feature
params.t_global.normalize_dim = true;   % Also normalize with respect to the dimensionality of the feature

% Image sample parameters
params.search_area_shape = 'square';    % The shape of the samples
params.search_area_scale = 4.0;         % The scaling of the target size to get the search area
params.min_image_sample_size = 150^2;   % Minimum area of image samples
params.max_image_sample_size = 200^2;   % Maximum area of image samples

% Detection parameters
params.refinement_iterations = 1;       % Number of iterations used to refine the resulting position in a frame
params.newton_iterations = 5;           % The number of Newton iterations used for optimizing the detection score
params.clamp_position = false;          % Clamp the target position to be inside the image

% Learning parameters
params.output_sigma_factor = 1/16;      % Label function sigma
params.learning_rate = 0.009;           % Learning rate
params.nSamples = 30;                   % Maximum number of stored training samples
params.sample_replace_strategy = 'lowest_prior';    % Which sample to replace when the memory is full
params.lt_size = 0;                     % The size of the long-term memory (where all samples have equal weight)
params.train_gap = 5;                   % The number of intermediate frames with no training (0 corresponds to training every frame)
params.skip_after_frame = 10;           % After which frame number the sparse update scheme should start (1 is directly)
params.use_detection_sample = true;     % Use the sample that was extracted at the detection stage also for learning

% Factorized convolution parameters
params.use_projection_matrix = true;    % Use projection matrix, i.e. use the factorized convolution formulation
params.update_projection_matrix = true; % Whether the projection matrix should be optimized or not
params.proj_init_method = 'pca';        % Method for initializing the projection matrix
params.projection_reg = 1e-7;           % Regularization parameter of the projection matrix

% Generative sample space model parameters
params.use_sample_merge = true;                 % Use the generative sample space model to merge samples
params.sample_merge_type = 'Merge';             % Strategy for updating the samples
params.distance_matrix_update_type = 'exact';   % Strategy for updating the distance matrix

% Conjugate Gradient parameters
params.CG_iter = 5;                     % The number of Conjugate Gradient iterations in each update after the first frame
params.init_CG_iter = 10*15;            % The total number of Conjugate Gradient iterations used in the first frame
params.init_GN_iter = 10;               % The number of Gauss-Newton iterations used in the first frame (only if the projection matrix is updated)
params.CG_use_FR = false;               % Use the Fletcher-Reeves (true) or Polak-Ribiere (false) formula in the Conjugate Gradient
params.CG_standard_alpha = true;        % Use the standard formula for computing the step length in Conjugate Gradient
params.CG_forgetting_rate = 50;         % Forgetting rate of the last conjugate direction
params.precond_data_param = 0.75;       % Weight of the data term in the preconditioner
params.precond_reg_param = 0.25;        % Weight of the regularization term in the preconditioner
params.precond_proj_param = 40;         % Weight of the projection matrix part in the preconditioner

% Regularization window parameters
params.use_reg_window = true;           % Use spatial regularization or not
params.reg_window_min = 1e-4;           % The minimum value of the regularization window
params.reg_window_edge = 10e-3;         % The impact of the spatial regularization
params.reg_window_power = 2;            % The degree of the polynomial to use (e.g. 2 is a quadratic window)
params.reg_sparsity_threshold = 0.05;   % A relative threshold of which DFT coefficients that should be set to zero

% Interpolation parameters
params.interpolation_method = 'bicubic';    % The kind of interpolation kernel
params.interpolation_bicubic_a = -0.75;     % The parameter for the bicubic interpolation kernel
params.interpolation_centering = true;      % Center the kernel at the feature sample
params.interpolation_windowing = false;     % Do additional windowing on the Fourier coefficients of the kernel

% Scale parameters for the translation model
% Only used if: params.use_scale_filter = false
% params.number_of_scales = 7;            % Number of scales to run the detector
% params.scale_step = 1.01;               % The scale factor

% Scale filter parameters
% Only used if: params.use_scale_filter = true
params.use_scale_filter = true;         % Use the fDSST scale filter or not (for speed)
params.scale_sigma_factor = 1/16;       % Scale label function sigma
params.scale_learning_rate = 0.025;     % Scale filter learning rate
params.number_of_scales_filter = 17;    % Number of scales
params.number_of_interp_scales = 33;    % Number of interpolated scales
params.scale_model_factor = 1.0;        % Scaling of the scale model
params.scale_step_filter = 1.02;        % The scale factor for the scale filter
params.scale_model_max_area = 32*16;    % Maximum area for the scale sample patch
params.scale_feature = 'HOG4';          % Features for the scale filter (only HOG4 supported)
params.s_num_compressed_dim = 'MAX';    % Number of compressed feature dimensions in the scale filter
params.lambda = 1e-2;                   % Scale filter regularization
params.do_poly_interp = true;           % Do 2nd order polynomial interpolation to obtain more accurate scale

% Visualization
params.visualization = 0;               % Visualize tracking and detection scores
params.debug = 0;                       % Do full debug visualization

% GPU
params.use_gpu = false;                 % Enable GPU or not
params.gpu_id = [];                     % Set the GPU id, or leave empty to use default

% Initialize
params.seq = seq;

% res_path and bSaveImage are part of the OTB run-file interface, but ECO's
% tracker() runs the whole sequence internally, so there is no per-frame hook
% here from which to save images; both inputs are simply ignored. (The imwrite
% block in the original template referenced an undefined variable, frame,
% and would error if bSaveImage were true.)

% Run tracker
results = tracker(params);
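
The benchmark calls run_ECO itself via main_running.m, but it can help to smoke-test the wrapper by hand on a single sequence first. The sketch below builds an OTB-style seq struct manually; the sequence name, frame range, and initial box are placeholders you must replace with values from your own data (init_rect comes from the first line of the sequence's groundtruth_rect.txt):

% Hand-rolled OTB-style sequence struct for smoke-testing run_ECO
seq.name       = 'Basketball';        % placeholder sequence
seq.path       = './Basketball/img/';
seq.startFrame = 1;
seq.endFrame   = 725;
seq.nz         = 4;                   % digits in the frame file names, e.g. 0001.jpg
seq.ext        = 'jpg';
seq.init_rect  = [198, 214, 34, 81];  % [x y width height] of the target in frame 1
seq.len        = seq.endFrame - seq.startFrame + 1;
seq.s_frames   = cell(seq.len, 1);
for k = 1:seq.len
    id = sprintf(['%0' num2str(seq.nz) 'd'], seq.startFrame + k - 1);
    seq.s_frames{k} = [seq.path id '.' seq.ext];
end

results = run_ECO(seq, './results/', 0);
disp(results.res(end, :))             % last tracked bounding box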

Step 3

With run_ECO.m finished, the last step is to add an ECO struct to the benchmark's tracker configuration, configTrackers.m (in OTB's util folder):
struct('name','ECO','namePaper','ECO')
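
Concretely, the struct is appended to the trackers cell array in that file. A minimal sketch (the other entries are just examples of trackers the stock benchmark ships with):

trackers = {...
    struct('name','SCM','namePaper','SCM'),...        % existing entries (examples)
    struct('name','Struck','namePaper','Struck'),...
    struct('name','ECO','namePaper','ECO')};          % the newly added tracker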
After saving, run OTB's main_running.m to obtain your results.
That completes the steps for adding ECO_HC to OTB for comparison.
The procedure for ECO_DEEP should be much the same, presumably starting from OTB_DEEP_settings.m in the same runfiles folder; in short, supply whatever is missing (in particular the pretrained networks).
