Time Series Forecasting | MATLAB Implementation of VMD-SSA-KELM and VMD-KELM: Variational Mode Decomposition Combined with Sparrow Search Algorithm-Optimized Kernel Extreme Learning Machine for Time Series Prediction


Contents

    • Time Series Forecasting | MATLAB Implementation of VMD-SSA-KELM and VMD-KELM: Variational Mode Decomposition Combined with Sparrow Search Algorithm-Optimized Kernel Extreme Learning Machine for Time Series Prediction
      • Prediction Results
      • Introduction
      • Program Design
      • References

Prediction Results

[Figures 1–5: prediction results of the VMD-SSA-KELM and VMD-KELM models]

Introduction

MATLAB implementation of VMD-SSA-KELM and VMD-KELM time series prediction: the series is decomposed by variational mode decomposition (VMD) and each mode is predicted with a kernel extreme learning machine (KELM); in the VMD-SSA-KELM variant the KELM hyperparameters are tuned by the sparrow search algorithm (SSA), while VMD-KELM uses the KELM directly.
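The workflow behind these acronyms is: VMD splits the series into a small number of modes, SSA searches the KELM regularization coefficient and kernel parameter for each mode, one KELM is trained per mode, and the mode-wise predictions are summed. The sketch below outlines this pipeline under stated assumptions: it uses the Signal Processing Toolbox vmd function, and createLagSamples, ssaOptimizeKELM, kelmTrain and kelmPredict are hypothetical helpers that are not part of the original program.

    % Minimal pipeline sketch (assumptions noted above, not the original program)
    y = rand(500,1);                        % placeholder series; replace with your data
    K = 5;                                  % number of VMD modes (assumed)
    imf = vmd(y, 'NumIMFs', K);             % Signal Processing Toolbox VMD, 500-by-K matrix

    lag = 4;  nTest = 100;  yPred = zeros(nTest,1);
    for k = 1:K
        % lagged input/output samples for mode k (hypothetical helper)
        [Xtr, Ttr, Xte] = createLagSamples(imf(:,k), lag, nTest);
        % SSA searches the KELM regularization C and RBF width (hypothetical helper)
        [C, sig2] = ssaOptimizeKELM(Xtr, Ttr);
        % train a KELM on this mode, predict, and accumulate over modes
        model = kelmTrain(Xtr, Ttr, C, 'RBF_kernel', sig2);
        yPred = yPred + kelmPredict(model, Xte);
    end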

Program Design

  • Complete program and data, download option 1: exchange for a program of equivalent value;
  • Complete program and data, download option 2: MATLAB implementation of VMD-SSA-KELM and VMD-KELM variational mode decomposition with sparrow search algorithm-optimized kernel extreme learning machine time series prediction
  function omega = kernel_matrix(Xtrain, kernel_type, kernel_pars, Xt)
  % Kernel matrix for KELM; the declaration is reconstructed from the body of the snippet.
  % Input: each row of Xtrain is one sample.
    nb_data = size(Xtrain,1);


    if strcmpi(kernel_type,'RBF_kernel') || strcmpi(kernel_type,'RBF')
        % Fewer than 4 inputs (i.e. 3): build the training kernel matrix,
        % mapping the training data into the kernel space
        if nargin<4
            XXh = sum(Xtrain.^2,2)*ones(1,nb_data);
            omega = XXh+XXh'-2*(Xtrain*Xtrain');
            omega = exp(-omega./kernel_pars(1));
        else
        % 4 inputs: map the test data into the kernel space; the 1st argument
        % holds the training data and the 4th argument (Xt) the test data
            XXh1 = sum(Xtrain.^2,2)*ones(1,size(Xt,1));
            XXh2 = sum(Xt.^2,2)*ones(1,nb_data);
            omega = XXh1+XXh2' - 2*Xtrain*Xt';
            omega = exp(-omega./kernel_pars(1));
        end

    elseif strcmpi(kernel_type,'lin_kernel') || strcmpi(kernel_type,'lin')
        % Linear kernel: plain inner products between samples
        if nargin<4
            omega = Xtrain*Xtrain';
        else
            omega = Xtrain*Xt';
        end

    elseif strcmpi(kernel_type,'poly_kernel') || strcmpi(kernel_type,'poly')
        % Polynomial kernel: (x*y' + t)^d with kernel_pars = [t, d]
        if nargin<4
            omega = (Xtrain*Xtrain'+kernel_pars(1)).^kernel_pars(2);
        else
            omega = (Xtrain*Xt'+kernel_pars(1)).^kernel_pars(2);
        end

    elseif strcmpi(kernel_type,'wav_kernel') || strcmpi(kernel_type,'wav')
        % Wavelet kernel: cosine-modulated RBF with kernel_pars = [a, b, c]
        if nargin<4
            XXh = sum(Xtrain.^2,2)*ones(1,nb_data);
            omega = XXh+XXh'-2*(Xtrain*Xtrain');

            XXh1 = sum(Xtrain,2)*ones(1,nb_data);
            omega1 = XXh1-XXh1';
            omega = cos(kernel_pars(3)*omega1./kernel_pars(2)).*exp(-omega./kernel_pars(1));

        else
            XXh1 = sum(Xtrain.^2,2)*ones(1,size(Xt,1));
            XXh2 = sum(Xt.^2,2)*ones(1,nb_data);
            omega = XXh1+XXh2' - 2*(Xtrain*Xt');

            XXh11 = sum(Xtrain,2)*ones(1,size(Xt,1));
            XXh22 = sum(Xt,2)*ones(1,nb_data);
            omega1 = XXh11-XXh22';

            omega = cos(kernel_pars(3)*omega1./kernel_pars(2)).*exp(-omega./kernel_pars(1));
        end
    end  
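For reference, in a standard KELM the training kernel matrix is used to solve the output weights, and the train-test kernel matrix maps new samples onto them. A hedged usage sketch, assuming the snippet above is wrapped as kernel_matrix(Xtrain, kernel_type, kernel_pars, Xt) and that Ttrain, Xtest, C and sig2 (training targets, test inputs, regularization coefficient, RBF width) are already defined:

    % KELM training and prediction built on kernel_matrix (Ttrain, Xtest, C, sig2 assumed)
    Omega_train = kernel_matrix(Xtrain, 'RBF_kernel', sig2);        % n_train x n_train
    beta = (Omega_train + eye(size(Xtrain,1))/C) \ Ttrain;          % output weights: (Omega + I/C)^-1 * T
    Omega_test = kernel_matrix(Xtrain, 'RBF_kernel', sig2, Xtest);  % n_train x n_test
    Ypred = Omega_test' * beta;                                      % predictions for Xtest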

References

[1] https://blog.csdn.net/kjm13182345320/article/details/128577926?spm=1001.2014.3001.5501
[2] https://blog.csdn.net/kjm13182345320/article/details/128573597?spm=1001.2014.3001.5501
