Best-Worst Multi-Criteria Decision-Making Method (BWM): A Python Implementation

Contents

  • I. Theoretical Background
    • 1. The Discrete MCDM Problem
    • 2. BWM
    • 3. Consistency Calculation
    • 4. References
  • II. Python Source Code
  • III. Test

I. Theoretical Background

1. The Discrete MCDM Problem

A discrete MCDM problem can be expressed as:
$$
\begin{array}{l}
\begin{array}{lllll} \qquad & c_{1} & \ c_{2} & \ \cdots & \ c_{n} \end{array} \\
A = \begin{array}{c} a_{1} \\ a_{2} \\ \vdots \\ a_{m} \end{array}
\left(\begin{array}{cccc}
p_{11} & p_{12} & \cdots & p_{1n} \\
p_{21} & p_{22} & \cdots & p_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
p_{m1} & p_{m2} & \cdots & p_{mn}
\end{array}\right)
\end{array} \tag{1}
$$

where $\left\{a_1,a_2,\ldots,a_m\right\}$ is a set of feasible alternatives, $\left\{c_1,c_2,\ldots,c_n\right\}$ is a set of decision criteria, and $p_{ij}$ is the score of alternative $i$ on criterion $j$.

The goal is to select the best (e.g., the most desirable, the most important) alternative, i.e., the alternative with the highest overall value.

If each criterion is assigned a weight $w_j$ ($w_j \geq 0$, $\sum_j w_j = 1$), $j=1,2,\ldots,n$, the weighted value of feasible alternative $i$ is:
$$V_i = \sum_{j=1}^n w_j p_{ij} \tag{2}$$
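
As a quick illustration of Eq. (2), the sketch below scores a few alternatives (rows taken from the test dataset in Section III) under hypothetical weights; in practice the criterion columns would first be normalized to comparable scales:

import numpy as np

# Decision matrix P: m alternatives (rows) x n criteria (columns)
P = np.array([[250, 16, 12, 5],
              [200, 16,  8, 3],
              [300, 32, 16, 4]])
# Hypothetical criterion weights (non-negative, summing to 1)
w = np.array([0.4, 0.3, 0.2, 0.1])
# Eq. (2): V_i = sum_j w_j * p_ij -> a higher value means a preferred alternative
V = P @ w
print(V)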

2. BWM

Suppose there are $n$ evaluation criteria and that they are compared pairwise on a scale from $1/9$ to $9$, which yields the judgment matrix $A$:
$$A = \left(\begin{array}{cccc}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \ldots & a_{nn}
\end{array}\right) \tag{3}$$

where $a_{ij}$ denotes the relative preference of criterion $i$ over criterion $j$: $a_{ij}=1$ means the two criteria are equally important, $a_{ij}>1$ means criterion $i$ is more important than criterion $j$, and $a_{ij}=9$ means criterion $i$ is extremely more important than criterion $j$. The entries satisfy $a_{ji}=\frac{1}{a_{ij}}$ and $a_{ii}=1$.

Obtaining the complete judgment matrix $A$ requires $n\left(n-1\right)/2$ pairwise comparisons. The judgment matrix $A$ is fully consistent if:
$$a_{ik}\times a_{kj} = a_{ij}, \quad \forall i,j \tag{4}$$

Pairwise comparisons fall into two categories: (1) reference comparisons and (2) secondary comparisons.

Definition 1. $a_{ij}$ is a reference comparison if $i$ is the best element and/or $j$ is the worst element.

Definition 2. $a_{ij}$ is a secondary comparison if neither $i$ nor $j$ is the best or worst element and $a_{ij}\geq 1$.

Implementation steps
S1. Determine the set of decision criteria

Adopt the criteria $C=\{c_1,c_2,\ldots,c_n\}$ as the decision criteria.

S2. Identify the best (e.g., most important) and worst (e.g., least important) criteria

The best and worst criteria are identified by the decision maker based on an overall judgment.

S3. Determine the preference of the best criterion over all the other criteria using scores from 1 to 9 (the Best-to-Others vector)
$$A_B = \left(a_{B1},a_{B2},\ldots,a_{Bn}\right)$$
where $a_{Bj}$ denotes the preference of the best criterion $B$ over criterion $j$; clearly, $a_{BB}=1$.

S4. Determine the preference of all the criteria over the worst criterion using scores from 1 to 9 (the Others-to-Worst vector)
$$A_W = \left(a_{1W},a_{2W},\ldots,a_{nW}\right)$$
where $a_{jW}$ denotes the preference of criterion $j$ over the worst criterion $W$; clearly, $a_{WW}=1$.

S5. Find the optimal weights $\left(w_{1}^*,w_{2}^*,\ldots,w_{n}^*\right)$

For each pair $w_B/w_j$ and $w_j/w_W$, the ideal situation is $w_B/w_j = a_{Bj}$ and $w_j/w_W = a_{jW}$. To satisfy these conditions as closely as possible for all $j$, we look for weights that minimize the maximum absolute difference, over all $j$, of $\left|\frac{w_B}{w_j}-a_{Bj}\right|$ and $\left|\frac{w_j}{w_W}-a_{jW}\right|$. Together with the non-negativity and normalization constraints on the weights, this yields:
$$\begin{array}{l}
\min \max_{j}\left\{\left|\frac{w_B}{w_j}-a_{Bj}\right|,\left|\frac{w_j}{w_W}-a_{jW}\right|\right\} \\
s.t. \\
\sum_{j} w_j = 1 \\
w_j \geq 0, \quad \forall j
\end{array} \tag{5}$$

Problem (5) can be transformed into the following problem:
$$\begin{array}{l}
\min \xi \\
s.t. \\
\left|\frac{w_B}{w_j}-a_{Bj}\right| \leq \xi, \quad \forall j \\
\left|\frac{w_j}{w_W}-a_{jW}\right| \leq \xi, \quad \forall j \\
\sum_{j} w_{j} = 1 \\
w_j \geq 0, \quad \forall j
\end{array} \tag{6}$$

Solving problem (6) yields the optimal weights $\left(w_{1}^*,w_{2}^*,\ldots,w_{n}^*\right)$ and $\xi^*$.
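
Problem (6) is a small nonlinear program, so besides the grey wolf optimizer used in Section II it can also be attacked with a general-purpose solver. The sketch below is only an assumed alternative based on scipy.optimize.minimize (SLSQP); the helper name bwm_minmax, the lower bound on the weights, and the starting point are illustrative choices, and each absolute-value constraint is split into two one-sided constraints:

import numpy as np
from scipy.optimize import minimize

def bwm_minmax(a_B, a_W, best, worst):
    n = len(a_B)
    # Decision variables x = [w_1, ..., w_n, xi]
    def constraints(x):
        w, xi = x[:-1], x[-1]
        cons  = []
        for j in range(n):
            r_b = w[best] / w[j] - a_B[j]      # w_B/w_j - a_Bj
            r_w = w[j] / w[worst] - a_W[j]     # w_j/w_W - a_jW
            cons += [xi - r_b, xi + r_b, xi - r_w, xi + r_w]   # enforce |r| <= xi
        return np.array(cons)
    cons = [{'type': 'ineq', 'fun': constraints},
            {'type': 'eq',   'fun': lambda x: np.sum(x[:-1]) - 1.0}]
    x0   = np.append(np.full(n, 1.0 / n), 1.0)
    res  = minimize(lambda x: x[-1], x0, method = 'SLSQP',
                    bounds = [(1e-3, 1.0)] * n + [(0.0, None)], constraints = cons)
    return res.x[:-1], res.x[-1]               # optimal weights w*, xi*

For the example data used in Section III this could be called as bwm_minmax(np.array([1, 3, 4, 7.0]), np.array([7, 5, 5, 1.0]), best = 0, worst = 3).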

3. Consistency Calculation

If $a_{Bj} \times a_{jW} = a_{BW}$ holds for all $j$, the comparisons are fully consistent.

When the comparisons are not fully consistent, the inconsistency can be expressed as:
$$\left(a_{Bj}-\xi\right) \times \left(a_{jW}-\xi\right) = \left(a_{BW}+\xi\right) \tag{7}$$

The minimum consistency (largest inconsistency) occurs when $a_{Bj}=a_{jW}=a_{BW}$; substituting this into (7) gives:
$$\left(a_{BW}-\xi\right) \times \left(a_{BW}-\xi\right) = \left(a_{BW}+\xi\right) \Rightarrow \xi^2 - \left(1+2a_{BW}\right)\xi + \left(a_{BW}^2 - a_{BW}\right)=0 \tag{8}$$

Solving equation (8) for each $a_{BW}\in\left\{1,2,\ldots,9\right\}$ gives the maximum possible $\xi$ ($\max \xi$) for that scale value. These maxima are used as the consistency index ($CI$), and the consistency ratio is then computed from $\xi^*$ and the corresponding $CI$:
$$CR = \frac{\xi^*}{CI}$$
The closer $CR$ is to 0, the more consistent the comparisons.
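
A minimal sketch of how the $CI$ values can be tabulated from Eq. (8): the quadratic is solved with the standard formula for each $a_{BW}$, and keeping the smaller of its two roots reproduces the consistency index table reported in Rezaei (2015) (e.g., $CI \approx 5.23$ for $a_{BW}=9$); the helper name consistency_index is illustrative:

import numpy as np

def consistency_index(a_BW):
    # Smaller root of xi^2 - (1 + 2*a_BW)*xi + (a_BW^2 - a_BW) = 0   (Eq. 8)
    b = 1 + 2 * a_BW
    c = a_BW ** 2 - a_BW
    return (b - np.sqrt(b ** 2 - 4 * c)) / 2

ci_table = {a: round(consistency_index(a), 2) for a in range(1, 10)}
print(ci_table)   # expected to be close to {1: 0.0, 2: 0.44, 3: 1.0, ..., 9: 5.23}
# Consistency ratio: CR = xi_star / ci_table[a_BW]; values near 0 indicate high consistency.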

4. References

  1. Rezaei, J. Best-worst multi-criteria decision-making method. Omega, 2015, 53, 49-57.

II. Python Source Code

import numpy as np
import math
import random
import os

## Grey Wolf Optimizer (GWO), used here to solve the BWM min-max problem

# Function: placeholder target function (replaced by the actual objective when the optimizer is called)
def target_function():
    return

# Function: Initialize Variables
def initial_position(pack_size = 5, min_values = [-5,-5], max_values = [5,5], target_function = target_function):
    position = np.zeros((pack_size, len(min_values)+1))
    for i in range(0, pack_size):
        for j in range(0, len(min_values)):
             position[i,j] = random.uniform(min_values[j], max_values[j])
        position[i,-1] = target_function(position[i,0:position.shape[1]-1])
    return position

# Function: Initialize Alpha
def alpha_position(dimension = 2, target_function = target_function):
    alpha = np.zeros((1, dimension + 1))
    for j in range(0, dimension):
        alpha[0,j] = 0.0
    alpha[0,-1] = target_function(alpha[0,0:alpha.shape[1]-1])
    return alpha

# Function: Initialize Beta
def beta_position(dimension = 2, target_function = target_function):
    beta = np.zeros((1, dimension + 1))
    for j in range(0, dimension):
        beta[0,j] = 0.0
    beta[0,-1] = target_function(beta[0,0:beta.shape[1]-1])
    return beta

# Function: Initialize Delta
def delta_position(dimension = 2, target_function = target_function):
    delta =  np.zeros((1, dimension + 1))
    for j in range(0, dimension):
        delta[0,j] = 0.0
    delta[0,-1] = target_function(delta[0,0:delta.shape[1]-1])
    return delta

# Function: Update Pack by Fitness
def update_pack(position, alpha, beta, delta):
    updated_position = np.copy(position)
    for i in range(0, position.shape[0]):
        if (updated_position[i,-1] < alpha[0,-1]):
            alpha[0,:] = np.copy(updated_position[i,:])
        if (updated_position[i,-1] > alpha[0,-1] and updated_position[i,-1] < beta[0,-1]):
            beta[0,:] = np.copy(updated_position[i,:])
        if (updated_position[i,-1] > alpha[0,-1] and updated_position[i,-1] > beta[0,-1]  and updated_position[i,-1] < delta[0,-1]):
            delta[0,:] = np.copy(updated_position[i,:])
    return alpha, beta, delta

# Function: Update Position
def update_position(position, alpha, beta, delta, a_linear_component = 2, min_values = [-5,-5], max_values = [5,5], target_function = target_function):
    updated_position = np.copy(position)
    for i in range(0, updated_position.shape[0]):
        for j in range (0, len(min_values)):   
            r1_alpha              = int.from_bytes(os.urandom(8), byteorder = "big") / ((1 << 64) - 1)
            r2_alpha              = int.from_bytes(os.urandom(8), byteorder = "big") / ((1 << 64) - 1)
            a_alpha               = 2*a_linear_component*r1_alpha - a_linear_component
            c_alpha               = 2*r2_alpha      
            distance_alpha        = abs(c_alpha*alpha[0,j] - position[i,j]) 
            x1                    = alpha[0,j] - a_alpha*distance_alpha   
            r1_beta               = int.from_bytes(os.urandom(8), byteorder = "big") / ((1 << 64) - 1)
            r2_beta               = int.from_bytes(os.urandom(8), byteorder = "big") / ((1 << 64) - 1)
            a_beta                = 2*a_linear_component*r1_beta - a_linear_component
            c_beta                = 2*r2_beta            
            distance_beta         = abs(c_beta*beta[0,j] - position[i,j]) 
            x2                    = beta[0,j] - a_beta*distance_beta                          
            r1_delta              = int.from_bytes(os.urandom(8), byteorder = "big") / ((1 << 64) - 1)
            r2_delta              = int.from_bytes(os.urandom(8), byteorder = "big") / ((1 << 64) - 1)
            a_delta               = 2*a_linear_component*r1_delta - a_linear_component
            c_delta               = 2*r2_delta            
            distance_delta        = abs(c_delta*delta[0,j] - position[i,j]) 
            x3                    = delta[0,j] - a_delta*distance_delta                                 
            updated_position[i,j] = np.clip(((x1 + x2 + x3)/3),min_values[j],max_values[j])     
        updated_position[i,-1] = target_function(updated_position[i,0:updated_position.shape[1]-1])
    return updated_position

# GWO Function
def grey_wolf_optimizer(pack_size = 5, min_values = [-5,-5], max_values = [5,5], iterations = 50, target_function = target_function, verbose = True):    
    count    = 0
    alpha    = alpha_position(dimension = len(min_values), target_function = target_function)
    beta     = beta_position(dimension  = len(min_values), target_function = target_function)
    delta    = delta_position(dimension = len(min_values), target_function = target_function)
    position = initial_position(pack_size = pack_size, min_values = min_values, max_values = max_values, target_function = target_function)
    while (count <= iterations): 
        if (verbose == True):    
            print('Iteration = ', count, ' f(x) = ', alpha[0][-1])      
        a_linear_component = 2 - count*(2/iterations)
        alpha, beta, delta = update_pack(position, alpha, beta, delta)
        position           = update_position(position, alpha, beta, delta, a_linear_component = a_linear_component, min_values = min_values, max_values = max_values, target_function = target_function)    
        count              = count + 1          
    return alpha
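
# Quick sanity check of the optimizer interface above (illustrative only, not part of the BWM pipeline):
# minimize the sphere function f(x, y) = x^2 + y^2 over [-5, 5] x [-5, 5].
def sphere(variables):
    return sum(v ** 2 for v in variables)

# gwo_result[0][:-1] is the best position found, gwo_result[0][-1] its objective value
gwo_result = grey_wolf_optimizer(pack_size = 15, min_values = [-5, -5], max_values = [5, 5],
                                 iterations = 50, target_function = sphere, verbose = False)
print(gwo_result[0][:-1], gwo_result[0][-1])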



# BWM
def BWM(dataset, mic, lic, size = 50, iterations = 150):
    X         = np.copy(dataset)/1.0
    best      = np.where(mic == 1)[0][0] # index of the best criterion (entry equal to 1 in the Best-to-Others vector)
    worst     = np.where(lic == 1)[0][0] # index of the worst criterion (entry equal to 1 in the Others-to-Worst vector)
    # Pairs used for the Best-to-Others comparisons
    pairs_b   = [(best, i)  for i in range(0, mic.shape[0])]
    # Pairs used for the Others-to-Worst comparisons
    pairs_w   = [(i, worst) for i in range(0, mic.shape[0]) if (i, worst) not in pairs_b]
    # Objective of the min-max problem (5), using the rescaled deviations |w_B - a_Bj*w_j| and |w_j - a_jW*w_W|
    def target_function(variables):
        eps       = [float('+inf')]
        for pair in pairs_b: # comparisons against the best criterion
            i, j = pair
            diff = abs(variables[i] - variables[j]*mic[j]) # |w_B - a_Bj * w_j|, the rescaled form of |w_B/w_j - a_Bj|
            if ( i != j):
                eps.append(diff)
        for pair in pairs_w: # comparisons against the worst criterion
            i, j = pair
            diff = abs(variables[i] - variables[j]*lic[i]) # |w_j - a_jW * w_W|, with a_jW taken from the Others-to-Worst vector
            if ( i != j):
                eps.append(diff)
        if ( np.sum(variables) == 0):
            eps = float('+inf')
        else:
            eps = max(eps[1:])
        return eps
    # Search for the optimal weights with the grey wolf optimizer
    weights = grey_wolf_optimizer(pack_size = size, min_values = [0.01]*X.shape[1], max_values = [1]*X.shape[1], iterations = iterations, target_function = target_function)
    # Normalize the weights so that they sum to 1
    weights = weights[0][:-1]/sum(weights[0][:-1])
    return weights

III. Test

# Dataset (decision matrix: 5 alternatives on 4 criteria)
dataset = np.array([
                # g1   g2   g3   g4
                [250,  16,  12,  5],
                [200,  16,  8 ,  3],   
                [300,  32,  16,  4],
                [275,  32,  8 ,  4],
                [225,  16,  16,  2]
                ])
# Best-to-Others vector (a_Bj): preferences of the best criterion over every criterion
mic = np.array([1, 3, 4, 7])
# Others-to-Worst vector (a_jW): preferences of every criterion over the worst criterion
lic = np.array([7, 5, 5, 1])
# Compute the optimal weights with BWM
weights =  BWM(dataset, mic, lic, size = 50, iterations = 150)
# Print the optimal weight of each criterion
for i in range(0, weights.shape[0]):
    print('w(g'+str(i+1)+'): ', round(weights[i], 3))

Output:

Iteration =  0  f(x) =  inf
Iteration =  1  f(x) =  0.8225123652041643
Iteration =  2  f(x) =  0.26978730095302067
Iteration =  3  f(x) =  0.11575769796201554
Iteration =  4  f(x) =  0.11575769796201554
Iteration =  5  f(x) =  0.07244990479832858
Iteration =  6  f(x) =  0.07244990479832858
Iteration =  7  f(x) =  0.07244990479832858
Iteration =  8  f(x) =  0.07244990479832858
Iteration =  9  f(x) =  0.07244990479832858
Iteration =  10  f(x) =  0.07244990479832858
Iteration =  11  f(x) =  0.07244990479832858
Iteration =  12  f(x) =  0.0627512470203439
Iteration =  13  f(x) =  0.0627512470203439
Iteration =  14  f(x) =  0.0627512470203439
Iteration =  15  f(x) =  0.0627512470203439
Iteration =  16  f(x) =  0.0627512470203439
Iteration =  17  f(x) =  0.0627512470203439
Iteration =  18  f(x) =  0.0627512470203439
Iteration =  19  f(x) =  0.0627512470203439
Iteration =  20  f(x) =  0.0613258458509815
Iteration =  21  f(x) =  0.0416184489289983
Iteration =  22  f(x) =  0.0416184489289983
Iteration =  23  f(x) =  0.0416184489289983
Iteration =  24  f(x) =  0.04043851330583073
Iteration =  25  f(x) =  0.04043851330583073
Iteration =  26  f(x) =  0.03839692794308719
Iteration =  27  f(x) =  0.03839692794308719
Iteration =  28  f(x) =  0.03839692794308719
Iteration =  29  f(x) =  0.03839692794308719
Iteration =  30  f(x) =  0.03839692794308719
Iteration =  31  f(x) =  0.03839692794308719
Iteration =  32  f(x) =  0.03362197238350268
Iteration =  33  f(x) =  0.03362197238350268
Iteration =  34  f(x) =  0.027563216968690428
Iteration =  35  f(x) =  0.027563216968690428
Iteration =  36  f(x) =  0.027563216968690428
Iteration =  37  f(x) =  0.027563216968690428
Iteration =  38  f(x) =  0.027563216968690428
Iteration =  39  f(x) =  0.027563216968690428
Iteration =  40  f(x) =  0.02521773272774567
Iteration =  41  f(x) =  0.02521773272774567
Iteration =  42  f(x) =  0.02521773272774567
Iteration =  43  f(x) =  0.02521773272774567
Iteration =  44  f(x) =  0.02521773272774567
Iteration =  45  f(x) =  0.02521773272774567
Iteration =  46  f(x) =  0.02521773272774567
Iteration =  47  f(x) =  0.02521773272774567
Iteration =  48  f(x) =  0.02521773272774567
Iteration =  49  f(x) =  0.021877679478816078
Iteration =  50  f(x) =  0.021877679478816078
Iteration =  51  f(x) =  0.021877679478816078
Iteration =  52  f(x) =  0.017349379506246702
Iteration =  53  f(x) =  0.017349379506246702
Iteration =  54  f(x) =  0.017349379506246702
Iteration =  55  f(x) =  0.017349379506246702
Iteration =  56  f(x) =  0.016814330101299044
Iteration =  57  f(x) =  0.016814330101299044
Iteration =  58  f(x) =  0.016814330101299044
Iteration =  59  f(x) =  0.016814330101299044
Iteration =  60  f(x) =  0.016814330101299044
Iteration =  61  f(x) =  0.016814330101299044
Iteration =  62  f(x) =  0.01664986899457347
Iteration =  63  f(x) =  0.012419697076621149
Iteration =  64  f(x) =  0.012419697076621149
Iteration =  65  f(x) =  0.012419697076621149
Iteration =  66  f(x) =  0.012419697076621149
Iteration =  67  f(x) =  0.012419697076621149
Iteration =  68  f(x) =  0.012419697076621149
Iteration =  69  f(x) =  0.012419697076621149
Iteration =  70  f(x) =  0.01200555185181644
Iteration =  71  f(x) =  0.011952061481401982
Iteration =  72  f(x) =  0.011952061481401982
Iteration =  73  f(x) =  0.011952061481401982
Iteration =  74  f(x) =  0.008761657912284386
Iteration =  75  f(x) =  0.008761657912284386
Iteration =  76  f(x) =  0.008761657912284386
Iteration =  77  f(x) =  0.008761657912284386
Iteration =  78  f(x) =  0.008761657912284386
Iteration =  79  f(x) =  0.008419538595294263
Iteration =  80  f(x) =  0.008419538595294263
Iteration =  81  f(x) =  0.008419538595294263
Iteration =  82  f(x) =  0.008419538595294263
Iteration =  83  f(x) =  0.008419538595294263
Iteration =  84  f(x) =  0.008419538595294263
Iteration =  85  f(x) =  0.008419538595294263
Iteration =  86  f(x) =  0.008419538595294263
Iteration =  87  f(x) =  0.008419538595294263
Iteration =  88  f(x) =  0.008419538595294263
Iteration =  89  f(x) =  0.008419538595294263
Iteration =  90  f(x) =  0.008419538595294263
Iteration =  91  f(x) =  0.008419538595294263
Iteration =  92  f(x) =  0.008419538595294263
Iteration =  93  f(x) =  0.008419538595294263
Iteration =  94  f(x) =  0.008419538595294263
Iteration =  95  f(x) =  0.008419538595294263
Iteration =  96  f(x) =  0.008419538595294263
Iteration =  97  f(x) =  0.008419538595294263
Iteration =  98  f(x) =  0.008419538595294263
Iteration =  99  f(x) =  0.008419538595294263
Iteration =  100  f(x) =  0.008419538595294263
Iteration =  101  f(x) =  0.008419538595294263
Iteration =  102  f(x) =  0.008419538595294263
Iteration =  103  f(x) =  0.008419538595294263
Iteration =  104  f(x) =  0.008419538595294263
Iteration =  105  f(x) =  0.008419538595294263
Iteration =  106  f(x) =  0.008175882927208215
Iteration =  107  f(x) =  0.008175882927208215
Iteration =  108  f(x) =  0.008175882927208215
Iteration =  109  f(x) =  0.008175882927208215
Iteration =  110  f(x) =  0.008175882927208215
Iteration =  111  f(x) =  0.008175882927208215
Iteration =  112  f(x) =  0.008175882927208215
Iteration =  113  f(x) =  0.008175882927208215
Iteration =  114  f(x) =  0.008175882927208215
Iteration =  115  f(x) =  0.008175882927208215
Iteration =  116  f(x) =  0.008175882927208215
Iteration =  117  f(x) =  0.008175882927208215
Iteration =  118  f(x) =  0.008175882927208215
Iteration =  119  f(x) =  0.008175882927208215
Iteration =  120  f(x) =  0.008175882927208215
Iteration =  121  f(x) =  0.008175882927208215
Iteration =  122  f(x) =  0.008175882927208215
Iteration =  123  f(x) =  0.008175882927208215
Iteration =  124  f(x) =  0.008175882927208215
Iteration =  125  f(x) =  0.00809499667967843
Iteration =  126  f(x) =  0.00809499667967843
Iteration =  127  f(x) =  0.00809499667967843
Iteration =  128  f(x) =  0.00809499667967843
Iteration =  129  f(x) =  0.00809499667967843
Iteration =  130  f(x) =  0.00809499667967843
Iteration =  131  f(x) =  0.00809499667967843
Iteration =  132  f(x) =  0.00808352974904726
Iteration =  133  f(x) =  0.00808352974904726
Iteration =  134  f(x) =  0.00808352974904726
Iteration =  135  f(x) =  0.00808352974904726
Iteration =  136  f(x) =  0.00808352974904726
Iteration =  137  f(x) =  0.00808352974904726
Iteration =  138  f(x) =  0.00808352974904726
Iteration =  139  f(x) =  0.00808352974904726
Iteration =  140  f(x) =  0.00808352974904726
Iteration =  141  f(x) =  0.00808352974904726
Iteration =  142  f(x) =  0.00808352974904726
Iteration =  143  f(x) =  0.00808352974904726
Iteration =  144  f(x) =  0.008060349193135287
Iteration =  145  f(x) =  0.008060349193135287
Iteration =  146  f(x) =  0.008060349193135287
Iteration =  147  f(x) =  0.008060349193135287
Iteration =  148  f(x) =  0.008019470478954472
Iteration =  149  f(x) =  0.008019470478954472
Iteration =  150  f(x) =  0.008004493940747618
w(g1):  0.578
w(g2):  0.168
w(g3):  0.162
w(g4):  0.093
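
As a possible follow-up (not part of the original script), the weights returned by BWM can be plugged into Eq. (2) to rank the alternatives in the test dataset. The column-wise normalization below is an illustrative assumption that treats all criteria as benefit criteria:

# Hypothetical follow-up: rank the alternatives with Eq. (2) using the BWM weights
X_norm  = dataset / dataset.max(axis = 0)   # rescale each criterion column to [0, 1]
V       = X_norm @ weights                  # V_i = sum_j w_j * p_ij
ranking = np.argsort(-V)                    # alternative indices, best first
for rank, idx in enumerate(ranking, start = 1):
    print('Rank', rank, ': a' + str(idx + 1), ' V =', round(V[idx], 3))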
