Building Keras models with multiple inputs, multiple outputs, and multiple losses

  • Keras builds one loss for each output of the Model, and these losses cannot interact with each other. In addition, every output of the Model must have a corresponding y_true in fit(). Hence: number of labels fed in == number of model.outputs == number of losses.
  • Likewise, every input of the Model must have a corresponding x in fit(), i.e. len(x in model.fit()) == len(model.inputs).

https://blog.csdn.net/AZRRR/article/details/90380372

# Finally figured out how the losses line up with the outputs
model = Model(inputs=[img, tgt], outputs=[out1, out2])
# Inputs and outputs are declared when the network is defined
model.compile(optimizer=Adam(lr=lr),
              loss=[losses.cc3D(), losses.gradientLoss('l2')],
              loss_weights=[1.0, reg_param])
# Losses are given at compile time; with multiple losses, loss_weights holds the weight
# of each loss in the same order, and the reported total loss is their weighted sum
train_loss = model.train_on_batch([x1, x2], [y_true_1, y_true_2])

When training starts, the losses pair up as follows:
the predicted out1 is scored against y_true_1 with the cc3D loss, and the predicted out2 against y_true_2 with the gradient loss.
The model's two inputs img and tgt receive the data x1 and x2 respectively.
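
The same pairing can be made explicit with dicts keyed by the output layer names. This is only a sketch: it assumes the layers producing out1 and out2 were created with name='out1' and name='out2', which the original snippet does not show.

# Sketch only: assumes the layers behind out1/out2 are named 'out1' and 'out2'
model = Model(inputs=[img, tgt], outputs=[out1, out2])
model.compile(optimizer=Adam(lr=lr),
              loss={'out1': losses.cc3D(), 'out2': losses.gradientLoss('l2')},
              loss_weights={'out1': 1.0, 'out2': reg_param})
train_loss = model.train_on_batch([x1, x2], {'out1': y_true_1, 'out2': y_true_2})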

How to write the data generator:

A Keras data generator must yield a tuple each time; a plain Python yield x, y is automatically packed into a tuple.

The output of the generator must be either
    - a tuple `(inputs, targets)`
    - a tuple `(inputs, targets, sample_weights)`.

So for a single-input, single-output model, the generator can simply be (data_gen is an illustrative name):

def data_gen():  # illustrative name
    while True:
        yield x, y_true

Or, when there are multiple inputs and multiple outputs:

def multi_io_gen():  # illustrative name
    while True:
        yield [x1, x2, ...], [label1, label2, ...]

Summary:

Each yielded x1, ..., xn is fed to the corresponding entry of model.inputs, so the number of arrays must match the number of inputs. The model then runs a forward pass, and each of the m tensors in model.outputs calls its own loss function together with the matching ground-truth label. So there are m labels, matching the m model outputs, matching the m losses.
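
Putting the summary together, here is a minimal end-to-end sketch with two inputs and two outputs; every shape, layer, and name below is made up purely for illustration:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Input

in1 = Input(shape=(8,))
in2 = Input(shape=(8,))
h = layers.concatenate([in1, in2])
out_a = layers.Dense(1)(h)
out_b = layers.Dense(4)(h)
model = keras.Model(inputs=[in1, in2], outputs=[out_a, out_b])
# two losses and two loss weights, one per output
model.compile(optimizer='sgd', loss=['mse', 'mse'], loss_weights=[1.0, 0.5])

def two_in_two_out_gen(batch=5):
    while True:
        x1, x2 = np.random.rand(batch, 8), np.random.rand(batch, 8)
        y1, y2 = np.random.rand(batch, 1), np.random.rand(batch, 4)
        # len([x1, x2]) == len(model.inputs); len([y1, y2]) == len(model.outputs) == number of losses
        yield [x1, x2], [y1, y2]

model.fit_generator(two_in_two_out_gen(), steps_per_epoch=10, epochs=1)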

The inputs and targets can also be wrapped in dicts keyed by layer name:

def generate_arrays_from_file(path):
    while True:
        with open(path) as f:
            for line in f:
                # create numpy arrays of input data
                # and labels, from each line in the file
                x1, x2, y = process_line(line)
                yield ({'input_1': x1, 'input_2': x2}, {'output': y})

model.fit_generator(generate_arrays_from_file('/my_file.txt'),
                    steps_per_epoch=10000, epochs=10)
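
As the docstring quoted earlier mentions, the generator may also yield a third element with per-sample weights. A rough sketch (the function name and the uniform weights are illustrative; it assumes numpy is imported as np and that the first dimension of y indexes the samples):

def generate_arrays_with_weights(path):
    while True:
        with open(path) as f:
            for line in f:
                x1, x2, y = process_line(line)
                # one weight per sample; uniform weights here, just as a placeholder
                sample_weights = np.ones(len(y))
                yield ({'input_1': x1, 'input_2': x2}, {'output': y}, sample_weights)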

Worked examples:

  • Multi-input, single-output, with the Dataset API:
if __name__ == '__main__':
    # Imports assumed here (the original snippet omitted them); written for TF 1.x-era tf.keras
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers, backend as K
    from tensorflow.keras.layers import Input

    a = Input(shape=(368, 368, 3))
    a2 = Input(shape=(368, 368, 4))  # second model input; it does not feed the conv stack in this toy example

    conv1 = layers.Conv2D(64, 3)(a)
    conv2 = layers.Conv2D(64, 3)(conv1)
    maxpool = layers.MaxPooling2D(pool_size=8, strides=8, padding='same')(conv2)
    conv3 = layers.Conv2D(5, 1)(maxpool)

    model = keras.Model(inputs=[a,a2], outputs=[conv3])

    model.compile(optimizer=keras.optimizers.SGD(lr=0.05),
                  loss=keras.losses.mean_squared_error)

    import numpy as np

    data = np.random.rand(10, 368, 368, 3)
    data2 = np.random.rand(10, 368, 368, 4)
    label = np.random.rand(10, 46, 46, 5)

    dataset = tf.data.Dataset.from_tensor_slices((data,data2, label)).batch(5).repeat()

    iterator = dataset.make_one_shot_iterator()
    # print(next(iterator))
    # print(K.get_session().run(iterator.get_next())[1][0])

    def mannual_iter(iter_):
        # Pull batches out of the TF iterator by hand and repack them in the
        # ([input_1, input_2], labels) form that fit_generator expects
        next_batch = iter_.get_next()

        while True:
            img, img2, label = K.get_session().run(next_batch)
            yield [img, img2], label
            # yield [data,data2],label

    with K.get_session() as sess:
        model.fit_generator(mannual_iter(iterator), epochs=3, steps_per_epoch=5,
                            workers=1,  # This is important: the generator pulls batches through the Keras/TF session, so keep a single worker
                            verbose=1
                            )
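
A possible simplification, not from the original post: newer tf.keras versions (roughly TF 1.13+ / TF 2.x) accept a tf.data.Dataset directly in model.fit, so the hand-driven iterator can be dropped if the dataset is structured as ((input_1, input_2), label). A sketch reusing the arrays from the snippet above:

dataset2 = tf.data.Dataset.from_tensor_slices(((data, data2), label)).batch(5).repeat()
model.fit(dataset2, epochs=3, steps_per_epoch=5)
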
  • Single-input, multi-output:
if __name__ == '__main__':
    # (same imports as in the previous example)

    a = Input(shape=(368, 368, 3))
    a2 = Input(shape=(368, 368, 4))  # not used in this single-input example

    conv1 = layers.Conv2D(64, 3)(a)
    conv2 = layers.Conv2D(64, 3)(conv1)
    maxpool = layers.MaxPooling2D(pool_size=8, strides=8, padding='same')(conv2)
    conv3 = layers.Conv2D(5, 1)(maxpool)

    model = keras.Model(inputs=[a], outputs=[maxpool, conv3])
    model.summary()

    model.compile(optimizer=keras.optimizers.SGD(lr=0.05),
                  loss=[keras.losses.mean_squared_error,
                        keras.losses.mean_squared_error,
                        ],
                  loss_weights=[0.1,1])

    import numpy as np

    data = np.random.rand(10, 368, 368, 3)
    data2 = np.random.rand(10, 368, 368, 4)  # not used in this single-input example
    label_maxpool = np.random.rand(10, 46, 46, 64)
    label = np.random.rand(10, 46, 46, 5)

    dataset = tf.data.Dataset.from_tensor_slices((data, label_maxpool, label)).batch(5).repeat()

    iterator = dataset.make_one_shot_iterator()
    # print(next(iterator))
    # print(K.get_session().run(iterator.get_next())[1][0])

    def mannual_iter(iter_):
        # Same manual-iteration pattern as above, now yielding one input array and two label arrays
        next_batch = iter_.get_next()

        while True:
            img, label_maxpool, label = K.get_session().run(next_batch)
            yield [img], [label_maxpool, label]
            # yield [data,data2],label

    with K.get_session() as sess:
        model.fit_generator(mannual_iter(iterator), epochs=3, steps_per_epoch=5,
                            workers=1,  # This is important
                            verbose=1
                            )
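
The same idea works for the single-input, multi-output case, again assuming a tf.keras version whose fit accepts datasets directly: nest the two label arrays on the target side.

dataset2 = tf.data.Dataset.from_tensor_slices((data, (label_maxpool, label))).batch(5).repeat()
model.fit(dataset2, epochs=3, steps_per_epoch=5)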
