TensorFlow: Model Parallelism

The official TensorFlow tutorial shows how to use multiple GPUs, but it is based on data parallelism: the data is split into several parts and each part is processed by the same model (with shared weights).
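
For contrast, here is a minimal data-parallel sketch in the TF 1.x style used in this post. build_model is a hypothetical function that returns the loss for one batch slice, and the two-GPU split is only an illustration, not part of the original code:

import tensorflow as tf

def data_parallel_loss(build_model, x, num_gpus=2):
    # Split one large batch into equal slices, one slice per GPU.
    slices = tf.split(x, num_gpus, axis=0)
    tower_losses = []
    for i, x_i in enumerate(slices):
        with tf.device("/gpu:%d" % i):
            # Reuse the same variables so every tower shares one set of weights.
            with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
                tower_losses.append(build_model(x_i))
    # Averaging the tower losses and then differentiating is equivalent to
    # averaging the per-tower gradients of the shared variables.
    return tf.reduce_mean(tf.stack(tower_losses))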

While writing my own code, however, I ran into the case where a single model is too large to run on one GPU, and that is where model parallelism is needed.

After digging through a lot of material online, I found a blog post that explains this well and comes with code on GitHub: How to Use Distributed TensorFlow to Split Your TensorFlow Graph Between Multiple Machines.
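
That post relies on the same device-placement mechanism, only with remote devices instead of local GPUs. A rough sketch of the idea (the cluster layout and host addresses below are my own illustrative assumptions, not taken from the post):

import tensorflow as tf

# Two machines in one "worker" job; the addresses are placeholders.
cluster = tf.train.ClusterSpec({
    "worker": ["machine-a:2222", "machine-b:2222"],
})
# This process is task 0; a matching server with task_index=1 runs on machine-b.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Pin different parts of the graph to different machines, exactly like
# pinning different layers to /gpu:0 and /gpu:1 on a single machine.
with tf.device("/job:worker/task:0"):
    a = tf.Variable(tf.random_normal([1000, 1000]))
with tf.device("/job:worker/task:1"):
    b = tf.matmul(a, a)

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(b)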

The implementation itself is actually very simple: partition the model so that different layers are computed on different GPUs, as in the example below.

import tensorflow as tf
import weightnorm  # module providing the weight-normalized dense layer used below


# A [9k, 9k, 9k] densely connected network. The first two layers are trained
# on GPU 0 and the last layer on GPU 1, because the output-layer weight
# matrix is roughly [28k, 10k] and a single GPU runs out of memory.
def dense_gpu(input, keep_prob):
    units = 9000
    with tf.device("/gpu:0"):
        input_layer = input
        dropout1 = tf.nn.dropout(input_layer, keep_prob=keep_prob)

        # Dense Layer1
        hidden1 = weightnorm.dense(inputs=dropout1, units=units)
        dense1 = tf.keras.layers.concatenate([hidden1, input_layer])
        dropout2 = tf.nn.dropout(dense1, keep_prob=keep_prob)
        activation1 = tf.nn.leaky_relu(dropout2)

        hidden2 = weightnorm.dense(inputs=activation1, units=units)
        dense2 = tf.keras.layers.concatenate([hidden2, dense1])
        dropout3 = tf.nn.dropout(dense2, keep_prob=keep_prob)
        activation2 = tf.nn.leaky_relu(dropout3)

    with tf.device("/gpu:1"):
        hidden3 = weightnorm.dense(inputs=activation2, units=units)
        dense3 = tf.keras.layers.concatenate([hidden3, dense2])
        dropout4 = tf.nn.dropout(dense3, keep_prob=keep_prob)
        activation3 = tf.nn.leaky_relu(dropout4)

        # Output Layer
        # 9520 is the length of the target gene
        output = weightnorm.dense(inputs=activation3, units=9520)

    return output
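
To verify that the ops really end up on the intended devices, build the graph and create the session with allow_soft_placement (so ops without a GPU kernel fall back to the CPU) and log_device_placement turned on. The input dimension below is an illustrative assumption:

x = tf.placeholder(tf.float32, shape=[None, 1000])  # 1000 is a placeholder input size
keep_prob = tf.placeholder(tf.float32)
output = dense_gpu(x, keep_prob)

config = tf.ConfigProto(
    allow_soft_placement=True,   # let ops without a GPU kernel fall back to the CPU
    log_device_placement=True)   # print which device each op was placed on
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())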
