1. Read the contents of `layer_names` from the h5 file, then read the weight names under each layer; collect all weights into an array ordered first by layer_name, then by weight_name.
2. Reorganize the layers of the graph into a dict with key=layer_name and value=layer. (Several layers may share one layer_name; layers with duplicate names are grouped into the same list.)
3. Iterate over every layer in the h5 file, read all of that layer's weights, find the corresponding layer in the graph, and check one by one whether the size of each h5 weight matches the size of the corresponding weight in the graph's layer.
If they match, the binding is complete; if not, skip the layer or raise an exception.
Summary: the layers in the graph are a subset of the layers stored in the weights file.
If you only want to load the weights of a subgraph, you need to find the layers belonging to that subgraph and their corresponding names.
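Step 2 above (grouping graph layers by name, duplicates included) can be sketched in plain Python. `FakeLayer` and `build_name_index` are hypothetical stand-ins introduced here for illustration; only the `.name` attribute of a real Keras layer matters for this step:

```python
# Minimal sketch of step 2: build a reverse index mapping a layer name
# to the list of layers carrying that name. FakeLayer is a stand-in
# for a Keras layer object.
class FakeLayer:
    def __init__(self, name):
        self.name = name

def build_name_index(layers):
    index = {}
    for layer in layers:
        if layer.name:  # unnamed layers are skipped
            index.setdefault(layer.name, []).append(layer)
    return index

layers = [FakeLayer('conv1'), FakeLayer('dense'), FakeLayer('conv1')]
index = build_name_index(layers)
# Two distinct layers share the name 'conv1', so both land in one list.
```

Because the values are lists, a weight array stored once in the h5 file can be bound to every graph layer that shares the name.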
def load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch=False):
    """Implements name-based weight loading.

    (instead of topological weight loading).
    Layers that have no matching name are skipped.

    Arguments:
        f: A pointer to a HDF5 group.
        layers: a list of target layers.
        skip_mismatch: Boolean, whether to skip loading of layers
            where there is a mismatch in the number of weights,
            or a mismatch in the shape of the weights.

    Raises:
        ValueError: in case of mismatch between provided layers
            and weights file and skip_mismatch=False.
    """
    if 'keras_version' in f.attrs:
        original_keras_version = f.attrs['keras_version'].decode('utf8')
    else:
        original_keras_version = '1'
    if 'backend' in f.attrs:
        original_backend = f.attrs['backend'].decode('utf8')
    else:
        original_backend = None

    # New file format.
    # 1. Read the contents stored under 'layer_names' in the h5 file.
    layer_names = load_attributes_from_hdf5_group(f, 'layer_names')

    # 2. Build a reverse index from layer name to the list of layers
    #    with that name.
    index = {}
    for layer in layers:
        if layer.name:
            index.setdefault(layer.name, []).append(layer)

    # We batch weight value assignments in a single backend call
    # which provides a speedup in TensorFlow.
    weight_value_tuples = []
    for k, name in enumerate(layer_names):
        g = f[name]
        # 3. Read all weight names under this layer in the h5 file and load
        #    the corresponding values into weight_values.
        weight_names = load_attributes_from_hdf5_group(g, 'weight_names')
        weight_values = [np.asarray(g[weight_name]) for weight_name in weight_names]

        # 4. Fetch the matching layer objects from the graph and compare the
        #    number of weights on each side.
        #    4.1 If they match, queue weight_values for assignment to the layer.
        #    4.2 If not: skip the layer when skip_mismatch is set; otherwise raise.
        for layer in index.get(name, []):
            symbolic_weights = _legacy_weights(layer)
            weight_values = preprocess_weights_for_loading(
                layer, weight_values, original_keras_version, original_backend)
            if len(weight_values) != len(symbolic_weights):
                if skip_mismatch:
                    logging.warning('Skipping loading of weights for '
                                    'layer {}'.format(layer.name) + ' due to mismatch '
                                    'in number of weights ({} vs {}).'.format(
                                        len(symbolic_weights), len(weight_values)))
                    continue
                raise ValueError('Layer #' + str(k) + ' (named "' + layer.name +
                                 '") expects ' + str(len(symbolic_weights)) +
                                 ' weight(s), but the saved weights' + ' have ' +
                                 str(len(weight_values)) + ' element(s).')
            # Set values.
            for i in range(len(weight_values)):
                if K.int_shape(symbolic_weights[i]) != weight_values[i].shape:
                    if skip_mismatch:
                        logging.warning('Skipping loading of weights for '
                                        'layer {}'.format(layer.name) + ' due to '
                                        'mismatch in shape ({} vs {}).'.format(
                                            symbolic_weights[i].shape,
                                            weight_values[i].shape))
                        continue
                    raise ValueError('Layer #' + str(k) + ' (named "' + layer.name +
                                     '"), weight ' + str(symbolic_weights[i]) +
                                     ' has shape {}'.format(K.int_shape(symbolic_weights[i])) +
                                     ', but the saved weight has shape ' +
                                     str(weight_values[i].shape) + '.')
                else:
                    weight_value_tuples.append((symbolic_weights[i], weight_values[i]))
    K.batch_set_value(weight_value_tuples)
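The per-weight matching at the heart of step 4 can be isolated as a small NumPy sketch. The function `match_weights` and its simplified signature are mine, not Keras API; the real code compares `K.int_shape` of symbolic weights rather than plain tuples:

```python
import numpy as np

# Hypothetical, simplified version of the matching logic in step 4.
# expected_shapes plays the role of the graph layer's symbolic weight
# shapes; saved plays the role of the arrays loaded from the h5 file.
def match_weights(expected_shapes, saved, skip_mismatch=False):
    matched = []
    # 4.2a: count mismatch -> skip the whole layer or raise.
    if len(saved) != len(expected_shapes):
        if skip_mismatch:
            return matched
        raise ValueError('expects {} weight(s), got {}'.format(
            len(expected_shapes), len(saved)))
    for i, value in enumerate(saved):
        # 4.2b: shape mismatch -> skip just this weight or raise.
        if value.shape != expected_shapes[i]:
            if skip_mismatch:
                continue
            raise ValueError('shape mismatch: {} vs {}'.format(
                expected_shapes[i], value.shape))
        matched.append(value)  # 4.1: queue the value for assignment
    return matched

kernel = np.zeros((3, 3))
bias = np.zeros((3,))
ok = match_weights([(3, 3), (3,)], [kernel, bias])
# With a wrong expected shape and skip_mismatch=True, only the bad
# weight is dropped; the kernel is still matched.
partial = match_weights([(3, 3), (4,)], [kernel, bias], skip_mismatch=True)
```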
https://faroit.com/keras-docs/1.2.0/backend/
https://github.com/yuanyuanli85/Keras-Multiple-Process-Prediction
Keras is a model-level library, providing high-level building blocks for developing deep learning models. It does not itself handle low-level operations such as tensor products, convolutions and so on. Instead, it relies on a specialized, well-optimized tensor manipulation library to do so, serving as the "backend engine" of Keras. Rather than picking one single tensor library and making the implementation of Keras tied to that library, Keras handles the problem in a modular way, and several different backend engines can be plugged seamlessly into Keras.
Keras is a high-level framework that does not provide its own low-level implementation; that work is delegated to TensorFlow or Theano. At the same time, Keras exposes APIs for controlling the underlying framework, such as set_session. Keras can therefore embed well into TensorFlow and be made to execute the way we want.
However, there are a few caveats to keep in mind during execution:
1. A Session cannot be shared across processes (https://github.com/keras-team/keras/issues/9964).
2. A Session must be closed promptly once execution finishes.
3. Import keras inside the worker-process code rather than at the global level. (https://stackoverflow.com/questions/42504669/keras-tensorflow-and-multiprocessing-in-python?answertab=votes#tab-top)
# Code taken from the last link above (thanks to the author)
def _training_worker(train_params):
    import keras  # import inside the function, not at module level
    model = obtain_model(train_params)
    model.fit(train_params)
    send_message_to_main_process(...)

def train_new_model(train_params):
    training_process = multiprocessing.Process(target=_training_worker,
                                               args=(train_params,))
    training_process.start()  # run each keras model in its own process
    get_message_from_training_process(...)
    training_process.join()
# Control how the underlying framework executes
def run_keras_model(CPU_num):
    import keras
    sess_config = tf.ConfigProto(device_count={"CPU": CPU_num},
                                 intra_op_parallelism_threads=CPU_num,
                                 # use_per_session_threads=True,
                                 allow_soft_placement=False,
                                 log_device_placement=False)
    sess = tf.Session(config=sess_config)
    keras.backend.tensorflow_backend.set_session(sess)
    # .... run model .....
    # result = sess.run(out, feed_dict=feed)
    sess.close()

if __name__ == '__main__':
    import multiprocessing as mp
    mp.Process(target=run_keras_model, args=(5,)).start()
    mp.Process(target=run_keras_model, args=(1,)).start()