# tf.split

Source: https://github.com/tensorflow/docs/blob/r1.3/site/en/api_docs/api_docs/python/tf/split.md
```python
split(
    value,
    num_or_size_splits,
    axis=0,
    num=None,
    name='split'
)
```
Defined in `tensorflow/python/ops/array_ops.py`.
See the guide: Tensor Transformations > Slicing and Joining
Splits a tensor into sub-tensors.
If `num_or_size_splits` is an integer type, `num_split`, then splits `value` along dimension `axis` into `num_split` smaller tensors. Requires that `num_split` evenly divides `value.shape[axis]`.
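A quick sketch of that divisibility requirement (the shapes here are illustrative, not from the original docs):

```python
import tensorflow as tf

x = tf.ones([5, 30])
even = tf.split(x, 3, axis=1)   # OK: 3 evenly divides 30 -> three [5, 10] pieces
# bad = tf.split(x, 4, axis=1)  # would fail: 4 does not evenly divide 30
```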
If `num_or_size_splits` is not an integer type, it is presumed to be a Tensor `size_splits`, and `value` is split into `len(size_splits)` pieces. The shape of the `i`-th piece has the same size as `value` except along dimension `axis`, where the size is `size_splits[i]`.
For example:

```python
# 'value' is a tensor with shape [5, 30]
# Split 'value' into 3 tensors with sizes [4, 15, 11] along dimension 1
split0, split1, split2 = tf.split(value, [4, 15, 11], 1)
tf.shape(split0)  # [5, 4]
tf.shape(split1)  # [5, 15]
tf.shape(split2)  # [5, 11]

# Split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(value, num_or_size_splits=3, axis=1)
tf.shape(split0)  # [5, 10]
```
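As a small aside (not part of the original example), `axis` may also be negative, counting from the last dimension, so `axis=-1` on a rank-2 tensor is the same as `axis=1`:

```python
import tensorflow as tf

x = tf.ones([5, 30])
pieces = tf.split(x, num_or_size_splits=3, axis=-1)  # same as axis=1 for rank 2
# each of the three pieces has shape [5, 10]
```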
#### Args:

* `value`: The `Tensor` to split.
* `num_or_size_splits`: Either a 0-D integer `Tensor` indicating the number of splits along `split_dim`, or a 1-D integer `Tensor` containing the sizes of each output tensor along `split_dim`. If a scalar, it must evenly divide `value.shape[axis]`; otherwise the sum of the sizes along the split dimension must match that of `value`.
* `axis`: A 0-D `int32` `Tensor`. The dimension along which to split. Must be in the range `[-rank(value), rank(value))`. Defaults to 0.
* `num`: Optional, used to specify the number of outputs when it cannot be inferred from the shape of `size_splits`.
* `name`: A name for the operation (optional).

In short: if `num_or_size_splits` is an integer, the tensor is cut evenly along `axis` into that many smaller tensors; if it is a vector, the tensor is cut into as many pieces as the vector has elements, and the vector's elements must sum to the original size of that dimension, as the sketch below illustrates.
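A minimal sketch of the sum constraint (illustrative shapes, not from the original docs):

```python
import tensorflow as tf

x = tf.ones([5, 30])
ok = tf.split(x, [4, 15, 11], axis=1)     # OK: 4 + 15 + 11 == 30
# bad = tf.split(x, [4, 15, 10], axis=1)  # would fail: sizes sum to 29 != 30
```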
#### Returns:

If `num_or_size_splits` is a scalar, returns `num_or_size_splits` `Tensor` objects; if `num_or_size_splits` is a 1-D Tensor, returns `num_or_size_splits.get_shape[0]` `Tensor` objects resulting from splitting `value`.
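In other words, the result is an ordinary Python list of `Tensor` objects, so its length can be checked directly (a small illustrative sketch):

```python
import tensorflow as tf

pieces = tf.split(tf.ones([6, 4]), 3, axis=0)
print(len(pieces))   # 3
print(type(pieces))  # a plain Python list; each element is a [2, 4] Tensor
```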
#### Raises:

* `ValueError`: If `num` is unspecified and cannot be inferred.
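A hedged sketch of when this can happen: if `size_splits` is a tensor whose static length is unknown, the number of outputs cannot be inferred unless `num` is given explicitly (the placeholder here is purely illustrative):

```python
import tensorflow as tf

size_splits = tf.placeholder(tf.int32, shape=[None])  # static length unknown
x = tf.ones([4, 6])
# tf.split(x, size_splits, axis=1)  # would raise ValueError: num cannot be inferred
pieces = tf.split(x, size_splits, axis=1, num=3)  # OK once num is given explicitly
```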
The first standalone script below splits a `[3, 4]` matrix with explicit size lists, first along rows (`axis=0`), then along columns (`axis=1`):

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import os
import sys
import numpy as np
import tensorflow as tf
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))
print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")
value = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12]]
split0, split1, split2 = tf.split(value, [1, 1, 1], 0)
split3, split4, split5 = tf.split(value, [1, 2, 1], 1)
with tf.Session() as sess:
    print("split0:\n", sess.run(split0))
    print('-' * 32)
    print("split1:\n", sess.run(split1))
    print('-' * 32)
    print("split2:\n", sess.run(split2))
    print('-' * 32)
    print("split3:\n", sess.run(split3))
    print('-' * 32)
    print("split4:\n", sess.run(split4))
    print('-' * 32)
    print("split5:\n", sess.run(split5))
```

Output:

```
/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-20 08:15:08.428010: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-20 08:15:08.485819: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 08:15:08.486035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 3.80GiB
2019-08-20 08:15:08.486046: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
split0:
[[1 2 3 4]]
--------------------------------
split1:
[[5 6 7 8]]
--------------------------------
split2:
[[ 9 10 11 12]]
--------------------------------
split3:
[[1]
[5]
[9]]
--------------------------------
split4:
[[ 2 3]
[ 6 7]
[10 11]]
--------------------------------
split5:
[[ 4]
[ 8]
[12]]
Process finished with exit code 0
```
The second script splits the same matrix with integer `num_or_size_splits`, producing equal-sized pieces along each axis:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import os
import sys
import numpy as np
import tensorflow as tf
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))
print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")
value = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12]]
split0, split1, split2 = tf.split(value, 3, axis=0)
split3, split4, split5, split6 = tf.split(value, num_or_size_splits=4, axis=1)
with tf.Session() as sess:
    print("split0:\n", sess.run(split0))
    print('-' * 32)
    print("split1:\n", sess.run(split1))
    print('-' * 32)
    print("split2:\n", sess.run(split2))
    print('-' * 32)
    print("split3:\n", sess.run(split3))
    print('-' * 32)
    print("split4:\n", sess.run(split4))
    print('-' * 32)
    print("split5:\n", sess.run(split5))
    print('-' * 32)
    print("split6:\n", sess.run(split6))
```

Output:

```
/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-19 09:13:37.946792: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-19 09:13:38.027963: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-19 09:13:38.028232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.42GiB
2019-08-19 09:13:38.028256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
split0:
[[1 2 3 4]]
--------------------------------
split1:
[[5 6 7 8]]
--------------------------------
split2:
[[ 9 10 11 12]]
--------------------------------
split3:
[[1]
[5]
[9]]
--------------------------------
split4:
[[ 2]
[ 6]
[10]]
--------------------------------
split5:
[[ 3]
[ 7]
[11]]
--------------------------------
split6:
[[ 4]
[ 8]
[12]]
Process finished with exit code 0
```
The third script shows a typical RNN preprocessing pattern: transpose a `(batch_size, num_step, num_input)` tensor so that time is the leading dimension, reshape, and then `tf.split` into a list of `num_step` tensors of shape `(batch_size, num_input)`. Here `batch_size = 1`:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import os
import sys
import numpy as np
import tensorflow as tf
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))
print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")
batch_size = 1
num_step = 6
num_input = 2
# x_anchor shape: (batch_size, num_step, num_input)
x_anchor = tf.constant([[[0, 1],
                         [2, 3],
                         [4, 5],
                         [6, 7],
                         [8, 9],
                         [10, 11]]], dtype=np.float32)
# Permute num_step and batch_size.
y_anchor = tf.transpose(x_anchor, perm=[1, 0, 2])
# (num_step * batch_size, num_input)
y_reshape = tf.reshape(y_anchor, [num_step * batch_size, num_input])
# Split data because the RNN cell needs a list of inputs for the RNN inner
# loop: num_step tensors of shape (batch_size, num_input).
y_split = tf.split(y_reshape, num_step, 0)
with tf.Session() as sess:
    input_x_anchor = sess.run(x_anchor)
    print("type(input_x_anchor):", type(input_x_anchor))
    print("input_x_anchor.shape:", input_x_anchor.shape)
    print(8 * "++--")
    output_y_anchor = sess.run(y_anchor)
    print("type(output_y_anchor):", type(output_y_anchor))
    print("output_y_anchor.shape:", output_y_anchor.shape)
    print("output_y_anchor:\n", output_y_anchor)
    print(8 * "++--")
    output_y_reshape = sess.run(y_reshape)
    print("type(output_y_reshape):", type(output_y_reshape))
    print("output_y_reshape.shape:", output_y_reshape.shape)
    print("output_y_reshape:\n", output_y_reshape)
    print(8 * "++--")
    output_y_split = sess.run(y_split)
    print("type(output_y_split):", type(output_y_split))
    print("output_y_split:\n", output_y_split)
    print(8 * "++--")
    print("output_y_split[0-5]:")
    for step in range(num_step):
        print("type(output_y_split[%d]):%s" % (step, type(output_y_split[step])))
        print("output_y_split[%d]:\n" % (step), output_y_split[step])
    print(8 * "++--")
    print("output_y_split[0-5]:")
    for step in range(num_step):
        print("type([output_y_split[%d]]):%s" % (step, type([output_y_split[step]])))
        print("[output_y_split[%d]]:\n" % (step), [output_y_split[step]])
    print(8 * "++--")
```

Output:

```
/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-19 10:39:35.983470: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-19 10:39:36.065438: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-19 10:39:36.065680: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.39GiB
2019-08-19 10:39:36.065692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
type(input_x_anchor): <type 'numpy.ndarray'>
input_x_anchor.shape: (1, 6, 2)
++--++--++--++--++--++--++--++--
type(output_y_anchor): <type 'numpy.ndarray'>
output_y_anchor.shape: (6, 1, 2)
output_y_anchor:
[[[ 0. 1.]]
[[ 2. 3.]]
[[ 4. 5.]]
[[ 6. 7.]]
[[ 8. 9.]]
[[10. 11.]]]
++--++--++--++--++--++--++--++--
type(output_y_reshape): <type 'numpy.ndarray'>
output_y_reshape.shape: (6, 2)
output_y_reshape:
[[ 0. 1.]
[ 2. 3.]
[ 4. 5.]
[ 6. 7.]
[ 8. 9.]
[10. 11.]]
++--++--++--++--++--++--++--++--
type(output_y_split): <type 'list'>
output_y_split:
[array([[0., 1.]], dtype=float32), array([[2., 3.]], dtype=float32), array([[4., 5.]], dtype=float32), array([[6., 7.]], dtype=float32), array([[8., 9.]], dtype=float32), array([[10., 11.]], dtype=float32)]
++--++--++--++--++--++--++--++--
output_y_split[0-5]:
type(output_y_split[0]):<type 'numpy.ndarray'>
output_y_split[0]:
[[0. 1.]]
type(output_y_split[1]):<type 'numpy.ndarray'>
output_y_split[1]:
[[2. 3.]]
type(output_y_split[2]):<type 'numpy.ndarray'>
output_y_split[2]:
[[4. 5.]]
type(output_y_split[3]):<type 'numpy.ndarray'>
output_y_split[3]:
[[6. 7.]]
type(output_y_split[4]):<type 'numpy.ndarray'>
output_y_split[4]:
[[8. 9.]]
type(output_y_split[5]):<type 'numpy.ndarray'>
output_y_split[5]:
[[10. 11.]]
++--++--++--++--++--++--++--++--
output_y_split[0-5]:
type([output_y_split[0]]):<type 'list'>
[output_y_split[0]]:
[array([[0., 1.]], dtype=float32)]
type([output_y_split[1]]):<type 'list'>
[output_y_split[1]]:
[array([[2., 3.]], dtype=float32)]
type([output_y_split[2]]):<type 'list'>
[output_y_split[2]]:
[array([[4., 5.]], dtype=float32)]
type([output_y_split[3]]):<type 'list'>
[output_y_split[3]]:
[array([[6., 7.]], dtype=float32)]
type([output_y_split[4]]):<type 'list'>
[output_y_split[4]]:
[array([[8., 9.]], dtype=float32)]
type([output_y_split[5]]):<type 'list'>
[output_y_split[5]]:
[array([[10., 11.]], dtype=float32)]
++--++--++--++--++--++--++--++--
Process finished with exit code 0
```
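As an aside (not part of the original scripts), the same `num_step`-long list can be obtained without the reshape by unstacking the transposed tensor along its leading time dimension; this sketch reuses the `y_anchor` and `num_step` names from the script above:

```python
# tf.unstack removes axis 0 of the (num_step, batch_size, num_input) tensor,
# yielding num_step tensors of shape (batch_size, num_input) -- the same
# pieces that tf.reshape + tf.split produce above.
y_unstack = tf.unstack(y_anchor, num=num_step, axis=0)
```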
The fourth script repeats the pattern with `batch_size = 2`, which makes the row interleaving produced by `tf.transpose` + `tf.reshape` easier to see: each split piece now holds one time step for both batch elements:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import os
import sys
import numpy as np
import tensorflow as tf
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
current_directory = os.path.dirname(os.path.abspath(__file__))
print(16 * "++--")
print("current_directory:", current_directory)
print(16 * "++--")
batch_size = 2
num_step = 6
num_input = 2
# x_anchor shape: (batch_size, num_step, num_input)
x_anchor = tf.constant([[[0, 1],
                         [2, 3],
                         [4, 5],
                         [6, 7],
                         [8, 9],
                         [10, 11]],
                        [[12, 13],
                         [14, 15],
                         [16, 17],
                         [18, 19],
                         [20, 21],
                         [22, 23]]], dtype=np.float32)
# Permute num_step and batch_size.
y_anchor = tf.transpose(x_anchor, perm=[1, 0, 2])
# (num_step * batch_size, num_input)
y_reshape = tf.reshape(y_anchor, [num_step * batch_size, num_input])
# Split data because the RNN cell needs a list of inputs for the RNN inner
# loop: num_step tensors of shape (batch_size, num_input).
y_split = tf.split(y_reshape, num_step, 0)
with tf.Session() as sess:
    input_x_anchor = sess.run(x_anchor)
    print("type(input_x_anchor):", type(input_x_anchor))
    print("input_x_anchor.shape:", input_x_anchor.shape)
    print(8 * "++--")
    output_y_anchor = sess.run(y_anchor)
    print("type(output_y_anchor):", type(output_y_anchor))
    print("output_y_anchor.shape:", output_y_anchor.shape)
    print("output_y_anchor:\n", output_y_anchor)
    print(8 * "++--")
    output_y_reshape = sess.run(y_reshape)
    print("type(output_y_reshape):", type(output_y_reshape))
    print("output_y_reshape.shape:", output_y_reshape.shape)
    print("output_y_reshape:\n", output_y_reshape)
    print(8 * "++--")
    output_y_split = sess.run(y_split)
    print("type(output_y_split):", type(output_y_split))
    print("output_y_split:\n", output_y_split)
    print(8 * "++--")
    print("output_y_split[0-5]:")
    for step in range(num_step):
        print("type(output_y_split[%d]):%s" % (step, type(output_y_split[step])))
        print("output_y_split[%d]:\n" % (step), output_y_split[step])
    print(8 * "++--")
    print("output_y_split[0-5]:")
    for step in range(num_step):
        print("type([output_y_split[%d]]):%s" % (step, type([output_y_split[step]])))
        print("[output_y_split[%d]]:\n" % (step), [output_y_split[step]])
    print(8 * "++--")
```

Output:

```
/usr/bin/python2.7 /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow/yongqiang.py
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
current_directory: /home/strong/tensorflow_work/R2CNN_Faster-RCNN_Tensorflow
++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--
2019-08-19 11:18:03.818784: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-08-19 11:18:03.896519: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-19 11:18:03.896755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.38GiB
2019-08-19 11:18:03.896766: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
type(input_x_anchor): <type 'numpy.ndarray'>
input_x_anchor.shape: (2, 6, 2)
++--++--++--++--++--++--++--++--
type(output_y_anchor): <type 'numpy.ndarray'>
output_y_anchor.shape: (6, 2, 2)
output_y_anchor:
[[[ 0. 1.]
[12. 13.]]
[[ 2. 3.]
[14. 15.]]
[[ 4. 5.]
[16. 17.]]
[[ 6. 7.]
[18. 19.]]
[[ 8. 9.]
[20. 21.]]
[[10. 11.]
[22. 23.]]]
++--++--++--++--++--++--++--++--
type(output_y_reshape): <type 'numpy.ndarray'>
output_y_reshape.shape: (12, 2)
output_y_reshape:
[[ 0. 1.]
[12. 13.]
[ 2. 3.]
[14. 15.]
[ 4. 5.]
[16. 17.]
[ 6. 7.]
[18. 19.]
[ 8. 9.]
[20. 21.]
[10. 11.]
[22. 23.]]
++--++--++--++--++--++--++--++--
type(output_y_split): <type 'list'>
output_y_split:
[array([[ 0., 1.],
[12., 13.]], dtype=float32), array([[ 2., 3.],
[14., 15.]], dtype=float32), array([[ 4., 5.],
[16., 17.]], dtype=float32), array([[ 6., 7.],
[18., 19.]], dtype=float32), array([[ 8., 9.],
[20., 21.]], dtype=float32), array([[10., 11.],
[22., 23.]], dtype=float32)]
++--++--++--++--++--++--++--++--
output_y_split[0-5]:
type(output_y_split[0]):<type 'numpy.ndarray'>
output_y_split[0]:
[[ 0. 1.]
[12. 13.]]
type(output_y_split[1]):<type 'numpy.ndarray'>
output_y_split[1]:
[[ 2. 3.]
[14. 15.]]
type(output_y_split[2]):<type 'numpy.ndarray'>
output_y_split[2]:
[[ 4. 5.]
[16. 17.]]
type(output_y_split[3]):<type 'numpy.ndarray'>
output_y_split[3]:
[[ 6. 7.]
[18. 19.]]
type(output_y_split[4]):<type 'numpy.ndarray'>
output_y_split[4]:
[[ 8. 9.]
[20. 21.]]
type(output_y_split[5]):<type 'numpy.ndarray'>
output_y_split[5]:
[[10. 11.]
[22. 23.]]
++--++--++--++--++--++--++--++--
output_y_split[0-5]:
type([output_y_split[0]]):<type 'list'>
[output_y_split[0]]:
[array([[ 0., 1.],
[12., 13.]], dtype=float32)]
type([output_y_split[1]]):<type 'list'>
[output_y_split[1]]:
[array([[ 2., 3.],
[14., 15.]], dtype=float32)]
type([output_y_split[2]]):<type 'list'>
[output_y_split[2]]:
[array([[ 4., 5.],
[16., 17.]], dtype=float32)]
type([output_y_split[3]]):<type 'list'>
[output_y_split[3]]:
[array([[ 6., 7.],
[18., 19.]], dtype=float32)]
type([output_y_split[4]]):<type 'list'>
[output_y_split[4]]:
[array([[ 8., 9.],
[20., 21.]], dtype=float32)]
type([output_y_split[5]]):<type 'list'>
[output_y_split[5]]:
[array([[10., 11.],
[22., 23.]], dtype=float32)]
++--++--++--++--++--++--++--++--
Process finished with exit code 0
```