tensorflow-GPU: insufficient GPU memory (OOM)

2020-03-02 10:12:59.069121: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-03-02 10:13:00.096254: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 4.00GiB freeMemory: 3.30GiB
2020-03-02 10:13:00.102577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2020-03-02 10:13:00.527763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-02 10:13:00.531075: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2020-03-02 10:13:00.532523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2020-03-02 10:13:00.534046: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3007 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1
2020-03-02 10:13:00.541463: I tensorflow/core/common_runtime/direct_session.cc:317] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1

random_uniform/RandomUniform: (RandomUniform): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.548548: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform/RandomUniform: (RandomUniform)/job:localhost/replica:0/task:0/device:GPU:0
random_uniform/sub: (Sub): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.552009: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform/sub: (Sub)/job:localhost/replica:0/task:0/device:GPU:0
random_uniform/mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.557408: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform/mul: (Mul)/job:localhost/replica:0/task:0/device:GPU:0
random_uniform: (Add): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.572255: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform: (Add)/job:localhost/replica:0/task:0/device:GPU:0
transpose: (Transpose): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.585283: I tensorflow/core/common_runtime/placer.cc:1059] transpose: (Transpose)/job:localhost/replica:0/task:0/device:GPU:0
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.601770: I tensorflow/core/common_runtime/placer.cc:1059] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
Sum: (Sum): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.608635: I tensorflow/core/common_runtime/placer.cc:1059] Sum: (Sum)/job:localhost/replica:0/task:0/device:GPU:0
random_uniform/shape: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.622041: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform/shape: (Const)/job:localhost/replica:0/task:0/device:GPU:0
random_uniform/min: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.636870: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform/min: (Const)/job:localhost/replica:0/task:0/device:GPU:0
random_uniform/max: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.653088: I tensorflow/core/common_runtime/placer.cc:1059] random_uniform/max: (Const)/job:localhost/replica:0/task:0/device:GPU:0
transpose/perm: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.657859: I tensorflow/core/common_runtime/placer.cc:1059] transpose/perm: (Const)/job:localhost/replica:0/task:0/device:GPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.672797: I tensorflow/core/common_runtime/placer.cc:1059] Const: (Const)/job:localhost/replica:0/task:0/device:GPU:0
2020-03-02 10:13:00.698245: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library cublas64_100.dll locally
2020-03-02 10:13:11.070585: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.49GiB.  Current allocation summary follows.
2020-03-02 10:13:11.081488: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (256):   Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.090922: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (512):   Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.100314: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (1024):  Total Chunks: 1, Chunks in use: 1. 1.3KiB allocated for chunks. 1.3KiB in use in bin. 1.0KiB client-requested in use in bin.
2020-03-02 10:13:11.111221: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (2048):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.117813: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (4096):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.124073: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (8192):  Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.130266: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (16384):         Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.138576: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (32768):         Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.154680: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (65536):         Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.171067: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (131072):        Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.186765: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (262144):        Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.203541: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (524288):        Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.219648: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (1048576):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.235084: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (2097152):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.251084: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (4194304):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.266306: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (8388608):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.274447: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (16777216):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.290123: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (33554432):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.306677: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (67108864):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.322973: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (134217728):     Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2020-03-02 10:13:11.342718: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (268435456):     Total Chunks: 2, Chunks in use: 1. 2.94GiB allocated for chunks. 1.49GiB in use in bin. 1.49GiB client-requested in use in bin.
2020-03-02 10:13:11.367066: I tensorflow/core/common_runtime/bfc_allocator.cc:613] Bin for 1.49GiB was 256.00MiB, Chunk State:
2020-03-02 10:13:11.371126: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 1.45GiB | Requested Size: 0B | in_use: 0, prev:   Size: 1.49GiB | Requested Size: 1.49GiB | in_use: 1
2020-03-02 10:13:11.383105: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0000000902C00000 of size 1280
2020-03-02 10:13:11.396344: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0000000902C00500 of size 1600000000
2020-03-02 10:13:11.401904: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Free  at 00000009621E1500 of size 1553695744
2020-03-02 10:13:11.405850: I tensorflow/core/common_runtime/bfc_allocator.cc:638]      Summary of in-use Chunks by size:
2020-03-02 10:13:11.416262: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 1280 totalling 1.3KiB
2020-03-02 10:13:11.419257: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 1600000000 totalling 1.49GiB
2020-03-02 10:13:11.429068: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 1.49GiB
2020-03-02 10:13:11.437217: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats:
Limit:                  3153697177
InUse:                  1600001280
MaxInUse:               1600001280
NumAllocs:                       2
MaxAllocSize:           1600000000

2020-03-02 10:13:11.478396: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ***************************************************_________________________________________________
2020-03-02 10:13:11.486216: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at transpose_op.cc:199 : Resource exhausted: OOM when allocating tensor with shape[20000,20000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[20000,20000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node transpose}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[{{node Sum}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "untitled1.py", line 27, in <module>
    result = session.run(sum_operation)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[20000,20000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node transpose (defined at untitled1.py:22) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node Sum (defined at untitled1.py:23) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'transpose', defined at:
  File "untitled1.py", line 22, in <module>
    dot_operation = tf.matmul(random_matrix, tf.transpose(random_matrix))
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1666, in transpose
    ret = transpose_fn(a, perm, name=name)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 10238, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\ProgramFiles\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[20000,20000] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node transpose (defined at untitled1.py:22) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node Sum (defined at untitled1.py:23) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

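For reference, the placer log and the traceback (untitled1.py lines 22, 23 and 27) pin down the shape of the script. A minimal TF 1.x reconstruction follows; the matmul/transpose line is verbatim from the traceback, while the tf.random_uniform and tf.reduce_sum calls and the log_device_placement flag are assumptions inferred from the logged ops:

import tensorflow as tf  # TF 1.x Session API, as in the traceback

# The random_uniform/* ops in the placer log: a 20000x20000 float32 matrix.
random_matrix = tf.random_uniform(shape=(20000, 20000), minval=0, maxval=1)

# Verbatim from the traceback (untitled1.py:22).
dot_operation = tf.matmul(random_matrix, tf.transpose(random_matrix))

# untitled1.py:23 defines the "Sum" node; tf.reduce_sum is the assumed call.
sum_operation = tf.reduce_sum(dot_operation)

# log_device_placement inferred from the "Device mapping" output above.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as session:
    result = session.run(sum_operation)  # untitled1.py:27 -- this is where it OOMs
    print(result)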
Could some expert take a look at what is going on here? My laptop's GPU is a GTX 1050 Ti with 4 GB of memory, but when I run tensorflow-gpu the log says only 1.49 GB is in use. Why doesn't the run use the full 3.3 GB of free memory? A 10000×10000 input works fine, but a 20000×20000 input throws the error above. Any pointers would be greatly appreciated!
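A quick sanity check of the numbers (my own arithmetic, not from the log) shows that the 1.49 GiB figure is simply the size of one tensor, not a cap TensorFlow imposed:

# One 20000 x 20000 float32 tensor (4 bytes per element):
bytes_per_tensor = 20000 * 20000 * 4   # 1,600,000,000 B -- exactly MaxAllocSize above
print(bytes_per_tensor / 2**30)        # ~1.49 GiB, the size in the OOM message

limit = 3153697177                     # BFC "Limit" from the stats, ~2.94 GiB
# The transpose needs its input and its output alive at the same time:
print(2 * bytes_per_tensor > limit)    # True -> the second 1.49 GiB allocation fails

# At 10000 x 10000 each tensor is only ~0.37 GiB, so several fit at once.

In other words, TensorFlow did reserve ~3007 MB of the card (the BFC Limit of ~2.94 GiB); the "InUse" 1.49 GiB is just the one tensor that had already been allocated when a second allocation of the same size could no longer fit.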


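For completeness, a sketch of the standard TF 1.x knobs for how much GPU memory the process reserves. Note that neither of them can make the 20000×20000 case fit: the graph needs three 1.49 GiB buffers (the random matrix, the transpose result, and the MatMul output), about 4.47 GiB in total, which is more than a 4 GB card can hold.

import tensorflow as tf  # TF 1.x

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand, not all up front
# Alternatively, cap the reserved share of the card:
# config.gpu_options.per_process_gpu_memory_fraction = 0.9

with tf.Session(config=config) as session:
    pass  # build and run the graph here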