GraphSAGE: running example_supervised.sh in Docker

Because example_supervised.sh does not run out of the box with the Python 2.7 and pip that ship with the Linux image,
this post explains how to work around the problem.

  1. The problem:
root@862cbef0b220:/notebooks# sh example_supervised.sh

/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. 
  from ._conv import register_converters as _register_converters 
Traceback (most recent call last): 
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main 
    "__main__", fname, loader, pkg_name) 
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code 
    exec code in run_globals 
  File "/notebooks/graphsage/supervised_train.py", line 59, in <module> 
    os.environ["CUDA_VISIBLE_DEVICES"]=str(FLAGS.gpu) 
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/flags.py", line 84, in __getattr__ 
    wrapped(_sys.argv) 
  File "/usr/local/lib/python2.7/dist-packages/absl/flags/_flagvalues.py", line 630, in __call__ 
    name, value, suggestions=suggestions) 
absl.flags._exceptions.UnrecognizedFlagError: Unknown command line flag 'sigmoid '. Did you mean: sigmoid ? 
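
The telltale detail is the quoted flag name 'sigmoid ' with trailing whitespace: absl matches flag names exactly, so the shell must have handed it an extra invisible character along with the flag. The most likely culprit (an assumption, inferred from the error message) is a carriage return left over from Windows-style CRLF line endings in example_supervised.sh, which survives the shell's word splitting and sticks to the last token of the line. The next section shows two workarounds.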
  2. Edit the contents of example_supervised.sh
#python -m graphsage.supervised_train --train_prefix ./example_data/toy-ppi --model graphsage_mean --sigmoid
#Note: keep one trailing space at the end of the line, otherwise the value of --model is not recognized as graphsage_mean (the invisible trailing character then forms its own harmless token instead of sticking to graphsage_mean)
python -m graphsage.supervised_train --train_prefix ./example_data/toy-ppi --model graphsage_mean 
#Keep only one of these two lines (without or with --sigmoid true)
python -m graphsage.supervised_train --train_prefix ./example_data/toy-ppi --model graphsage_mean --sigmoid true 
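
To see the parsing behavior concretely, below is a minimal standalone sketch, not part of GraphSAGE (the file name flags_demo.py and the argv values are made up for illustration), reproducing both the failure and the working form with absl.flags:

# flags_demo.py -- hypothetical demo of absl boolean-flag parsing
from __future__ import print_function
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_boolean('sigmoid', False, 'whether to use sigmoid loss')

def main(argv):
    # leftover positional tokens (e.g. a stray 'true\r') end up in argv
    print('sigmoid =', FLAGS.sigmoid, '| leftover argv:', argv[1:])

if __name__ == '__main__':
    # app.run(main, argv=['demo', '--sigmoid\r'])  # raises UnrecognizedFlagError:
    #                                              # unknown flag 'sigmoid\r'
    app.run(main, argv=['demo', '--sigmoid', 'true\r'])  # parses: --sigmoid alone sets
                                                         # the flag, and the \r sticks to
                                                         # the harmless positional 'true'

If CRLF line endings are indeed the root cause, converting the script in place (for example with dos2unix example_supervised.sh) should also make the original one-line --sigmoid form work.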
  3. Result of running with --sigmoid true:
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Loading training data..
Removed 0 nodes that lacked proper annotations due to networkx versioning issues
Loaded data.. now preprocessing..
Done loading training data..
2021-01-30 12:30:28.895600: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
Epoch: 0001
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)
Iter: 0000 train_loss= 0.69432 train_f1_mic= 0.36073 train_f1_mac= 0.31235 val_loss= 0.66627 val_f1_mic= 0.39092 val_f1_mac= 0.15853 time= 0.46338
Iter: 0005 train_loss= 0.58514 train_f1_mic= 0.37472 train_f1_mac= 0.09606 val_loss= 0.66627 val_f1_mic= 0.39092 val_f1_mac= 0.15853 time= 0.13751
Iter: 0010 train_loss= 0.54713 train_f1_mic= 0.39910 train_f1_mac= 0.10233 val_loss= 0.66627 val_f1_mic= 0.39092 val_f1_mac= 0.15853 time= 0.10775
Iter: 0015 train_loss= 0.54716 train_f1_mic= 0.37314 train_f1_mac= 0.08882 val_loss= 0.66627 val_f1_mic= 0.39092 val_f1_mac= 0.15853 time= 0.09711
Epoch: 0002
Iter: 0001 train_loss= 0.54240 train_f1_mic= 0.38882 train_f1_mac= 0.09738 val_loss= 0.58213 val_f1_mic= 0.39087 val_f1_mac= 0.10418 time= 0.09446
Iter: 0006 train_loss= 0.54390 train_f1_mic= 0.39436 train_f1_mac= 0.10239 val_loss= 0.58213 val_f1_mic= 0.39087 val_f1_mac= 0.10418 time= 0.09056
Iter: 0011 train_loss= 0.53871 train_f1_mic= 0.38232 train_f1_mac= 0.09799 val_loss= 0.58213 val_f1_mic= 0.39087 val_f1_mac= 0.10418 time= 0.08786
Iter: 0016 train_loss= 0.52769 train_f1_mic= 0.40183 train_f1_mac= 0.10887 val_loss= 0.58213 val_f1_mic= 0.39087 val_f1_mac= 0.10418 time= 0.08600
Epoch: 0003
Iter: 0002 train_loss= 0.53543 train_f1_mic= 0.40884 train_f1_mac= 0.12267 val_loss= 0.54932 val_f1_mic= 0.40501 val_f1_mac= 0.12482 time= 0.08568
Iter: 0007 train_loss= 0.53182 train_f1_mic= 0.41435 train_f1_mac= 0.12918 val_loss= 0.54932 val_f1_mic= 0.40501 val_f1_mac= 0.12482 time= 0.08414
Iter: 0012 train_loss= 0.53737 train_f1_mic= 0.41591 train_f1_mac= 0.13511 val_loss= 0.54932 val_f1_mic= 0.40501 val_f1_mac= 0.12482 time= 0.08308
Iter: 0017 train_loss= 0.51917 train_f1_mic= 0.42600 train_f1_mac= 0.14416 val_loss= 0.54932 val_f1_mic= 0.40501 val_f1_mac= 0.12482 time= 0.08219
Epoch: 0004
Iter: 0003 train_loss= 0.52194 train_f1_mic= 0.43495 train_f1_mac= 0.15523 val_loss= 0.55966 val_f1_mic= 0.42360 val_f1_mac= 0.15764 time= 0.08276
Iter: 0008 train_loss= 0.51926 train_f1_mic= 0.43632 train_f1_mac= 0.17155 val_loss= 0.55966 val_f1_mic= 0.42360 val_f1_mac= 0.15764 time= 0.08191
Iter: 0013 train_loss= 0.51333 train_f1_mic= 0.44723 train_f1_mac= 0.18020 val_loss= 0.55966 val_f1_mic= 0.42360 val_f1_mac= 0.15764 time= 0.08141
Iter: 0018 train_loss= 0.52652 train_f1_mic= 0.44222 train_f1_mac= 0.16950 val_loss= 0.55966 val_f1_mic= 0.42360 val_f1_mac= 0.15764 time= 0.08106
Epoch: 0005
Iter: 0004 train_loss= 0.51005 train_f1_mic= 0.47784 train_f1_mac= 0.21889 val_loss= 0.53217 val_f1_mic= 0.47446 val_f1_mac= 0.21017 time= 0.08141
Iter: 0009 train_loss= 0.52249 train_f1_mic= 0.46545 train_f1_mac= 0.22623 val_loss= 0.53217 val_f1_mic= 0.47446 val_f1_mac= 0.21017 time= 0.08102
Iter: 0014 train_loss= 0.51685 train_f1_mic= 0.45182 train_f1_mac= 0.18890 val_loss= 0.53217 val_f1_mic= 0.47446 val_f1_mac= 0.21017 time= 0.08055
Epoch: 0006
Iter: 0000 train_loss= 0.51084 train_f1_mic= 0.47247 train_f1_mac= 0.22091 val_loss= 0.52850 val_f1_mic= 0.44377 val_f1_mac= 0.20533 time= 0.08110
Iter: 0005 train_loss= 0.50129 train_f1_mic= 0.48252 train_f1_mac= 0.23852 val_loss= 0.52850 val_f1_mic= 0.44377 val_f1_mac= 0.20533 time= 0.08077
Iter: 0010 train_loss= 0.50691 train_f1_mic= 0.46397 train_f1_mac= 0.22143 val_loss= 0.52850 val_f1_mic= 0.44377 val_f1_mac= 0.20533 time= 0.08046
Iter: 0015 train_loss= 0.52503 train_f1_mic= 0.48250 train_f1_mac= 0.24665 val_loss= 0.52850 val_f1_mic= 0.44377 val_f1_mac= 0.20533 time= 0.08004
Epoch: 0007
Iter: 0001 train_loss= 0.50124 train_f1_mic= 0.47579 train_f1_mac= 0.24212 val_loss= 0.51397 val_f1_mic= 0.50244 val_f1_mac= 0.27052 time= 0.08042
Iter: 0006 train_loss= 0.51077 train_f1_mic= 0.46243 train_f1_mac= 0.23261 val_loss= 0.51397 val_f1_mic= 0.50244 val_f1_mac= 0.27052 time= 0.08000
Iter: 0011 train_loss= 0.49992 train_f1_mic= 0.50862 train_f1_mac= 0.27917 val_loss= 0.51397 val_f1_mic= 0.50244 val_f1_mac= 0.27052 time= 0.07972
Iter: 0016 train_loss= 0.50590 train_f1_mic= 0.48967 train_f1_mac= 0.24376 val_loss= 0.51397 val_f1_mic= 0.50244 val_f1_mac= 0.27052 time= 0.07939
Epoch: 0008
Iter: 0002 train_loss= 0.50953 train_f1_mic= 0.45427 train_f1_mac= 0.22354 val_loss= 0.53055 val_f1_mic= 0.48952 val_f1_mac= 0.26339 time= 0.07957
Iter: 0007 train_loss= 0.50673 train_f1_mic= 0.50676 train_f1_mac= 0.29080 val_loss= 0.53055 val_f1_mic= 0.48952 val_f1_mac= 0.26339 time= 0.07922
Iter: 0012 train_loss= 0.50169 train_f1_mic= 0.47467 train_f1_mac= 0.23575 val_loss= 0.53055 val_f1_mic= 0.48952 val_f1_mac= 0.26339 time= 0.07898
Iter: 0017 train_loss= 0.51196 train_f1_mic= 0.46920 train_f1_mac= 0.24677 val_loss= 0.53055 val_f1_mic= 0.48952 val_f1_mac= 0.26339 time= 0.07874
Epoch: 0009
Iter: 0003 train_loss= 0.51006 train_f1_mic= 0.50357 train_f1_mac= 0.27963 val_loss= 0.54308 val_f1_mic= 0.53150 val_f1_mac= 0.34686 time= 0.07902
Iter: 0008 train_loss= 0.50071 train_f1_mic= 0.49455 train_f1_mac= 0.26657 val_loss= 0.54308 val_f1_mic= 0.53150 val_f1_mac= 0.34686 time= 0.07889
Iter: 0013 train_loss= 0.50078 train_f1_mic= 0.48694 train_f1_mac= 0.27452 val_loss= 0.54308 val_f1_mic= 0.53150 val_f1_mac= 0.34686 time= 0.07866
Iter: 0018 train_loss= 0.50095 train_f1_mic= 0.50621 train_f1_mac= 0.27854 val_loss= 0.54308 val_f1_mic= 0.53150 val_f1_mac= 0.34686 time= 0.07844
Epoch: 0010
Iter: 0004 train_loss= 0.48717 train_f1_mic= 0.50655 train_f1_mac= 0.27870 val_loss= 0.52146 val_f1_mic= 0.50954 val_f1_mac= 0.28835 time= 0.07870
Iter: 0009 train_loss= 0.49033 train_f1_mic= 0.49413 train_f1_mac= 0.28028 val_loss= 0.52146 val_f1_mic= 0.50954 val_f1_mac= 0.28835 time= 0.07851
Iter: 0014 train_loss= 0.49679 train_f1_mic= 0.49359 train_f1_mac= 0.27485 val_loss= 0.52146 val_f1_mic= 0.50954 val_f1_mac= 0.28835 time= 0.07832
Optimization Finished!
Full validation stats: loss= 0.50865 f1_micro= 0.53414 f1_macro= 0.33826 time= 0.30770
Writing test set stats to file (don't peak!)

  4. Result of running after deleting --sigmoid:
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Loading training data..
Removed 0 nodes that lacked proper annotations due to networkx versioning issues
Loaded data.. now preprocessing..
Done loading training data..
WARNING:tensorflow:From graphsage/supervised_models.py:118: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See @{tf.nn.softmax_cross_entropy_with_logits_v2}.

2021-01-30 12:23:09.614691: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
Epoch: 0001
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)
/usr/local/lib/python2.7/dist-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
  'recall', 'true', average, warn_for)
Iter: 0000 train_loss= 160.41428 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 191.87144 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.44861
Iter: 0005 train_loss= 177.73340 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 191.87144 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.13390
Iter: 0010 train_loss= 168.67764 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 191.87144 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.10495
Iter: 0015 train_loss= 174.82837 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 191.87144 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.09378
Epoch: 0002
Iter: 0001 train_loss= 169.44260 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 196.31119 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.09135
Iter: 0006 train_loss= 171.03847 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 196.31119 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.08695
Iter: 0011 train_loss= 168.96021 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 196.31119 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.08446
Iter: 0016 train_loss= 164.49638 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 196.31119 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.08256
Epoch: 0003
Iter: 0002 train_loss= 171.00937 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 181.06224 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.08223
Iter: 0007 train_loss= 170.52426 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 181.06224 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.08141
Iter: 0012 train_loss= 174.40283 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 181.06224 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.08030
Iter: 0017 train_loss= 162.59254 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 181.06224 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07951
Epoch: 0004
Iter: 0003 train_loss= 170.47632 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 190.85687 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07974
Iter: 0008 train_loss= 169.11850 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 190.85687 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07906
Iter: 0013 train_loss= 166.35480 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 190.85687 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07860
Iter: 0018 train_loss= 173.13470 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 190.85687 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07808
Epoch: 0005
Iter: 0004 train_loss= 165.90140 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 191.77600 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07838
Iter: 0009 train_loss= 168.58208 train_f1_mic= 0.00195 train_f1_mac= 0.00020 val_loss= 191.77600 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07815
Iter: 0014 train_loss= 173.08069 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 191.77600 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07775
Epoch: 0006
Iter: 0000 train_loss= 168.51591 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 177.74667 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07799
Iter: 0005 train_loss= 164.98459 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 177.74667 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07759
Iter: 0010 train_loss= 166.24792 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 177.74667 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07728
Iter: 0015 train_loss= 177.47586 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 177.74667 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07701
Epoch: 0007
Iter: 0001 train_loss= 167.84467 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 186.25230 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07729
Iter: 0006 train_loss= 174.62109 train_f1_mic= 0.00195 train_f1_mac= 0.00022 val_loss= 186.25230 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07703
Iter: 0011 train_loss= 165.84830 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 186.25230 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07658
Iter: 0016 train_loss= 171.69147 train_f1_mic= 0.00195 train_f1_mac= 0.00021 val_loss= 186.25230 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07624
Epoch: 0008
Iter: 0002 train_loss= 174.47983 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 192.23184 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07636
Iter: 0007 train_loss= 177.03201 train_f1_mic= 0.00195 train_f1_mac= 0.00021 val_loss= 192.23184 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07611
Iter: 0012 train_loss= 167.97951 train_f1_mic= 0.00195 train_f1_mac= 0.00020 val_loss= 192.23184 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07587
Iter: 0017 train_loss= 172.86757 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 192.23184 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07573
Epoch: 0009
Iter: 0003 train_loss= 176.04779 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 197.77148 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07589
Iter: 0008 train_loss= 169.42477 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 197.77148 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07568
Iter: 0013 train_loss= 168.66295 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 197.77148 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07556
Iter: 0018 train_loss= 168.70827 train_f1_mic= 0.00200 train_f1_mac= 0.00023 val_loss= 197.77148 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07548
Epoch: 0010
Iter: 0004 train_loss= 165.38162 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 185.19086 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07574
Iter: 0009 train_loss= 169.55675 train_f1_mic= 0.00195 train_f1_mac= 0.00019 val_loss= 185.19086 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07558
Iter: 0014 train_loss= 168.75366 train_f1_mic= 0.00000 train_f1_mac= 0.00000 val_loss= 185.19086 val_f1_mic= 0.00000 val_f1_mac= 0.00000 time= 0.07545
Optimization Finished!
Full validation stats: loss= 184.93787 f1_micro= 0.00055 f1_macro= 0.00005 time= 0.24668
Writing test set stats to file (don't peak!)

  5. Final comparison
#with --sigmoid true
Full validation stats: loss= 0.50865 f1_micro= 0.53414 f1_macro= 0.33826 time= 0.30770
#without --sigmoid
Full validation stats: loss= 184.93787 f1_micro= 0.00055 f1_macro= 0.00005 time= 0.24668
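
Why the gap is this large: the toy-ppi example is a multi-label task (each protein node can carry several functional labels at once), so the per-label sigmoid cross-entropy enabled by --sigmoid is the appropriate loss. Without the flag, supervised_train falls back to softmax_cross_entropy_with_logits (visible in the deprecation warning of the second run), which assumes exactly one true class per node; multi-hot label rows then inflate the loss, and the reported F1 collapses to ~0. Below is a minimal sketch of the difference on a single multi-label row (TF 1.x style to match the runs above; the numbers are illustrative, not taken from GraphSAGE):

import tensorflow as tf

logits = tf.constant([[3.0, 3.0, -3.0, -3.0]])  # confident prediction of labels 0 and 1
labels = tf.constant([[1.0, 1.0, 0.0, 0.0]])    # multi-label target: two positives

# sigmoid scores each of the 4 labels independently -> small loss for a good prediction
sigmoid_loss = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
# softmax treats the row as one probability distribution, which a
# two-hot target can never satisfy -> the loss stays large
softmax_loss = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))

with tf.Session() as sess:
    print(sess.run([sigmoid_loss, softmax_loss]))  # roughly [0.19, 1.39]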
