tf axis = 1

I can never keep straight what reducing along an axis produces, so here is an example:

import tensorflow as tf


class EmbeddingTable(object):
    def __init__(self, ini):
        # initializer may be a constant value; shape and dtype are inferred from it
        self.embedding_table = tf.get_variable(name="embedding",
                                               initializer=ini, trainable=True)

    def get_shape(self):
        return self.embedding_table.get_shape()

    def embed_words(self, words):
        """Look up embeddings for a batch of words.

        :param words: padded word ids, shape (batch, seq_len)
        :return: embedded words, shape (batch, seq_len, emb_dim)
        """
        emb = tf.nn.embedding_lookup(self.embedding_table, words)
        return emb


# float values so the (trainable) embedding table is a float32 variable
ini = [[1., 0., 0.], [0., 1., 1.], [0., 0., 1.]]
embed_table = EmbeddingTable(ini)

word = tf.placeholder(dtype=tf.int32, shape=(None, None))  # (batch, seq_len) word ids
mm = tf.placeholder(dtype=tf.int32, shape=(None, None))    # (batch, seq_len) 0/1 mask
x_op = embed_table.embed_words(word)

xx = [[1, 2],
      [0, 1]]

mask = [[1, 0],
        [0, 1]]

xx_sum_axis1 = tf.reduce_sum(xx, axis=1)  # reduce over columns -> row sums
xx_sum_axis0 = tf.reduce_sum(xx, axis=0)  # reduce over rows -> column sums

# broadcast the mask over the embedding dimension, then reduce;
# the tf.cast is needed because TensorFlow will not mix int32 and float32
multi_op = x_op * tf.cast(mm, tf.float32)[:, :, None]
n = tf.reduce_sum(multi_op, axis=0)

ini_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(ini_op)
    # m = sess.run(x_op, feed_dict={word: xx})
    # x_, bb, x_m = sess.run([x_op, multi_op, n], feed_dict={mm: mask, word: xx})

    # sess.run must happen inside this `with` block: the session is
    # closed as soon as the block exits
    print(sess.run(xx_sum_axis1))  # [3 1]
    print(sess.run(xx_sum_axis0))  # [1 3]
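A rule of thumb that makes these results predictable: the axis you pass to tf.reduce_sum is the axis that disappears from the output shape. Below is a minimal sketch; the hard-coded tensor is what multi_op should evaluate to for the xx/mask inputs above (copied in so the snippet runs standalone):

import tensorflow as tf

# multi_op for the inputs above: shape (2, 2, 3) = (batch, words, emb_dim)
t = tf.constant([[[0., 1., 1.], [0., 0., 0.]],
                 [[0., 0., 0.], [0., 1., 1.]]])

sum0 = tf.reduce_sum(t, axis=0)  # shape (2, 3): the batch axis disappears
sum1 = tf.reduce_sum(t, axis=1)  # shape (2, 3): the word axis disappears
sum2 = tf.reduce_sum(t, axis=2)  # shape (2, 2): the embedding axis disappears

with tf.Session() as sess:
    print(sess.run(sum1))
    # [[0. 1. 1.]
    #  [0. 1. 1.]]  -- one pooled vector per sentence (only unmasked words count)

With this rule, axis=1 gives the usual masked per-sentence pooling, while axis=0 sums the same positions across the batch.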

little details

  1. There is no tensor.T shorthand for taking a matrix transpose; use
    tf.transpose(tensor)
  2. Rules for arithmetic between Tensors (see the sketch below)
    Any arithmetic between Tensors of the same shape is applied element-wise.
    Arithmetic between Tensors of different shapes is called broadcasting: shapes are aligned from the trailing dimensions, and each aligned pair must either match or contain a 1.
    Arithmetic between a Tensor and a scalar (0-d tensor) broadcasts the scalar value to every element.
    Note: TensorFlow requires all operands of a math op to share the same dtype.

For more on math operations between tensors, see:
https://blog.csdn.net/zywvvd/article/details/78593618
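A minimal sketch illustrating points 1 and 2, assuming TF 1.x; the values are made up for illustration:

import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])      # shape (2, 2)
b = tf.constant([[10., 20.], [30., 40.]])  # shape (2, 2)
row = tf.constant([1., 2.])                # shape (2,)

elementwise = a + b           # same shape -> element-wise: [[11. 22.] [33. 44.]]
broadcast = a * row           # (2, 2) * (2,): trailing dims align -> [[1. 4.] [3. 8.]]
scalar = a * 10.0             # the scalar is broadcast to every element
transposed = tf.transpose(a)  # there is no a.T shorthand in TF 1.x

# mixed dtypes raise an error; cast explicitly instead:
i = tf.constant([[1, 2], [3, 4]])  # int32
mixed_ok = a + tf.cast(i, tf.float32)

with tf.Session() as sess:
    print(sess.run([elementwise, broadcast, scalar, transposed, mixed_ok]))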

sentence embedding:
A Simple Language Model based Evaluator for Sentence Compression.
Exploring Semantic Properties of Sentence Embeddings.
Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation. Tiancheng Zhao, Kyusong Lee and Maxine Eskenazi.
Sentence-State LSTM for Text Representation. Yue Zhang, Qi Liu and Linfeng Song.
Subword-level Word Vector Representations for Korean. Sungjoon Park, Jeongmin Byun, Sion Baek, Yongseok Cho and Alice Oh.
hyperdoc2vec: Distributed Representations of Hypertext Documents. Jialong Han, Yan Song, Wayne Xin Zhao, Shuming Shi and Haisong Zhang.

