Usage of Spark Tokenization

Notice: All rights reserved. To reprint, please contact the author and credit the source: http://blog.csdn.net/u013719780?viewmode=contents


About the author: 风雪夜归子 (Allen), a machine learning engineer who enjoys digging into Machine Learning techniques, is keenly interested in Deep Learning and Artificial Intelligence, and regularly follows the Kaggle data mining competition platform. Readers interested in data, Machine Learning, and Artificial Intelligence are welcome to get in touch. Personal CSDN blog: http://blog.csdn.net/u013719780?viewmode=contents





Tokenization is the process of splitting text (for example, a sentence) into individual words (tokens). In Spark ML this is done by the Tokenizer transformer, which splits on whitespace and lowercases each token.

RegexTokenizer splits text based on a regular expression; its default delimiter pattern is '\s+' (one or more whitespace characters).

A concrete example:



from pyspark.ml.feature import Tokenizer, RegexTokenizer

# Build a small DataFrame of labelled sentences (sqlContext is provided by the pyspark shell).
sentenceDataFrame = sqlContext.createDataFrame([
    (0, "Hi I heard about Spark"),
    (1, "I wish Java could use case classes"),
    (2, "Logistic,regression,models,are,neat")
], ["label", "sentence"])

# Tokenizer splits on whitespace and lowercases each token.
tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
wordsDataFrame = tokenizer.transform(sentenceDataFrame)
wordsDataFrame.select("words", "label").show(5, False)

# RegexTokenizer splits on the given pattern; "\\W" matches any non-word character.
regexTokenizer = RegexTokenizer(inputCol="sentence", outputCol="words", pattern="\\W")
# alternatively, pattern="\\w+", gaps=False
regexTokenizer.transform(sentenceDataFrame).show(5, False)
+------------------------------------------+-----+
|words                                     |label|
+------------------------------------------+-----+
|[hi, i, heard, about, spark]              |0    |
|[i, wish, java, could, use, case, classes]|1    |
|[logistic,regression,models,are,neat]     |2    |
+------------------------------------------+-----+

+-----+-----------------------------------+------------------------------------------+
|label|sentence                           |words                                     |
+-----+-----------------------------------+------------------------------------------+
|0    |Hi I heard about Spark             |[hi, i, heard, about, spark]              |
|1    |I wish Java could use case classes |[i, wish, java, could, use, case, classes]|
|2    |Logistic,regression,models,are,neat|[logistic, regression, models, are, neat] |
+-----+-----------------------------------+------------------------------------------+
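
The comment in the code above mentions an alternative configuration: by default (gaps=True) the pattern describes the separators, but with gaps=False RegexTokenizer treats the pattern as the tokens themselves. Below is a minimal sketch of that variant, reusing the same sentenceDataFrame; the output column name "tokens" is chosen here only to avoid clashing with the earlier "words" column.

# Sketch: with gaps=False the pattern matches the tokens, not the separators,
# so pattern="\\w+" produces the same result as pattern="\\W" with gaps=True.
regexTokenizer2 = RegexTokenizer(inputCol="sentence", outputCol="tokens",
                                 pattern="\\w+", gaps=False)
regexTokenizer2.transform(sentenceDataFrame).select("tokens", "label").show(5, False)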




