Spark ML: Data Modeling and Combined Regression/Clustering Analysis on the Iris Dataset - Spark ML in Commercial Practice

This technical column is a summary and distillation of the author's (Qin Kaixin) day-to-day work. The cases are drawn from real commercial environments, together with tuning advice for commercial applications and cluster capacity planning; please keep following this column. Copyright notice: reposting is prohibited, studying is welcome. QQ email: [email protected]; feel free to get in touch for any business exchange.

1 The Iris Dataset (Getting Started)

  • The Iris dataset is a classic classification benchmark, collected and organized by Fisher in 1936. Also known as the iris flower dataset, it is a multivariate dataset.

  • The dataset contains 150 samples in 3 classes, 50 per class, and each sample has 4 attributes. The data describe iris flower measurements and are commonly used for classification. The 150 samples come from 3 different species of iris; one species is linearly separable from the other two, while those two are not linearly separable from each other.

  • The following 4 attributes are generally used to predict which of the three species (Setosa, Versicolour, Virginica) an iris belongs to. The four attributes:

      Sepal.Length (sepal length), in cm;
      Sepal.Width (sepal width), in cm;
      Petal.Length (petal length), in cm;
      Petal.Width (petal width), in cm;
    
  • The three species:

      Iris Setosa
      Iris Versicolour
      Iris Virginica
    

2 Dataset Preview

  Sepal.Length	  Sepal.Width	  Petal.Length	   Petal.Width	          Species
    5.1	            3.5	            1.4	                0.2	             Iris-setosa
    4.9	            3	            1.4	                0.2	             Iris-setosa
    4.7	            3.2	            1.3	                0.2	             Iris-setosa
    4.6	            3.1	            1.5	                0.2	             Iris-setosa
    5	            3.6	            1.4	                0.2	             Iris-setosa
    5.4	            3.9	            1.7	                0.4	             Iris-setosa
    4.6	            3.4	            1.4	                0.3	             Iris-setosa

3 Data Preprocessing and Analysis

3.1 Preprocessing via a CSV file

1 Reading the data
val df = spark.read.format("csv")
  .option("sep", ",")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("/data/iris.csv")
df.show()
+------------+-----------+------------+-----------+-----------+
|Sepal.Length|Sepal.Width|Petal.Length|Petal.Width|    Species|
+------------+-----------+------------+-----------+-----------+
|         5.1|        3.5|         1.4|        0.2|Iris-setosa|
|         4.9|        3.0|         1.4|        0.2|Iris-setosa|
|         4.7|        3.2|         1.3|        0.2|Iris-setosa|
|         4.6|        3.1|         1.5|        0.2|Iris-setosa|
|         5.0|        3.6|         1.4|        0.2|Iris-setosa|
|         5.4|        3.9|         1.7|        0.4|Iris-setosa|
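
Note that the CSV header uses dots in the column names (Sepal.Length, etc.). In Spark SQL, dotted column names have to be escaped with backticks, so it is often convenient to rename them to the underscore form used in section 3.2 and in the regression code below. A minimal sketch, assuming the df loaded above:

// Rename dotted columns (Sepal.Length -> Sepal_Length, ...) to avoid backtick escaping
val dfRenamed = df.columns.foldLeft(df)((d, c) => d.withColumnRenamed(c, c.replace(".", "_")))
dfRenamed.printSchema()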

2 Converting the species label into a numeric index
import org.apache.spark.ml.feature.StringIndexer
val indexer = new StringIndexer().setInputCol("Species").setOutputCol("categoryIndex")
val model = indexer.fit(df)
val indexed = model.transform(df)
indexed.show()

+------------+-----------+------------+-----------+-----------+-------------+
|Sepal.Length|Sepal.Width|Petal.Length|Petal.Width|    Species|categoryIndex|
+------------+-----------+------------+-----------+-----------+-------------+
|         5.1|        3.5|         1.4|        0.2|Iris-setosa|          0.0|
|         4.9|        3.0|         1.4|        0.2|Iris-setosa|          0.0|
|         4.7|        3.2|         1.3|        0.2|Iris-setosa|          0.0|
|         4.6|        3.1|         1.5|        0.2|Iris-setosa|          0.0|
|         5.0|        3.6|         1.4|        0.2|Iris-setosa|          0.0|
|         5.4|        3.9|         1.7|        0.4|Iris-setosa|          0.0|
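
By default StringIndexer assigns indices by descending label frequency, with 0.0 for the most frequent label. If the numeric index ever needs to be mapped back to the original species string (for example after prediction), IndexToString can reuse the labels learned by the fitted model. A minimal sketch based on the model and indexed values above:

import org.apache.spark.ml.feature.IndexToString
// Map categoryIndex back to the original Species string using the fitted labels
val converter = new IndexToString().setInputCol("categoryIndex").setOutputCol("originalSpecies").setLabels(model.labels)
converter.transform(indexed).select("Species", "categoryIndex", "originalSpecies").show(5)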

3.2 Preprocessing via a txt file

1 Reading the data
case class Iris(Sepal_Length: Double, Sepal_Width: Double, Petal_Length: Double, Petal_Width: Double, Species: String)
val data = sc.textFile("/data/iris.txt")
val header = data.first
val df2 = data.filter(_ != header)
  .map(_.split("\t"))
  .map(l => Iris(l(0).toDouble, l(1).toDouble, l(2).toDouble, l(3).toDouble, l(4).toString))
  .toDF
df2.show()

+------------+-----------+------------+-----------+-----------+
|Sepal_Length|Sepal_Width|Petal_Length|Petal_Width|    Species|
+------------+-----------+------------+-----------+-----------+
|         5.1|        3.5|         1.4|        0.2|Iris-setosa|
|         4.9|        3.0|         1.4|        0.2|Iris-setosa|
|         4.7|        3.2|         1.3|        0.2|Iris-setosa|
|         4.6|        3.1|         1.5|        0.2|Iris-setosa|
|         5.0|        3.6|         1.4|        0.2|Iris-setosa|
|         5.4|        3.9|         1.7|        0.4|Iris-setosa|
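
For reference, the same tab-separated file can also be loaded directly with the DataFrame reader instead of going through an RDD and a case class. A minimal sketch, assuming the file's header row carries the underscore column names used above:

// DataFrame-reader alternative to the RDD + case class approach
val df2Alt = spark.read
  .option("sep", "\t")
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/data/iris.txt")
df2Alt.show(5)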

2 Converting the species label into a numeric index
import org.apache.spark.ml.feature.StringIndexer
val indexer = new StringIndexer().setInputCol("Species").setOutputCol("categoryIndex")
val model = indexer.fit(df2)
// Keep only two of the three classes (categoryIndex 0.0 and 1.0) so that the binary
// logistic regression models in section 4 can be applied.
// Equivalent filter: model.transform(df2).filter(!$"Species".equalTo("Iris-virginica"))
val indexed = model.transform(df2).filter("categoryIndex < 2.0")
indexed.show()

+------------+-----------+------------+-----------+-----------+-------------+
|Sepal_Length|Sepal_Width|Petal_Length|Petal_Width|    Species|categoryIndex|
+------------+-----------+------------+-----------+-----------+-------------+
|         5.1|        3.5|         1.4|        0.2|Iris-setosa|          0.0|
|         4.9|        3.0|         1.4|        0.2|Iris-setosa|          0.0|
|         4.7|        3.2|         1.3|        0.2|Iris-setosa|          0.0|
|         4.6|        3.1|         1.5|        0.2|Iris-setosa|          0.0|
|         5.0|        3.6|         1.4|        0.2|Iris-setosa|          0.0|

4 Regression Analysis on the Iris Dataset

1 Getting the indices of the feature columns and the label column
val features = List("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width").map(indexed.columns.indexOf(_))
features: List[Int] = List(0, 1, 2, 3)
 
val targetInd = indexed.columns.indexOf("categoryIndex") 
targetInd: Int = 5 

2 Converting the features into vectors
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.regression.LabeledPoint
val labeledPointIris = indexed.rdd.map(r =>
  LabeledPoint(r.getDouble(targetInd), Vectors.dense(features.map(r.getDouble(_)).toArray)))

scala> labeledPointIris.take(10).foreach(println)
(0.0,[5.1,3.5,1.4,0.2])
(0.0,[4.9,3.0,1.4,0.2])
(0.0,[4.7,3.2,1.3,0.2])
(0.0,[4.6,3.1,1.5,0.2])
(0.0,[5.0,3.6,1.4,0.2])
(0.0,[5.4,3.9,1.7,0.4])
(0.0,[4.6,3.4,1.4,0.3])
(0.0,[5.0,3.4,1.5,0.2])
(0.0,[4.4,2.9,1.4,0.2])
(0.0,[4.9,3.1,1.5,0.1])

scala> println(labeledPointIris.first.features)
[5.1,3.5,1.4,0.2]
scala> println(labeledPointIris.first.label)
0.0
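
For reference, the same feature preparation can also be done on the DataFrame itself with the spark.ml VectorAssembler; the RDD-based LabeledPoint route above is kept because the mllib classifiers below consume RDDs. A minimal sketch, assuming the indexed DataFrame from section 3.2:

import org.apache.spark.ml.feature.VectorAssembler
// Assemble the four measurement columns into a single feature vector column
val assembler = new VectorAssembler()
  .setInputCols(Array("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width"))
  .setOutputCol("features")
val assembled = assembler.transform(indexed).select("features", "categoryIndex")
assembled.show(5, false)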

3 Splitting into training and test sets
val splits = labeledPointIris.randomSplit(Array(0.8, 0.2), seed = 11L)
val trainingData = splits(0).cache
val testData = splits(1).cache


4 Prediction with logistic regression (1): LogisticRegressionWithSGD

import org.apache.spark.mllib.classification.{LogisticRegressionWithSGD,LogisticRegressionWithLBFGS}
import org.apache.spark.mllib.classification.LogisticRegressionModel
import org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm

 val lr = new LogisticRegressionWithSGD().setIntercept(true)
 lr.optimizer.setStepSize(10.0).setRegParam(0.0).setNumIterations(20).setConvergenceTol(0.0005)
  
 scala> val model = lr.run(trainingData)
 model: org.apache.spark.mllib.classification.LogisticRegressionModel =  org.apache.spark.mllib.classification.LogisticRegressionModel: intercept = -0.24895905804746296, numFeatures = 4, numClasses = 2, threshold = 0.5  
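
To get a quick sense of how this SGD-trained model does on the held-out data, its predictions can be compared with the true labels on testData. A minimal sketch:

// Predict on the test set and compute a simple accuracy
val sgdPredsAndLabels = testData.map(point => (model.predict(point.features), point.label))
val sgdAccuracy = sgdPredsAndLabels.filter { case (p, l) => p == l }.count.toDouble / testData.count
println(s"Test accuracy = $sgdAccuracy")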


5 Prediction with logistic regression (2): LogisticRegressionWithLBFGS
 val numClasses = 2
 val model = new LogisticRegressionWithLBFGS().setNumClasses(numClasses).run(trainingData)

 val labelAndPreds = testData.map { point =>
   val prediction = model.predict(point.features)
   (point.label, prediction)
 }
 
 model: org.apache.spark.mllib.classification.LogisticRegressionModel =
 org.apache.spark.mllib.classification.LogisticRegressionModel: intercept = 0.0, numFeatures = 4, numClasses = 2, threshold = 0.5
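
Using the labelAndPreds pairs computed above, the test accuracy (and, since this is a two-class problem, the area under the ROC curve) can be checked in the same way. A minimal sketch:

import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
// Simple accuracy on the test set
val lbfgsAccuracy = labelAndPreds.filter { case (label, pred) => label == pred }.count.toDouble / testData.count
println(s"Test accuracy = $lbfgsAccuracy")
// BinaryClassificationMetrics expects (score, label) pairs
val metrics = new BinaryClassificationMetrics(labelAndPreds.map { case (label, pred) => (pred, label) })
println(s"Area under ROC = ${metrics.areaUnderROC()}")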

5 Conclusion

To be continued; please keep following this article. The combined analysis still needs to be fully worked out, but for lack of time this is where it stops for now.

Qin Kaixin, Shenzhen, 2018-11-18 22:23
