I got hold of a supermarket's sales data and, after cleaning it up, ended up with about thirty million transaction records for one year. I wanted to try making predictions with Spark's recommendation system.
First, load the data into HDFS. Each record needs a user ID, a product ID, and a purchase count; here I treat the purchase count as the equivalent of the movie rating in a movie recommender.
The fields in HDFS are separated by ":", like this:
461365:22535:1.0
461365:5059:1.0
461365:5420:4.0
461366:1987:4.0
461366:31911:1.0
Start spark-shell.
Import the required MLlib classes and turn down the logging:
import org.apache.spark.mllib.recommendation.{ALS, Rating, MatrixFactorizationModel}
import org.apache.spark.sql.hive.HiveContext
import org.apache.log4j.{Logger,Level}
import org.apache.spark.mllib.evaluation.{RankingMetrics, RegressionMetrics}
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
val data = sc.textFile("/input/rate")
val ratings = data.map(_.split(':') match { case Array(user, item, rate) => Rating(user.toInt, item.toInt, rate.toDouble)})
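Since ratings is reused several times below (for the counts, the split, and training), caching it avoids re-reading and re-parsing the file from HDFS:
// cache the parsed ratings; they are reused repeatedly below
ratings.cache()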
scala> val users = ratings.map(_.user).distinct()
users: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1356] at distinct at <console>:35
scala> val products = ratings.map(_.product).distinct()
products: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1360] at distinct at <console>:35
scala> println("Got "+ratings.count()+" ratings from "+users.count+" users on "+products.count+" products.")
Got 30299054 ratings from 354172 users on 45786 products.
Split the data into training and test sets; I use an 80/20 split here:
val splits = ratings.randomSplit(Array(0.8, 0.2))
val training = splits(0)
val test = splits(1)
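randomSplit is nondeterministic by default, so the RMSE below will vary slightly between runs; if you want a reproducible split, it also accepts an explicit seed (42L here is arbitrary):
// same split on every run
val splits = ratings.randomSplit(Array(0.8, 0.2), seed = 42L)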
Train the model, with the following parameters:
rank: the number of latent factors in the ALS model, i.e., the inner dimension of the two matrices produced by the factorization.
numIterations: the maximum number of iterations to run.
The last argument, 0.01, is the regularization parameter (lambda), which controls the model's regularization and hence its tendency to overfit.
val rank = 30
val numIterations = 12
val model = ALS.train(training, rank, numIterations, 0.01)
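One caveat: purchase counts are implicit feedback rather than true ratings, so MLlib's implicit-feedback variant of ALS may suit this data better than the explicit variant used above. A sketch, with lambda = 0.01 as before and an untuned confidence weight alpha = 1.0:
// implicit-feedback ALS: treats the count as a confidence signal, not a rating
val implicitModel = ALS.trainImplicit(training, rank, numIterations, 0.01, 1.0)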
Then join the predicted scores from the trained model with the original held-out scores and compute the RMSE. Note that model.predict only returns predictions for users and products it saw during training, so any cold-start pairs simply drop out of the join:
val testUsersProducts = test.map { case Rating(user, product, rate) =>
  (user, product)
}
val predictions = model.predict(testUsersProducts).map { case Rating(user, product, rate) =>
  ((user, product), rate)
}
// join the held-out test ratings with the predictions to compare actual vs. predicted
val ratesAndPreds = test.map { case Rating(user, product, rate) =>
  ((user, product), rate)
}.join(predictions)
val rmse = math.sqrt(ratesAndPreds.map { case ((user, product), (r1, r2)) =>
  val err = r1 - r2
  err * err
}.mean())
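The RegressionMetrics class imported earlier computes the same value, and the trained model can then serve actual recommendations. A minimal sketch; 461365 is just the sample user from the data above:
// RegressionMetrics expects (prediction, observation) pairs
val metrics = new RegressionMetrics(ratesAndPreds.map { case (_, (actual, predicted)) => (predicted, actual) })
println("Test RMSE = " + metrics.rootMeanSquaredError)
// top-10 product recommendations for one user
model.recommendProducts(461365, 10).foreach(println)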