[Hadoop Learning] (3) Installing Mahout


1 Installing Mahout
References: http://zhengyongkun.blog.51cto.com/1163218/1420935
 http://www.cnblogs.com/guarder/p/3704981.html
<1> Download the latest release, version 0.9
http://archive.apache.org/dist/mahout/0.9/
<2> Unpack
Run tar -zxvf mahout-distribution-0.9.tar.gz
Then move it into place: mv mahout-distribution-0.9 /usr/lib/mahout
<3> Configure environment variables
Edit /etc/profile (for example with gedit /etc/profile) and set the relevant entries to:
export JAVA_HOME=/usr/lib/jvm
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar 
export JRE_HOME=$JAVA_HOME/jre 
export HADOOP_HOME=/usr/lib/hadoop/
export HADOOP_CONF_DIR=/usr/lib/hadoop/conf
export HADOOP_CLASSPATH=/usr/lib/hadoop/bin
export MAHOUT_HOME=/usr/lib/mahout
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
export PATH=$PATH:$JAVA_HOME/bin:$MAHOUT_HOME/bin:$MAHOUT_HOME/conf
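Changes to /etc/profile only take effect in new login shells, so it is worth confirming the variables in the current shell before moving on. The sketch below exports MAHOUT_HOME directly so the check is self-contained; in practice, running `source /etc/profile` after step <3> has the same effect.

```shell
# Self-contained sketch of the post-edit check; in practice run
# `source /etc/profile` first so the exports from step <3> apply.
export MAHOUT_HOME=/usr/lib/mahout
export PATH="$PATH:$MAHOUT_HOME/bin"
echo "MAHOUT_HOME=$MAHOUT_HOME"   # prints MAHOUT_HOME=/usr/lib/mahout
```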
<4> Verify
mahout -help
This prints the roughly 50 programs Mahout 0.9 supports:
root@hadoop:/usr/lib/mahout/bin# mahout -help
Warning: $HADOOP_HOME is deprecated.


Running on hadoop, using /usr/lib/hadoop//bin/hadoop and HADOOP_CONF_DIR=
MAHOUT-JOB: /usr/lib/mahout/mahout-examples-0.9-job.jar
Warning: $HADOOP_HOME is deprecated.


14/09/09 08:16:43 WARN driver.MahoutDriver: Unable to add class: -help
14/09/09 08:16:43 WARN driver.MahoutDriver: No -help.props found on classpath, will use command-line arguments only
Unknown program '-help' chosen.
Valid program names are:
 arff.vector: : Generate Vectors from an ARFF file or directory
 baumwelch: : Baum-Welch algorithm for unsupervised HMM training
 canopy: : Canopy clustering
 cat: : Print a file or resource as the logistic regression models would see it
 cleansvd: : Cleanup and verification of SVD output
 clusterdump: : Dump cluster output to text
 clusterpp: : Groups Clustering Output In Clusters
 cmdump: : Dump confusion matrix in HTML or text formats
 concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
 cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
 cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
 evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
 fkmeans: : Fuzzy K-means clustering
 hmmpredict: : Generate random sequence of observations by given HMM
 itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
 kmeans: : K-means clustering
 lucene.vector: : Generate Vectors from a Lucene index
 lucene2seq: : Generate Text SequenceFiles from a Lucene index
 matrixdump: : Dump matrix in CSV format
 matrixmult: : Take the product of two matrices
 parallelALS: : ALS-WR factorization of a rating matrix
 qualcluster: : Runs clustering experiments and summarizes results in a CSV
 recommendfactorized: : Compute recommendations using the factorization of a rating matrix
 recommenditembased: : Compute recommendations using item-based collaborative filtering
 regexconverter: : Convert text files on a per line basis based on regular expressions
 resplit: : Splits a set of SequenceFiles into a number of equal splits
 rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
 rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
 runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
 runlogistic: : Run a logistic regression model against CSV data
 seq2encoded: : Encoded Sparse Vector generation from Text sequence files
 seq2sparse: : Sparse Vector generation from Text sequence files
 seqdirectory: : Generate sequence files (of Text) from a directory
 seqdumper: : Generic Sequence File dumper
 seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
 seqwiki: : Wikipedia xml dump to sequence file
 spectralkmeans: : Spectral k-means clustering
 split: : Split Input data into test and train sets
 splitDataset: : split a rating dataset into training and probe parts
 ssvd: : Stochastic SVD
 streamingkmeans: : Streaming k-means clustering
 svd: : Lanczos Singular Value Decomposition
 testnb: : Test the Vector-based Bayes classifier
 trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
 trainlogistic: : Train a logistic regression using stochastic gradient descent
 trainnb: : Train the Vector-based Bayes classifier
 transpose: : Take the transpose of a matrix
 validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
 vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
 vectordump: : Dump vectors from a sequence file to text
 viterbi: : Viterbi decoding of hidden states from given output states sequence
2 Running a Test
    Reference: http://zhengyongkun.blog.51cto.com/1163218/1420935 (author: hijiangtao)
<1> Download the test data:
http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
<2> Create a test directory, testdata, on HDFS
hadoop fs -mkdir testdata
<3> Import the data
hadoop fs -put /usr/lib/mahout/synthetic_control.data testdata
<4> Run the k-means algorithm
hadoop jar /usr/lib/mahout/mahout-examples-0.9-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
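Invoked with no arguments as above, the example Job falls back to its defaults (input from testdata/, output to output/, built-in clustering parameters). It can also take explicit options; the flag names below are assumptions based on Mahout 0.9's common job options, so the sketch echoes the command for review rather than executing it.

```shell
# Hypothetical parameterized form of the same job; --input/--output/--maxIter
# are assumed flag names, echoed (not run) so they can be checked first.
JAR=/usr/lib/mahout/mahout-examples-0.9-job.jar
CLASS=org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
echo hadoop jar "$JAR" "$CLASS" --input testdata --output output --maxIter 10
```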
<5> View the results
hadoop fs -lsr output
This lists:
clusteredPoints clusters-0 clusters-1 clusters-10 clusters-2 clusters-3 clusters-4 clusters-5 clusters-6 clusters-7 clusters-8 clusters-9 data
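Each clusters-N directory holds the cluster centers after iteration N, clusteredPoints holds the final point-to-cluster assignments, and data holds the vectorized input; the run above stopped after 10 iterations. The listing is sorted lexicographically, which is why clusters-10 appears before clusters-2; a numeric sort on the part after the dash restores iteration order, as this small local simulation with a few of the names above shows:

```shell
# HDFS lists names lexicographically; sort numerically on the field after
# the dash to see the true iteration order (simulated locally with a few
# of the directory names from the listing above).
printf '%s\n' clusters-0 clusters-1 clusters-10 clusters-2 | sort -t- -k2 -n
# prints:
# clusters-0
# clusters-1
# clusters-2
# clusters-10
```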

