Mahout Source Code: Canopy Algorithm Analysis, Part 2 — CanopyMapper

First, a correction. The Canopy test example in the previous post contained this call:

buildClusters(Configuration conf, Path input, Path output,
      DistanceMeasure measure, double t1, double t2, double t3, double t4,
      int clusterFilter, boolean runSequential)
I previously took clusterFilter to be the number of clusters, but it is actually a threshold: a canopy is kept only if it contains more than clusterFilter samples (i.e. at least clusterFilter + 1). It cannot be the number of clusters, because discovering the number of clusters is exactly what Canopy is for.
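The check behind this is the strict inequality used in the mapper's cleanup (`getNumObservations() > clusterFilter`), so a canopy needs at least clusterFilter + 1 samples to survive. A minimal sketch of that check (the class and method names here are mine, not Mahout's):

```java
public class ClusterFilterDemo {
    // Mirrors the strict "> clusterFilter" test from CanopyMapper's cleanup:
    // a canopy survives only with at least clusterFilter + 1 observations.
    public static boolean passesFilter(long numObservations, int clusterFilter) {
        return numObservations > clusterFilter;
    }

    public static void main(String[] args) {
        // With clusterFilter = 1, one sample is not enough, two are.
        System.out.println(passesFilter(1, 1)); // false
        System.out.println(passesFilter(2, 1)); // true
    }
}
```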

Now on to CanopyMapper:

First, CanopyMapper is rewritten as plain Java code that runs without Hadoop:

package mahout.test.canopy.debug;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.canopy.Canopy;
import org.apache.mahout.clustering.canopy.CanopyClusterer;
import org.apache.mahout.clustering.canopy.CanopyConfigKeys;
import org.apache.mahout.common.distance.DistanceMeasure;
import org.apache.mahout.common.distance.ManhattanDistanceMeasure;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;
import com.google.common.collect.Lists;
public class CanopyDebug {
	/**
	 * Canopy test program
	 * @author fansy
	 * time 2013/7/21 22:59
	 * edited at 2013/7/22 21:38
	 */
	private static final Collection<Canopy> canopies = Lists.newArrayList();
	private static CanopyClusterer canopyClusterer;
	private static int clusterFilter;
	private static int num=0;
	private static Map<Text,VectorWritable> center=new HashMap<Text,VectorWritable>();
	public static void main(String[] args) {
		// initialization
		DistanceMeasure measure = new ManhattanDistanceMeasure();
		double t1 = 30.2;
		double t2 = 5.3;
		int clusterFilter = 1;
		Configuration conf = init(measure, t1, t2, clusterFilter);
		// setup() phase
		setup(conf);
		// map() phase
		map();
		// cleanup() phase
		Map<Text, VectorWritable> result = cleanup();
		System.out.println("done..." + result);
	}
	/**
	 * Initialization
	 * @param measure
	 * @param t1
	 * @param t2
	 * @param clusterFilter minimum sample count each canopy must exceed
	 * @return
	 */
	public static Configuration init(DistanceMeasure measure, double t1, double t2, int clusterFilter) {
		Configuration conf = new Configuration();
		double t3 = t1;
		double t4 = t2;
		conf.set(CanopyConfigKeys.DISTANCE_MEASURE_KEY, measure.getClass()
		        .getName());
		conf.set(CanopyConfigKeys.T1_KEY, String.valueOf(t1));
		conf.set(CanopyConfigKeys.T2_KEY, String.valueOf(t2));
		conf.set(CanopyConfigKeys.T3_KEY, String.valueOf(t3));
		conf.set(CanopyConfigKeys.T4_KEY, String.valueOf(t4));
		conf.set(CanopyConfigKeys.CF_KEY, String.valueOf(clusterFilter));
		return conf;
	}
	/**
	 * Mimics the Mapper's setup() function
	 * @param conf the initial parameters
	 */
	public static void setup(Configuration conf) {
		canopyClusterer = new CanopyClusterer(conf);
		clusterFilter = Integer.parseInt(conf.get(
		    CanopyConfigKeys.CF_KEY));
	}
	/**
	 * Produces the input data: <key,value> --> <null, VectorWritable>
	 * @return
	 */
	public static List<VectorWritable> makeInData(){
		List<VectorWritable> list=new ArrayList<VectorWritable>();
		VectorWritable vw=null;
		Vector vector=null;
		for(int i=0;i<10;i++){
			vw=new VectorWritable();
			vector=new RandomAccessSparseVector(3);
			vector.set(0, i%3*(i%3)*(i%3)*(i%3)+Math.random());
			vector.set(1, i%3*(i%3)*(i%3)*(i%3)+Math.random());
			vector.set(2, i%3*(i%3)*(i%3)*(i%3)+Math.random());
			vw.set(vector);
			list.add(vw);
		}
		return list;
	}
	/**
	 * Mimics the Mapper's map() function
	 */
	public static void map(){
		List<VectorWritable> vwList=makeInData2();
		for(VectorWritable point: vwList){
			canopyClusterer.addPointToCanopies(point.get(), canopies);
		}
	}
	/**
	 * Mimics the Mapper's cleanup() function
	 * @return
	 */
	public static Map<Text,VectorWritable> cleanup(){
		for (Canopy canopy : canopies) {
		      canopy.computeParameters();
		      if (canopy.getNumObservations() > clusterFilter) {
		        center.put(new Text("centroid"+num++), new VectorWritable(canopy
		            .getCenter()));
		      }
		    }
		return center;
	}
	/**
	 * Second way to produce the input data, with fixed values for easier debugging
	 * @return
	 */
	public static List<VectorWritable> makeInData2(){
		List<VectorWritable> list=new ArrayList<VectorWritable>();
		VectorWritable vw=null;
		Vector vector=null;
		for(int i=0;i<10;i++){
			vw=new VectorWritable();
			vector=new RandomAccessSparseVector(3);
			vector.set(0, i%3*(i%3)*(i%3)*(i%3)+1);
			vector.set(1, i%3*(i%3)*(i%3)*(i%3)+1);
			vector.set(2, i%3*(i%3)*(i%3)*(i%3)+1);
			vw.set(vector);
			list.add(vw);
		}
		return list;
	}
}
With the code above you can debug directly and inspect the result of every step, which makes the algorithm much easier to understand. The walkthrough below uses a concrete example.

Input data (in VectorWritable form):

[1.0,1.0,1.0]
[2.0,2.0,2.0]
[17.0,17.0,17.0]
[1.0,1.0,1.0]
[2.0,2.0,2.0]
[17.0,17.0,17.0]
[1.0,1.0,1.0]
[2.0,2.0,2.0]
[17.0,17.0,17.0]
[1.0,1.0,1.0]
The data above, already converted to VectorWritable, is produced by the makeInData2() function. Execution then enters the map function, whose for loop imitates the map() of CanopyMapper. The function it calls is addPointToCanopies; opening the CanopyClusterer class shows its main body:

public void addPointToCanopies(Vector point, Collection<Canopy> canopies) {
    boolean pointStronglyBound = false;
    for (Canopy canopy : canopies) {
      double dist = measure.distance(canopy.getCenter().getLengthSquared(), canopy.getCenter(), point);
      if (dist < t1) {
        if (log.isDebugEnabled()) {
          log.debug("Added point: {} to canopy: {}", AbstractCluster.formatVector(point, null), canopy.getIdentifier());
        }
        canopy.observe(point);
      }
      pointStronglyBound = pointStronglyBound || dist < t2;
    }
    if (!pointStronglyBound) {
      if (log.isDebugEnabled()) {
        log.debug("Created new Canopy:{} at center:{}", nextCanopyId, AbstractCluster.formatVector(point, null));
      }
      canopies.add(new Canopy(point, nextCanopyId++, measure));
    }
  }
For the first sample, [1.0,1.0,1.0], canopies is empty, so canopies.add(new Canopy(point, nextCanopyId++, measure)) executes directly, producing the first canopy. A look at the Canopy class: its parent is DistanceMeasureCluster, whose parent in turn is AbstractCluster, with these fields:

  private int id;
  private long numObservations;
  private long totalObservations;
  private Vector center;
  private Vector radius;
  // the observation statistics
  private double s0;
  private Vector s1;
  private Vector s2;
Within addPointToCanopies, the only canopy fields that change are s0, s1, and s2: s0 is the number of samples in the canopy, s1 is the component-wise sum of all samples, and s2 is the component-wise sum of the squared components of all samples.
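The bookkeeping performed by canopy.observe(point) can be sketched without Mahout as three running sums (the class and field names below are illustrative, not Mahout's API):

```java
public class ObserveSketch {
    double s0;     // number of points observed so far
    double[] s1;   // component-wise sum of the points
    double[] s2;   // component-wise sum of the squared components

    ObserveSketch(int dim) {
        s1 = new double[dim];
        s2 = new double[dim];
    }

    // Mimics what canopy.observe(point) does to s0, s1, s2.
    void observe(double[] point) {
        s0 += 1;
        for (int i = 0; i < point.length; i++) {
            s1[i] += point[i];
            s2[i] += point[i] * point[i];
        }
    }

    public static void main(String[] args) {
        ObserveSketch c = new ObserveSketch(3);
        c.observe(new double[]{1, 1, 1});
        c.observe(new double[]{2, 2, 2});
        // After the two points: s0 = 2, s1 = [3,3,3], s2 = [5,5,5]
        System.out.println(c.s0 + " " + c.s1[0] + " " + c.s2[0]); // 2.0 3.0 5.0
    }
}
```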

For the second sample, [2.0,2.0,2.0], dist = 3 (Manhattan distance) < t1 = 30.2, so canopy.observe(point) is executed; this method simply updates s0, s1, and s2, after which s0 becomes 2, s1 is [3,3,3], and s2 is [5,5,5]. Because dist = 3 < t2 = 5.3, canopies.add() is not executed.

For the third sample, [17,17,17], dist = 16 * 3 = 48 > t1 = 30.2, so canopies.add() executes directly, creating a second canopy. The remaining samples are handled the same way.
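The whole T1/T2 loop over the ten sample rows can be reproduced without Hadoop or Mahout. The sketch below (illustrative names; Manhattan distance; each canopy keeps its creating point as a fixed center, as Canopy does during the map phase; the observe() bookkeeping is omitted) shows that exactly two canopies are created, around [1,1,1] and [17,17,17]:

```java
import java.util.ArrayList;
import java.util.List;

public class CanopyLoopSketch {
    static double manhattan(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }

    // Returns the centers of the canopies created by the T1/T2 loop.
    static List<double[]> buildCanopies(double[][] points, double t1, double t2) {
        List<double[]> centers = new ArrayList<>();
        for (double[] p : points) {
            boolean stronglyBound = false;
            for (double[] c : centers) {
                double dist = manhattan(c, p);
                // dist < t1 would trigger canopy.observe(p) here
                stronglyBound = stronglyBound || dist < t2;
            }
            // a point farther than t2 from every center starts a new canopy
            if (!stronglyBound) centers.add(p);
        }
        return centers;
    }

    public static void main(String[] args) {
        // Same data as makeInData2(): rows cycle through 1, 2, 17
        double[][] data = new double[10][3];
        for (int i = 0; i < 10; i++)
            for (int j = 0; j < 3; j++)
                data[i][j] = Math.pow(i % 3, 4) + 1;
        List<double[]> centers = buildCanopies(data, 30.2, 5.3);
        System.out.println(centers.size()); // 2
    }
}
```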

The last method executed in the CanopyMapper class is the cleanup function, which is also imitated here. cleanup does two things: filtering and value setting. Filtering means a canopy survives only if its sample count is > clusterFilter; value setting means computing radius and center, which is done by canopy.computeParameters():

    setNumObservations((long) getS0());
    setTotalObservations(getTotalObservations() + getNumObservations());
    setCenter(getS1().divide(getS0()));
    // compute the component stds
    if (getS0() > 1) {
      setRadius(getS2().times(getS0()).minus(getS1().times(getS1())).assign(new SquareRootFunction()).divide(getS0()));
    }
    setS0(0);
    setS1(center.like());
    setS2(center.like());
center is set to s1/s0. For radius: let A = s2*s0 - s1.*s1 (the first is scalar multiplication, the second element-wise); taking the square root of each component of A and dividing by s0 gives radius.
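These two formulas can be checked by hand for a canopy that has observed [1,1,1] and [2,2,2] (so s0 = 2, s1 = [3,3,3], s2 = [5,5,5]); the class and method names below are mine, not Mahout's:

```java
public class RadiusSketch {
    // center = s1 / s0, component-wise
    static double[] center(double s0, double[] s1) {
        double[] c = new double[s1.length];
        for (int i = 0; i < s1.length; i++) c[i] = s1[i] / s0;
        return c;
    }

    // radius = sqrt(s0 * s2 - s1 .* s1) / s0, component-wise
    static double[] radius(double s0, double[] s1, double[] s2) {
        double[] r = new double[s1.length];
        for (int i = 0; i < s1.length; i++)
            r[i] = Math.sqrt(s0 * s2[i] - s1[i] * s1[i]) / s0;
        return r;
    }

    public static void main(String[] args) {
        // Sums after observing [1,1,1] and [2,2,2]
        double s0 = 2;
        double[] s1 = {3, 3, 3};
        double[] s2 = {5, 5, 5};
        System.out.println(center(s0, s1)[0]);     // 1.5
        System.out.println(radius(s0, s1, s2)[0]); // 0.5  ( sqrt(2*5 - 9) / 2 )
    }
}
```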


Share, enjoy, grow


Please credit the source when reposting: http://blog.csdn.net/fansy1990
