SVD in Hadoop

Based on Hadoop: given a large matrix, I want to calculate its first k smallest eigenvectors. Does anyone know how I can do that?
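For reference, here is what "first k smallest eigenvectors" means at a small, single-machine scale (this is plain NumPy, not a Hadoop solution; the function name is mine). For a symmetric matrix, `np.linalg.eigh` returns eigenvalues in ascending order, so the first k columns of the eigenvector matrix are the eigenvectors for the k smallest eigenvalues:

```python
import numpy as np

# Small-scale illustration only (not Hadoop): np.linalg.eigh returns
# eigenvalues of a symmetric matrix in ascending order, so the first
# k columns of the eigenvector matrix correspond to the k smallest
# eigenvalues. At Hadoop scale a Lanczos-based solver would be used
# instead of this dense decomposition.
def smallest_k_eigenvectors(A, k):
    eigenvalues, eigenvectors = np.linalg.eigh(A)
    return eigenvalues[:k], eigenvectors[:, :k]

# Demo on a small symmetric (tridiagonal Laplacian-like) matrix.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
vals, vecs = smallest_k_eigenvectors(A, 2)
```

The point of the Hadoop-based methods discussed below is to get the same result without ever materializing a dense decomposition.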


http://code.google.com/p/decomposer/ shows that the author has implemented SVD decomposition on top of Hadoop. The decomposer project appears to implement two matrix-decomposition methods: one is the Hebbian algorithm, which is an SVD method but does not seem to run on Hadoop; the other is Lanczos, also an SVD method, for which both single-threaded and Hadoop versions have been implemented.
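To make the Lanczos option concrete, here is a minimal single-machine sketch of the Lanczos iteration (my own illustration, not the decomposer or Mahout code). Lanczos reduces a symmetric matrix A to a small tridiagonal matrix T whose eigenvalues (Ritz values) approximate the extreme eigenvalues of A, and each step needs only one matrix-vector product, which is the part a Hadoop job would distribute:

```python
import numpy as np

# Sketch of the Lanczos iteration (not the decomposer/Mahout code).
# It builds an orthonormal basis Q and a tridiagonal matrix T with
# Q^T A Q = T; the eigenvalues of T approximate those of A.
def lanczos(A, k, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))             # orthonormal Lanczos vectors
    alpha = np.zeros(k)              # diagonal of T
    beta = np.zeros(max(k - 1, 0))   # off-diagonal of T
    for j in range(k):
        Q[:, j] = q
        w = A @ q                    # the only large-matrix operation
        alpha[j] = q @ w
        # Full reorthogonalization against all previous vectors;
        # fine for a small demo, costly at scale.
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

# Demo: with k = n, T is orthogonally similar to A, so their
# eigenvalues coincide (up to floating-point error).
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M + M.T                          # random symmetric test matrix
Q, T = lanczos(A, 6)
```

In practice k is much smaller than n, and the `A @ q` products are what a Hadoop implementation maps over the distributed rows of the matrix.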

http://old.nabble.com/Making-Very-Large-Scale-Linear-Algebraic-Computations-Possible-Via-Randomization-td25614301.html shows that the author wanted to move his code into Mahout. Has this work been finished?


http://issues.apache.org/jira/browse/MAHOUT-180
