mahout SparseVectorsFromSequenceFiles explained (2)

Document processing

The DocumentProcessor class handles the SequenceFile input.

Create the output Path

Path tokenizedPath = new Path(outputDir, DocumentProcessor.TOKENIZED_DOCUMENT_OUTPUT_FOLDER);

Path is a Hadoop class. The first constructor argument is the parent and the second is the child; the two are joined into a single path and normalized (backslashes are replaced with /, and a trailing / is stripped).
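
As a quick illustration, here is a minimal sketch of the Path(parent, child) constructor. The output directory name below is made up; TOKENIZED_DOCUMENT_OUTPUT_FOLDER is the folder-name constant defined in DocumentProcessor, i.e. the tokenized-documents folder you see under the output directory:

    // assumes org.apache.hadoop.fs.Path is imported
    Path outputDir = new Path("/user/hadoop/reuters-out");   // made-up example directory
    Path tokenizedPath = new Path(outputDir, DocumentProcessor.TOKENIZED_DOCUMENT_OUTPUT_FOLDER);
    // tokenizedPath now points to /user/hadoop/reuters-out/tokenized-documents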

DocumentProcessor.tokenizeDocuments is then called to process the input documents:

DocumentProcessor.tokenizeDocuments(inputDir, analyzerClass, tokenizedPath, conf);

A note on analyzerClass here; the other arguments are self-explanatory. Looking at the code:

Class<? extends Analyzer> analyzerClass = DefaultAnalyzer.class;

and at DefaultAnalyzer.java in the same directory:

private final StandardAnalyzer stdAnalyzer = new StandardAnalyzer(Version.LUCENE_31);

This analyzer can be changed by passing the -a option.
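
For example, a hypothetical seq2sparse invocation that swaps in a different Lucene analyzer might look like the line below (the input and output paths are placeholders, and whatever class is named with -a must be on the classpath and instantiable by Mahout):

bin/mahout seq2sparse -i <input-seqfiles> -o <output-dir> -a org.apache.lucene.analysis.WhitespaceAnalyzer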

DocumentProcessor details

    Configuration conf = new Configuration(baseConf);
    // this conf parameter needs to be set enable serialisation of conf values
    conf.set("io.serializations", "org.apache.hadoop.io.serializer.JavaSerialization,"
                                  + "org.apache.hadoop.io.serializer.WritableSerialization");
    conf.set(ANALYZER_CLASS, analyzerClass.getName());

    Job job = new Job(conf);
    job.setJobName("DocumentProcessor::DocumentTokenizer: input-folder: " + input);
    job.setJarByClass(DocumentProcessor.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(StringTuple.class);
    FileInputFormat.setInputPaths(job, input);
    FileOutputFormat.setOutputPath(job, output);

    job.setMapperClass(SequenceFileTokenizerMapper.class);
    job.setInputFormatClass(SequenceFileInputFormat.class);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    HadoopUtil.delete(conf, output);

    job.waitForCompletion(true);

This is a standard Hadoop job: the mapper is SequenceFileTokenizerMapper, and no reducer is configured (setNumReduceTasks(0) makes it a map-only job, so the mapper output is written directly to the output SequenceFile).

Let's see what SequenceFileTokenizerMapper actually does:

  @Override
  protected void map(Text key, Text value, Context context) throws IOException, InterruptedException {
    TokenStream stream = analyzer.reusableTokenStream(key.toString(), new StringReader(value.toString()));
    CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
    StringTuple document = new StringTuple();
    stream.reset();
    while (stream.incrementToken()) {
      if (termAtt.length() > 0) {
        document.add(new String(termAtt.buffer(), 0, termAtt.length()));
      }
    }
    context.write(key, document);
  }

Very simple: the value is tokenized and each token is added to a StringTuple. The signature of reusableTokenStream used above is

reusableTokenStream(final String fieldName, final Reader reader)

so the key is only used as the Lucene field name and plays no part in the tokenization itself.
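
To make that concrete, here is a minimal standalone sketch using the same Lucene 3.x API (the sample text and field name are made up; the field name string is just a label and does not change the tokens produced):

    // assumes the Lucene analysis classes and java.io.StringReader are imported
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_31);
    TokenStream stream = analyzer.reusableTokenStream("anyFieldName", new StringReader("Mahout tokenizes Text values"));
    CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
    stream.reset();
    while (stream.incrementToken()) {
      // StandardAnalyzer lower-cases and drops English stop words,
      // so this prints: mahout, tokenizes, text, values
      System.out.println(new String(termAtt.buffer(), 0, termAtt.length()));
    }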
