* The mapper extends the org.apache.hadoop.mapreduce.Mapper class. When Hadoop runs,
* it passes each line of the input files to the mapper as input. The map() function
* tokenizes the line and, for each token (word), emits (word, 1) as output.
*/
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
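// For example, the input line "to be or not to be" is tokenized into six
// words, and the mapper emits (to,1), (be,1), (or,1), (not,1), (to,1), (be,1).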
/**
* The reduce function receives all the values that have the same key as input, and it
* outputs the key and the number of occurrences of that key.
*/
public static class IntSumReducer
extends Reducer<Text, IntWritable, Text, IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
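// Continuing the example above, the reducer receives (be, [1, 1]) and emits
// (be, 2), receives (or, [1]) and emits (or, 1), and so on for each distinct word.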
public int run(String[] args) throws Exception {
if (args.length < 2) {
System.out.println("chapter3.WordCountWithTools ");
ToolRunner.printGenericCommandUsage(System.out);
System.out.println("");
return -1;
}
String inputPath = args[0];
String outPath = args[1];
Job job = prepareJob(inputPath, outPath, getConf());
return job.waitForCompletion(true) ? 0 : 1;
}
public Job prepareJob(String inputPath, String outPath, Configuration conf)
throws IOException {
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCountWithTools.class);
job.setMapperClass(TokenizerMapper.class);
// Uncomment the following line to use IntSumReducer as a combiner as well;
// because the word-count sum is associative and commutative, the reducer can
// safely pre-aggregate the map output before it is sent over the network.
// job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(inputPath));
FileOutputFormat.setOutputPath(job, new Path(outPath));
return job;
}
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new Configuration(), new WordCountWithTools(), args);
System.exit(res);
}
}
This article explains what function currying is, the motivation behind this language feature, its typical use cases, and how it relates to partially applied functions.
1. What is a curried function
A way to write functions with multiple parameter lists. For instance,
def f(x: Int)(y: Int) is a curried function with two parameter lists.
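The original snippet breaks off at this point. To make the idea concrete, here is a minimal sketch of the same concept using Java lambdas (Java is used for consistency with the rest of this collection; CurryDemo and the variable names are illustrative, not from the original article):

import java.util.function.Function;

public class CurryDemo {
    public static void main(String[] args) {
        // Curried form of (x, y) -> x + y: a function of x that
        // returns a function of y.
        Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;

        // Supplying only the first argument yields a partially applied
        // function, which is exactly how currying relates to partial application.
        Function<Integer, Integer> addThree = add.apply(3);
        System.out.println(addThree.apply(4)); // prints 7
    }
}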
ApplicationContext can load bean definitions from multiple files, like this:
ApplicationContext appContext = new ClassPathXmlApplicationContext(
new String[]{"bean-config1.xml", "bean-config2.xml", "bean-config3.xml", "bean-config4.xml"});
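A self-contained sketch of this pattern, assuming the XML files are on the classpath (MultiConfigDemo and the bean id "myService" are hypothetical):

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MultiConfigDemo {
    public static void main(String[] args) {
        // Definitions loaded from later files can override those from earlier ones.
        ClassPathXmlApplicationContext appContext = new ClassPathXmlApplicationContext(
                new String[]{"bean-config1.xml", "bean-config2.xml"});
        Object service = appContext.getBean("myService"); // hypothetical bean id
        System.out.println(service);
        appContext.close();
    }
}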
This solution references
http://zhedahht.blog.163.com/blog/static/25411174201142733927831/
but implementing it in Java poses one problem.
Since Java cannot, like C, "pass the address of a parameter so that the caller sees the parameter's value when the function returns", the only option is to create a helper class, AuxClass, to carry the value out:
import ljn.help.*;
public class BalancedBTree {
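The class body is truncated in the original. Below is a minimal sketch of the approach it describes, with hypothetical Node and AuxClass shapes standing in for the real ljn.help classes, which are not shown in the source:

class Node {
    Node left, right;
}

class AuxClass {
    int depth; // written by the callee, read by the caller
}

public class BalancedBTree {
    // Returns true if the tree rooted at root is balanced; aux.depth carries
    // the subtree depth back to the caller, emulating C's out-parameter.
    static boolean isBalanced(Node root, AuxClass aux) {
        if (root == null) {
            aux.depth = 0;
            return true;
        }
        AuxClass left = new AuxClass();
        AuxClass right = new AuxClass();
        if (!isBalanced(root.left, left) || !isBalanced(root.right, right)) {
            return false;
        }
        aux.depth = Math.max(left.depth, right.depth) + 1;
        return Math.abs(left.depth - right.depth) <= 1;
    }
}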
BeanUtils.copyProperties vs. PropertyUtils.copyProperties
As the two standard utility classes for copying bean properties, they are widely used and also easily misused, which causes confusion. For example: yesterday I noticed a colleague using BeanUtils.copyProperties to copy a bean with an Integer property without considering that it converts null to 0, and the downstream business...
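A small sketch that makes the difference visible. CopyDemo and Bean are hypothetical names; note also that the exact behavior for a null Integer (silent conversion to 0, as the author observed, versus a ConversionException) depends on the commons-beanutils version and on which converters are registered:

import org.apache.commons.beanutils.BeanUtils;
import org.apache.commons.beanutils.PropertyUtils;

public class CopyDemo {
    public static class Bean {
        private Integer count;
        public Integer getCount() { return count; }
        public void setCount(Integer count) { this.count = count; }
    }

    public static void main(String[] args) throws Exception {
        Bean src = new Bean(); // count is left null
        Bean viaBeanUtils = new Bean();
        Bean viaPropertyUtils = new Bean();
        // Note the (destination, source) argument order in commons-beanutils.
        BeanUtils.copyProperties(viaBeanUtils, src);
        PropertyUtils.copyProperties(viaPropertyUtils, src);
        System.out.println(viaBeanUtils.getCount());     // may print 0 (value converted)
        System.out.println(viaPropertyUtils.getCount()); // prints null (no conversion)
    }
}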