Implementing Common Algorithms with MapReduce

1. Word Count

Problem Description:

Use MapReduce to count word frequency.

Example:

chunk1: "Google Bye GoodBye Hadoop code"
chunk2: "lintcode code Bye"

The MapReduce result is:
    Bye: 2
    GoodBye: 1
    Google: 1
    Hadoop: 1
    code: 2
    lintcode: 1

Implementation:

import java.util.*;

/**
 * Definition of OutputCollector:
 * class OutputCollector<K, V> {
 *     public void collect(K key, V value);
 *         // Adds a key/value pair to the output buffer
 * }
 */
public class WordCount {

    public static class Map {
        public void map(String key, String value, OutputCollector<String, Integer> output) {
            // Tokenize the chunk on whitespace and emit (word, 1) for every token.
            StringTokenizer tokenizer = new StringTokenizer(value);
            while (tokenizer.hasMoreTokens()) {
                String word = tokenizer.nextToken();
                output.collect(word, 1);
            }
        }
    }

    public static class Reduce {
        public void reduce(String key, Iterator<Integer> values,
                           OutputCollector<String, Integer> output) {
            // Sum all partial counts collected for this word and emit the total.
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next();
            }
            output.collect(key, sum);
        }
    }
}
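
To see how the two phases fit together without a cluster, here is a minimal single-process sketch that mimics map, shuffle, and reduce in memory on the two sample chunks. It is a hypothetical harness for illustration only, not part of the LintCode framework; the class name WordCountLocalDemo is made up.

import java.util.*;

// Hypothetical local harness: runs the map logic over each chunk, groups the
// emitted pairs by key to mimic the shuffle, then runs the reduce logic.
public class WordCountLocalDemo {
    public static void main(String[] args) {
        String[] chunks = {"Google Bye GoodBye Hadoop code", "lintcode code Bye"};

        // Map + shuffle: collect the (word, 1) emissions grouped by word.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String chunk : chunks) {
            StringTokenizer tokenizer = new StringTokenizer(chunk);
            while (tokenizer.hasMoreTokens()) {
                grouped.computeIfAbsent(tokenizer.nextToken(), w -> new ArrayList<>()).add(1);
            }
        }

        // Reduce: sum the grouped counts for each word.
        for (Map.Entry<String, List<Integer>> entry : grouped.entrySet()) {
            int sum = 0;
            for (int count : entry.getValue()) {
                sum += count;
            }
            System.out.println(entry.getKey() + ": " + sum);  // e.g. "Bye: 2"
        }
    }
}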

2. Inverted Index

Problem Description:

Create an inverted index of the given documents.

Example:

Given a list of documents with id and content (a Document class is provided):
[
  {
    "id": 1,
    "content": "This is the content of document 1 it is very short"
  },
  {
    "id": 2,
    "content": "This is the content of document 2 it is very long bilabial bilabial heheh hahaha ..."
  },
]
Return an inverted index (a hash map whose keys are words and whose values are lists of document ids):

{
   "This": [1, 2],
   "is": [1, 2],
   ...
}

Implementation:

import java.util.*;

/**
 * Definition of Document:
 * class Document {
 *     public int id;
 *     public String content;
 * }
 */
public class Solution {
    /**
     * @param docs a list of documents
     * @return an inverted index
     */
    public Map<String, List<Integer>> invertedIndex(List<Document> docs) {
        Map<String, List<Integer>> results = new HashMap<String, List<Integer>>();
        for (Document doc : docs) {
            int id = doc.id;
            StringBuffer temp = new StringBuffer();
            String content = doc.content;
            int n = content.length();
            // Scan character by character; a space terminates the current word.
            for (int i = 0; i < n; ++i) {
                if (content.charAt(i) == ' ') {
                    insert(results, temp.toString(), id);
                    temp = new StringBuffer();
                } else {
                    temp.append(content.charAt(i));
                }
            }
            // Flush the last word, since the content does not end with a space.
            insert(results, temp.toString(), id);
        }
        return results;
    }

    private void insert(Map<String, List<Integer>> rt, String tmp, int id) {
        if (tmp == null || tmp.isEmpty())
            return;
        if (!rt.containsKey(tmp))
            rt.put(tmp, new ArrayList<Integer>());

        // Documents are processed in order, so comparing with the last stored id
        // is enough to avoid duplicate ids for the same word.
        List<Integer> ids = rt.get(tmp);
        int n = ids.size();
        if (n == 0 || ids.get(n - 1) != id)
            ids.add(id);
    }
}
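
As a quick sanity check, the solution can be called directly on documents resembling the sample. The snippet below is a hypothetical usage sketch; it assumes the Document class from the definition above has a no-argument constructor and the public id/content fields shown.

import java.util.*;

// Hypothetical usage sketch for the Solution class above.
public class InvertedIndexDemo {
    public static void main(String[] args) {
        Document d1 = new Document();
        d1.id = 1;
        d1.content = "This is the content of document 1 it is very short";

        Document d2 = new Document();
        d2.id = 2;
        d2.content = "This is the content of document 2 it is very long";

        Map<String, List<Integer>> index = new Solution().invertedIndex(Arrays.asList(d1, d2));
        System.out.println(index.get("This"));   // [1, 2]
        System.out.println(index.get("short"));  // [1]
    }
}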

3. Inverted Index (MapReduce)

Problem Description:

Use MapReduce to build an inverted index.

Example:

The text to be indexed:
T0:"it is what it is"
T1:"what is it"
T2:"it is a banana"

We get the following inverted file index:
 "a":      {2}
 "banana": {2}
 "is":     {0, 1, 2}
 "it":     {0, 1, 2}
 "what":   {0, 1}

Implementation:

import java.util.*;

/**
 * Definition of OutputCollector:
 * class OutputCollector<K, V> {
 *     public void collect(K key, V value);
 *         // Adds a key/value pair to the output buffer
 * }
 * Definition of Document:
 * class Document {
 *     public int id;
 *     public String content;
 * }
 */
public class InvertedIndex {
    public static class Map {
        public void map(String key, Document value,
                        OutputCollector<String, Integer> output) {
            // The key is unused; the document id comes from the Document itself.
            // Emit (word, documentId) for every token in the content.
            int id = value.id;
            StringTokenizer tokenizer = new StringTokenizer(value.content);
            while (tokenizer.hasMoreTokens()) {
                String word = tokenizer.nextToken();
                output.collect(word, id);
            }
        }
    }

    public static class Reduce {
        public void reduce(String key, Iterator<Integer> values,
                           OutputCollector<String, List<Integer>> output) {
            // Collect the document ids for this word, skipping consecutive
            // duplicates (repeated ids from the same document arrive together).
            List<Integer> results = new ArrayList<Integer>();
            int previous = -1;
            while (values.hasNext()) {
                int now = values.next();
                if (previous != now) {
                    results.add(now);
                }
                previous = now;
            }
            output.collect(key, results);
        }
    }
}
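
Note that the reduce step drops an id only when it equals the previous one; this works on the assumption that all emissions of a word from one document come from a single map call and therefore arrive next to each other. Below is a hypothetical walk-through of that loop for the key "it" (grouped values [0, 0, 1, 2], since T0 contains "it" twice).

import java.util.*;

// Hypothetical walk-through of the reduce-side dedup for one key ("it").
public class InvertedIndexReduceDemo {
    public static void main(String[] args) {
        // Grouped ids for "it": T0 emitted it twice, T1 and T2 once each.
        Iterator<Integer> values = Arrays.asList(0, 0, 1, 2).iterator();

        List<Integer> results = new ArrayList<>();
        int previous = -1;
        while (values.hasNext()) {
            int now = values.next();
            if (previous != now) {   // skip consecutive duplicates
                results.add(now);
            }
            previous = now;
        }
        System.out.println(results); // [0, 1, 2]
    }
}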

4. GFS Client

Problem Description:

Implement a simple client for GFS (Google File System) that supports the following operations:
1. read(filename): read a file from GFS by its filename.
2. write(filename, content): write the given content to GFS under the given filename.
Two methods are already implemented in the base class:
1. readChunk(filename, chunkIndex): read one chunk from GFS.
2. writeChunk(filename, chunkIndex, chunkData): write one chunk to GFS.
To simplify the problem, assume each chunk holds chunkSize bytes (64 MB in a real file system). The client's job is to split a file into chunks (when necessary) and store them on the remote GFS server. chunkSize is given in the constructor; you need to implement the read and write methods.

Example:

GFSClient(5)
read("a.txt")
>> null
write("a.txt", "World")
>> You don't need to return anything, but you need to call writeChunk("a.txt", 0, "World") to write a 5-byte chunk to GFS.
read("a.txt")
>> "World"
write("b.txt", "111112222233")
>> You need to save "11111" at chunk 0, "22222" at chunk 1, "33" at chunk 2.
write("b.txt", "aaaaabbbbb")
read("b.txt")
>> "aaaaabbbbb"

Implementation:

import java.util.*;

/* Definition of BaseGFSClient
 * class BaseGFSClient {
 *     private Map<String, String> chunk_list;
 *     public BaseGFSClient() {}
 *     public String readChunk(String filename, int chunkIndex) {
 *         // Read a chunk from GFS
 *     }
 *     public void writeChunk(String filename, int chunkIndex,
 *                            String content) {
 *         // Write a chunk to GFS
 *     }
 * }
 */
public class GFSClient extends BaseGFSClient {

    public int chunkSize;
    public Map<String, Integer> chunkNum;   // filename -> number of chunks stored

    public GFSClient(int chunkSize) {
        this.chunkSize = chunkSize;
        this.chunkNum = new HashMap<String, Integer>();
    }

    // @param filename a file name
    // @return content of the file read from GFS
    public String read(String filename) {
        // Unknown file: nothing has been written under this name.
        if (!chunkNum.containsKey(filename))
            return null;

        // Read every chunk in order and concatenate them.
        StringBuffer content = new StringBuffer();
        for (int i = 0; i < chunkNum.get(filename); ++i) {
            String subContent = readChunk(filename, i);
            if (subContent != null)
                content.append(subContent);
        }
        return content.toString();
    }

    // @param filename a file name
    // @param content a string
    // @return void
    public void write(String filename, String content) {
        int length = content.length();

        // Number of chunks needed: ceil(length / chunkSize).
        int num = (length - 1) / chunkSize + 1;
        chunkNum.put(filename, num);

        // Write each chunkSize-byte slice; the last chunk may be shorter.
        for (int i = 0; i < num; ++i) {
            int start = i * chunkSize;
            int end = i == num - 1 ? length : (i + 1) * chunkSize;
            String subContent = content.substring(start, end);
            writeChunk(filename, i, subContent);
        }
    }
}
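
Since BaseGFSClient is only described in the comment above, the demo below plugs in a hypothetical in-memory stand-in so the client can be exercised locally; the chunk key format filename + "#" + index is an arbitrary choice for this stub, not part of GFS.

import java.util.*;

// Hypothetical in-memory stand-in for BaseGFSClient, for local testing only.
class BaseGFSClient {
    private Map<String, String> chunk_list = new HashMap<>();
    public String readChunk(String filename, int chunkIndex) {
        return chunk_list.get(filename + "#" + chunkIndex);
    }
    public void writeChunk(String filename, int chunkIndex, String content) {
        chunk_list.put(filename + "#" + chunkIndex, content);
    }
}

public class GFSClientDemo {
    public static void main(String[] args) {
        GFSClient client = new GFSClient(5);
        System.out.println(client.read("a.txt"));   // null (file not written yet)

        client.write("a.txt", "World");              // stored as one 5-byte chunk
        System.out.println(client.read("a.txt"));   // World

        client.write("b.txt", "111112222233");       // chunks: "11111", "22222", "33"
        System.out.println(client.read("b.txt"));   // 111112222233
    }
}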

5. Top K Frequent Words (MapReduce)

Problem Description:

Use the MapReduce framework to find the k most frequently used words.
The mapper's key is the document id and its value is the document's content; words in a document are separated by spaces.
The reducer should output at most k key-value pairs: the k most frequent words together with their frequencies in that reducer. The judge merges the results of the different reducers to obtain the globally most frequent k words, so you do not need to handle that step. k is given in the constructor of the TopK class.

Example:

Given document A =

lintcode is the best online judge
I love lintcode
and document B =

lintcode is an online judge for coding interview
you can test your code online at lintcode
The top 2 frequent words and their frequencies are:

lintcode, 4
online, 3

Implementation:

import java.util.*;

class Pair {
    String key;
    int value;

    Pair(String key, int value) {
        this.key = key;
        this.value = value;
    }
}

public class TopKFrequentWords {

    public static class Map {
        public void map(String key, Document value,
                        OutputCollector<String, Integer> output) {
            // Split the document content on spaces and emit (word, 1)
            // for every non-empty token.
            String content = value.content;
            String[] words = content.split(" ");
            for (String word : words) {
                if (word.length() > 0) {
                    output.collect(word, 1);
                }
            }
        }
    }

    public static class Reduce {
        private PriorityQueue<Pair> Q = null;
        private int k;

        // Min-heap order: ascending by count; on ties the lexicographically
        // larger word compares smaller, so it is evicted first.
        private Comparator<Pair> pairComparator = new Comparator<Pair>() {
            public int compare(Pair left, Pair right) {
                if (left.value != right.value) {
                    return left.value - right.value;
                }
                return right.key.compareTo(left.key);
            }
        };

        public void setup(int k) {
            this.k = k;
            Q = new PriorityQueue<Pair>(k, pairComparator);
        }

        public void reduce(String key, Iterator<Integer> values) {
            // Sum the partial counts for this word.
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next();
            }

            // Keep at most k pairs: replace the weakest pair at the heap top
            // whenever the new pair is better.
            Pair pair = new Pair(key, sum);
            if (Q.size() < k) {
                Q.add(pair);
            } else {
                Pair peek = Q.peek();
                if (pairComparator.compare(pair, peek) > 0) {
                    Q.poll();
                    Q.add(pair);
                }
            }
        }

        public void cleanup(OutputCollector<String, Integer> output) {
            // Pop the heap (ascending order), then output the pairs in reverse
            // so the most frequent word comes first.
            List<Pair> pairs = new ArrayList<Pair>();
            while (!Q.isEmpty()) {
                pairs.add(Q.poll());
            }

            int n = pairs.size();
            for (int i = n - 1; i >= 0; --i) {
                Pair pair = pairs.get(i);
                output.collect(pair.key, pair.value);
            }
        }
    }
}
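
The reducer's core idea is a size-k min-heap: the weakest of the current top-k candidates sits at the heap top and is evicted when a better pair arrives. The standalone sketch below illustrates the same technique outside the MapReduce classes, on the aggregated counts from documents A and B; the class name TopKHeapDemo and the offer-then-poll variant are illustrative assumptions.

import java.util.*;

// Hypothetical, standalone illustration of the top-k technique:
// a size-k min-heap over (word, count) pairs, weakest candidate on top.
public class TopKHeapDemo {
    public static void main(String[] args) {
        int k = 2;
        // Aggregated counts for documents A and B from the example.
        Map<String, Integer> counts = new HashMap<>();
        counts.put("lintcode", 4); counts.put("online", 3); counts.put("is", 2);
        counts.put("judge", 2); counts.put("the", 1); counts.put("best", 1);

        // Same ordering idea as pairComparator: ascending count; on ties the
        // lexicographically larger word compares smaller and is evicted first.
        PriorityQueue<Map.Entry<String, Integer>> heap = new PriorityQueue<>(k,
            (left, right) -> {
                int byCount = Integer.compare(left.getValue(), right.getValue());
                return byCount != 0 ? byCount : right.getKey().compareTo(left.getKey());
            });

        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            heap.offer(entry);
            if (heap.size() > k) {
                heap.poll();               // drop the current weakest candidate
            }
        }

        // Pop the heap and reverse to print the most frequent word first.
        List<Map.Entry<String, Integer>> top = new ArrayList<>();
        while (!heap.isEmpty()) top.add(heap.poll());
        Collections.reverse(top);
        for (Map.Entry<String, Integer> entry : top) {
            System.out.println(entry.getKey() + ", " + entry.getValue());  // lintcode, 4 / online, 3
        }
    }
}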
