Word-frequency counting with Scala Maps

1. Word counting with a mutable Map

    // Watch the file encoding: if the file contains Chinese text, save it as
    // UTF-8 (ideally keep every file in UTF-8)
    import scala.io.Source

    val in = Source.fromFile("g:/a/1.txt")
    //  Get all lines
    val initer = in.getLines()
    import collection.mutable
    val m3 = mutable.Map[String, Int]()
    while (initer.hasNext) {
        val words = initer.next().split("\\s+")
        for (word <- words) {
            m3(word) = m3.getOrElse(word, 0) + 1
        }
    }
    in.close()
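
The same loop can be exercised without touching the file system; here is a minimal, self-contained sketch in which an in-memory string (made up for illustration) stands in for the contents of 1.txt:

```scala
import scala.collection.mutable

// Hypothetical sample text standing in for the file contents
val text = "spark hadoop spark hive"

val counts = mutable.Map[String, Int]()
for (line <- text.linesIterator; word <- line.split("\\s+"))
  counts(word) = counts.getOrElse(word, 0) + 1
// counts("spark") is 2, counts("hadoop") and counts("hive") are 1
```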

2. Word counting with an immutable Map

    var m4 = Map[String, Int]()
    val in1 = new java.util.Scanner(new java.io.File("g:/a/1.txt"))
    while (in1.hasNext()) {
        val words = in1.next().split("\\s+")
        for (word <- words) {
            m4 += (word -> (m4.getOrElse(word, 0) + 1))
        }
    }
    in1.close()
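
The `var`-plus-loop accumulation can also be written functionally with `foldLeft`, which threads the immutable map through each step without a `var`; a sketch over a hypothetical word list:

```scala
// Hypothetical input standing in for the words scanned from the file
val words = Seq("spark", "hive", "spark", "hbase")

val counts = words.foldLeft(Map[String, Int]()) { (m, w) =>
  m + (w -> (m.getOrElse(w, 0) + 1))
}
// counts == Map("spark" -> 2, "hive" -> 1, "hbase" -> 1)
```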

3. Word counting with a sorted Map

    import scala.collection.SortedMap

    var m5 = SortedMap[String, Int]()
    val source = Source.fromFile("g:/a/1.txt")
    val contents = source.mkString
    val words = contents.split("\\s+")
    for (word <- words) {
        // SortedMap is immutable: rebuild it with +=,
        // since m5(word) = ... would not compile
        m5 += (word -> (m5.getOrElse(word, 0) + 1))
    }
    source.close()
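
If an unsorted count already exists, it can also be turned into a sorted map in one step rather than built incrementally; a sketch assuming hypothetical counts:

```scala
import scala.collection.immutable.TreeMap

// Hypothetical unsorted counts
val m = Map("hive" -> 4, "flume" -> 2, "spark" -> 4)

val sorted = TreeMap.empty[String, Int] ++ m
// keys now iterate alphabetically: flume, hive, spark
```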

4. Printing in insertion order

    import scala.collection.mutable.LinkedHashMap

    // LinkedHashMap is mutable, so a val suffices
    val m6 = LinkedHashMap[String, Int]()
    val source1 = Source.fromFile("g:/a/1.txt")
    val arr = source1.getLines().toArray
    for (a <- arr) {
        for (w <- a.split("\\s+"))
            m6(w) = m6.getOrElse(w, 0) + 1
    }
    source1.close()
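
On Scala 2.13 or later, the whole pipeline collapses into a single expression with `groupMapReduce`; a sketch with a hypothetical in-memory string standing in for `source.mkString`:

```scala
// Hypothetical stand-in for the file contents
val contents = "spark hadoop spark"

val counts = contents.split("\\s+").toSeq.groupMapReduce(identity)(_ => 1)(_ + _)
// counts == Map("spark" -> 2, "hadoop" -> 1)
```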

Finally, print the results:

    println("m3 : "+m3)
    println("m4 : "+m4)
    println("m5 : "+m5)
    println("m6 : "+m6)

m3 : Map(spark -> 4, hadoop -> 3, sqoop -> 2, hadoop -> 1, 文字 -> 3, 会乱码 -> 3, zookeeper -> 2, hive -> 4, flume -> 2, 会不 -> 3, hbase -> 4)
m4 : Map(sqoop -> 2, 会不 -> 3, hadoop -> 3, spark -> 4, hive -> 4, 会乱码 -> 3, zookeeper -> 2, flume -> 2, 文字 -> 3, hbase -> 4, hadoop -> 1)
m5 : TreeMap(flume -> 2, hadoop -> 3, hbase -> 4, hive -> 4, spark -> 4, sqoop -> 2, zookeeper -> 2, 会不 -> 3, 会乱码 -> 3, 文字 -> 3, hadoop -> 1)
m6 : Map(hadoop -> 1, hadoop -> 3, spark -> 4, hbase -> 4, hive -> 4, zookeeper -> 2, flume -> 2, sqoop -> 2, 文字 -> 3, 会不 -> 3, 会乱码 -> 3)
