Flume + Kafka + Spark Streaming + MySQL + SSM + Amap (高德地图) Heatmap Project

Step 1: Write a Python script that generates mock data


#coding=UTF-8

import random
import time

phone=[
    "13869555210",
    "18542360152",
    "15422556663",
    "18852487210",
    "13993584664",
    "18754366522",
    "15222436542",
    "13369568452",
    "13893556666",
    "15366698558"
]

location=[
    "116.191031, 39.988585",
    "116.389275, 39.925818",
    "116.287444, 39.810742",
    "116.481707, 39.940089",
    "116.410588, 39.880172",
    "116.394816, 39.91181",
    "116.416002, 39.952917"
]

def sample_phone():
    return random.sample(phone,1)[0]
def sample_location():
    return random.sample(location, 1)[0]

def generator_log(count=10):
    # one shared timestamp per batch of generated records
    time_str=time.strftime("%Y-%m-%d %H:%M:%S",time.localtime())
    # append, so repeated runs keep feeding the same file that Flume tails
    with open("/opt/log.txt","a+") as f:
        while count>=1:
            query_log="{phone}\t{location}\t{date}".format(phone=sample_phone(),location=sample_location(),date=time_str)
            f.write(query_log+"\n")
            count=count-1

if __name__=='__main__':
    generator_log(100)
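
Each run appends 100 tab-separated lines (phone, "longitude, latitude", timestamp) to /opt/log.txt, for example:

13869555210	116.191031, 39.988585	2019-03-08 05:26:12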

Step 2: Configure Flume

In the conf directory of the Flume installation, add a configuration file named storm_pro.conf:

agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type=exec
agent.sources.s1.command=tail -F /opt/log.txt
agent.sources.s1.channels=c1
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100

# Kafka sink
agent.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
# Kafka broker host:port list
agent.sinks.k1.brokerList=hadoop3:9092,hadoop4:9092,hadoop5:9092
# Kafka topic to publish to
agent.sinks.k1.topic=ss_kafka
# serializer for event bodies
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.channel=c1
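
Before moving on, it can help to confirm from outside Flume that events actually reach the topic. A minimal consumer sketch, assuming the kafka-python package is installed (pip install kafka-python; this script is not part of the original project):

# check_topic.py -- sanity-check that Flume delivers events to Kafka
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ss_kafka",                          # topic from agent.sinks.k1.topic
    bootstrap_servers=["hadoop3:9092"],  # any broker from the brokerList above
    auto_offset_reset="earliest")        # read the topic from the beginning
for message in consumer:
    print(message.value)                 # raw bytes of one generated log line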

Step 3: Start the cluster

0. Copy the Python script (saved here as log.py) to hadoop3 and run it once to check that it works. If it produces output, continue with the steps below; otherwise, debug the script first.

python log.py

1. Start ZooKeeper

zkServer.sh start

2. Start Kafka

kafka-server-start.sh /opt/modules/app/kafka/config/server.properties

Then start HDFS

start-dfs.sh

3. Start Flume

From the Flume directory. Note that -n agent must match the agent name used as the prefix in the config, and -f must point at the file created in Step 2:

./bin/flume-ng agent -n agent -c conf -f conf/storm_pro.conf -Dflume.root.logger=INFO,console

4. Create the topic referenced in the Flume config

kafka-topics.sh --create --zookeeper hadoop3:2181 --replication-factor 1 --partitions 1 --topic ss_kafka

5. Start a console consumer to watch the topic

kafka-console-consumer.sh --zookeeper hadoop3:2181 --from-beginning --topic ss_kafka

6. Run the Python script again; the generated lines should show up in the consumer

python log.py

Step 4: Write the Spark program

1. Stop the console consumer

Press Ctrl+C to stop it (Ctrl+Z only suspends the process).

2. Take HDFS out of safe mode (the Spark checkpoint directory below lives on HDFS)

hdfs dfsadmin -safemode leave

3. In IntelliJ, write the Scala code that consumes the Kafka topic fed by Flume on hadoop3:

import kafka.serializer.StringDecoder
import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._
import org.apache.spark.SparkConf
/**
  * Created by Administrator on 2019/3/7.
  */
object Kafkademo {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "root")

    val sparkConf = new SparkConf().setAppName("Kafkademo").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(5))
    ssc.checkpoint("hdfs://hadoop3:8020/spark_check_point")
    //set of Kafka topics; several topics could be subscribed by passing them comma-separated via args
    val topicsSet = Set("ss_kafka")
    //Kafka parameters: the broker list (replace the IPs with your own hadoop3/4/5 addresses)
    val kafkaParams = Map[String, String]("metadata.broker.list" -> "192.168.159.133:9092,192.168.159.130:9092,192.168.159.134:9092")
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)

    //each message value is one raw log line; print each batch to the console
    val lines = messages.map(_._2)
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
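
Run the job together with the generator script. Each 5-second batch should print the raw lines, along these lines (timestamp and values illustrative):

-------------------------------------------
Time: 1552026300000 ms
-------------------------------------------
13869555210	116.191031, 39.988585	2019-03-08 05:26:12
...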

Step 5: Connect to the database

1. Create a database named test

2. Create the table

The Spark job inserts into a table named location, so create it under that name. The time column stores epoch milliseconds; the longtitude misspelling is kept because every query in this project uses it:

CREATE TABLE `location` (
`time`  bigint(20) NULL ,
`latitude`  double NULL ,
`longtitude`  double NULL
);

3. In IntelliJ, extend the Spark job to write the parsed records into MySQL:

package com.neusoft
import java.sql
import java.sql.DriverManager
import java.util.Date

import kafka.serializer.StringDecoder
import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._
import org.apache.spark.SparkConf
/**
  * Created by Administrator on 2019/3/7.
  */
object Kafkademo {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "root")

    val sparkConf = new SparkConf().setAppName("DirectKafkaWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(5))
    ssc.checkpoint("hdfs://hadoop3:8020/spark_check_point")
    //set of Kafka topics; several topics could be subscribed by passing them comma-separated via args
    val topicsSet = Set("ss_kafka")
    //Kafka parameters: the broker list (replace the IPs with your own hadoop3/4/5 addresses)
    val kafkaParams = Map[String, String]("metadata.broker.list" -> "192.168.159.133:9092,192.168.159.130:9092,192.168.159.134:9092")
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)

    val lines = messages.map(_._2)

    lines.foreachRDD(rdd => {
      //writes all records of one partition through a single JDBC connection
      def func(records: Iterator[String]) {
        var conn: sql.Connection = null
        var stmt: sql.PreparedStatement = null
        try {
          val url = "jdbc:mysql://localhost:3306/test"
          val user = "root"
          val password = "root" //change user/password to match your own MySQL setup
          conn = DriverManager.getConnection(url, user, password)

          //each record looks like: 13893556666  116.191031, 39.988585  2019-03-08 05:26:12
          //prepare the insert once and reuse it for every record in the partition
          val insertSql = "insert into location(time,latitude,longtitude) values (?,?,?)"
          stmt = conn.prepareStatement(insertSql)
          records.foreach(p => {
            val arr = p.split("\\t")
            val phoneno = arr(0) //not stored, but documents the record layout
            val jingwei = arr(1) //"lon, lat" -- longitude comes first, latitude second
            val arrjingwei = jingwei.split(",")
            stmt.setLong(1, new Date().getTime)
            stmt.setDouble(2, java.lang.Double.parseDouble(arrjingwei(1).trim)) //latitude
            stmt.setDouble(3, java.lang.Double.parseDouble(arrjingwei(0).trim)) //longitude
            stmt.executeUpdate()
          })

        } catch {
          case e: Exception => e.printStackTrace()

        } finally {
          if (stmt != null) {
            stmt.close()
          }
          if (conn != null) {
            conn.close()
          }
        }
      }

      //funnel everything through one partition so each batch opens only one MySQL connection
      val repartitionedRDD = rdd.repartition(1)
      repartitionedRDD.foreachPartition(func)
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
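
For reference, the record layout parsed above, traced as a quick Python sketch of the same split logic (illustrative values, not part of the project):

line = "13893556666\t116.191031, 39.988585\t2019-03-08 05:26:12"
phone, lonlat, ts = line.split("\t")
lon, lat = [float(x) for x in lonlat.split(",")]
# lon = 116.191031 (longitude comes first), lat = 39.988585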

4. Run the Python script to generate data

python log.py

5. Check that data is arriving in the database

If rows appear as in the screenshot below, the pipeline works end to end.

[Screenshot: rows in the location table]
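
To check from a terminal instead of a GUI client, a small sketch (assuming the pymysql package, which is not part of the project; adjust credentials to your setup):

# count_rows.py -- verify that the Spark job is inserting rows
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="root", db="test")
try:
    with conn.cursor() as cur:
        cur.execute("select count(*) from location")
        print("rows in location:", cur.fetchone()[0])
finally:
    conn.close()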

Step 6: Build the heatmap

1. Create a new web project using the SSM framework (Spring + Spring MVC + MyBatis)

Add the following to pom.xml:




<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>neusoft</groupId>
  <artifactId>neusoft</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <name>neusoft Maven Webapp</name>
  <url>http://www.example.com</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <!-- Spring version -->
    <spring.version>4.0.2.RELEASE</spring.version>
    <!-- MyBatis version -->
    <mybatis.version>3.2.6</mybatis.version>
    <!-- logging versions -->
    <slf4j.version>1.7.7</slf4j.version>
    <log4j.version>1.2.17</log4j.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>7.0</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <!-- Spring -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-oxm</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-tx</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-jdbc</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-aop</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context-support</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-test</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <!-- MyBatis -->
    <dependency>
      <groupId>org.mybatis</groupId>
      <artifactId>mybatis</artifactId>
      <version>${mybatis.version}</version>
    </dependency>
    <!-- MyBatis/Spring integration -->
    <dependency>
      <groupId>org.mybatis</groupId>
      <artifactId>mybatis-spring</artifactId>
      <version>1.2.2</version>
    </dependency>
    <!-- MySQL driver -->
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.30</version>
    </dependency>
    <!-- connection pool -->
    <dependency>
      <groupId>commons-dbcp</groupId>
      <artifactId>commons-dbcp</artifactId>
      <version>1.2.2</version>
    </dependency>
    <!-- JSTL -->
    <dependency>
      <groupId>jstl</groupId>
      <artifactId>jstl</artifactId>
      <version>1.2</version>
    </dependency>
    <!-- logging -->
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>${log4j.version}</version>
    </dependency>
    <dependency>
      <groupId>com.alibaba</groupId>
      <artifactId>fastjson</artifactId>
      <version>1.1.41</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
    <!-- Jackson -->
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <version>1.9.13</version>
    </dependency>
    <!-- file upload -->
    <dependency>
      <groupId>commons-fileupload</groupId>
      <artifactId>commons-fileupload</artifactId>
      <version>1.3.1</version>
    </dependency>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.4</version>
    </dependency>
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.9</version>
    </dependency>
    <dependency>
      <groupId>com.alibaba</groupId>
      <artifactId>druid</artifactId>
      <version>1.0.9</version>
    </dependency>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish.web</groupId>
      <artifactId>javax.servlet.jsp.jstl</artifactId>
      <version>1.2.2</version>
    </dependency>

    <dependency>
      <groupId>org.apache.storm</groupId>
      <artifactId>storm-core</artifactId>
      <version>0.9.5</version>
    </dependency>

    <dependency>
      <groupId>org.apache.storm</groupId>
      <artifactId>storm-kafka</artifactId>
      <version>0.9.5</version>
    </dependency>

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>0.8.2.0</version>
      <exclusions>
        <exclusion>
          <groupId>org.apache.zookeeper</groupId>
          <artifactId>zookeeper</artifactId>
        </exclusion>
        <exclusion>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.31</version>
    </dependency>
  </dependencies>

  <build>
    <finalName>neusoft</finalName>
    <resources>
      <!-- ship mapper XML files that live next to the Java sources -->
      <resource>
        <directory>src/main/java</directory>
        <includes>
          <include>**/*.xml</include>
        </includes>
      </resource>
      <resource>
        <directory>src/main/resources</directory>
      </resource>
    </resources>
    <pluginManagement>
      <plugins>
        <plugin>
          <artifactId>maven-clean-plugin</artifactId>
          <version>3.0.0</version>
        </plugin>
        <plugin>
          <artifactId>maven-resources-plugin</artifactId>
          <version>3.0.2</version>
        </plugin>
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.7.0</version>
        </plugin>
        <plugin>
          <artifactId>maven-surefire-plugin</artifactId>
          <version>2.20.1</version>
        </plugin>
        <plugin>
          <artifactId>maven-war-plugin</artifactId>
          <version>3.2.0</version>
        </plugin>
        <plugin>
          <artifactId>maven-install-plugin</artifactId>
          <version>2.5.2</version>
        </plugin>
        <plugin>
          <artifactId>maven-deploy-plugin</artifactId>
          <version>2.8.2</version>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</project>


2. Configure web.xml



    
    
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">

    <!-- Spring MVC front controller -->
    <servlet>
        <servlet-name>springDispatcherServlet</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:spring/springmvc.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>springDispatcherServlet</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>

</web-app>


3. Create the resources directory

Under main, create a resource directory named resources.
In that directory, create a spring folder.
In the spring folder, create the springmvc.xml file referenced by web.xml's contextConfigLocation.
Its contents, in a typical form for this layout (base-package, prefix, and suffix below are the usual values for this project structure; adjust them to yours):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context.xsd
                           http://www.springframework.org/schema/mvc
                           http://www.springframework.org/schema/mvc/spring-mvc.xsd">

    <!-- scan the project packages for annotated components -->
    <context:component-scan base-package="com.neusoft"/>

    <!-- resolve logical view names to JSPs under the webapp root -->
    <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/"/>
        <property name="suffix" value=".jsp"/>
    </bean>

    <mvc:annotation-driven/>
    <mvc:default-servlet-handler/>
</beans>

4. Write the backend code

1) Create the com.neusoft.util package

Then create the MysqlUtil class:

package com.neusoft.util;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MysqlUtil {
    // JDBC URL with the credentials baked in; adjust user/password for your own setup
    private static final String DRIVER_NAME="jdbc:mysql://localhost:3306/test?user=root&password=root";
    private static Connection connection;
    private static PreparedStatement pstm;
    private static ResultSet resultSet;

    public static Connection getConnection(){
        try {
            Class.forName("com.mysql.jdbc.Driver");
            connection=DriverManager.getConnection(DRIVER_NAME);
        }catch (Exception e){
            e.printStackTrace();
        }
        return connection;
    }
    public static void release(){
        try {
            if(resultSet!=null) {
                resultSet.close();
            }
            if (pstm != null) {
                pstm.close();
            }
            if(connection!=null){
                connection.close();
            }
        }catch (Exception e){
            e.printStackTrace();
        }finally {
            if(connection!=null){
                connection=null;    //help GC
            }
        }
    }

}

2) Create the com.neusoft.domain package

Then create the Location class:

package com.neusoft.domain;

/**
 * Created by Administrator on 2019/3/11.
 */
public class Location {
    private Integer count;
    private double latitude;
    private double longitude;

    public Integer getCount() {
        return count;
    }
    public void setCount(Integer count) {
        this.count = count;
    }
    public double getLatitude() {
        return latitude;
    }
    public void setLatitude(double latitude) {
        this.latitude = latitude;
    }
    public double getLongitude() {
        return longitude;
    }
    public void setLongitude(double longitude) {
        this.longitude = longitude;
    }
}

3) Create the com.neusoft.dao package

Then create the LocationDao class; its map() method returns, for each coordinate, the number of points inserted during the last 20 seconds:

package com.neusoft.dao;

import com.neusoft.domain.Location;
import com.neusoft.util.MysqlUtil;
import org.springframework.stereotype.Component;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

/**
 * Created by Administrator on 2019/3/11.
 */
@Component
public class LocationDao {

    public List<Location> map() throws Exception{
        List<Location> list = new ArrayList<Location>();
        Connection connection=null;
        PreparedStatement psmt=null;
        try {
            connection = MysqlUtil.getConnection();
            //count the points per coordinate inserted during the last 20 seconds
            //(time holds epoch milliseconds, hence the *1000)
            psmt = connection.prepareStatement("select latitude,longtitude,count(*) num from location where "
                    + "time>unix_timestamp(date_sub(current_timestamp(),interval 20 second))*1000 "
                    + "group by longtitude,latitude");
            ResultSet resultSet = psmt.executeQuery();
            while (resultSet.next()) {
                Location location = new Location();
                location.setLatitude(resultSet.getDouble(1));
                location.setLongitude(resultSet.getDouble(2));
                location.setCount(resultSet.getInt(3));
                list.add(location);
            }
        }catch (Exception e){
            e.printStackTrace();
        }finally {
            if (psmt != null) {
                psmt.close();
            }
            MysqlUtil.release();
        }
        return list;
    }

}

4) Create the com.neusoft.controller package

Then create the HomeController class; the root mapping serves the index view, and /get_map returns the aggregated points as JSON:

package com.neusoft.controller;

import com.alibaba.fastjson.JSON;
import com.neusoft.dao.LocationDao;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;

import javax.servlet.http.HttpServletResponse;

/**
 * Created by Administrator on 2019/3/11.
 */
@Controller
public class HomeController {

    @Autowired
    private LocationDao locationDao; //injected by Spring; LocationDao is annotated @Component

    @RequestMapping("/")
    public ModelAndView home()
    {
        ModelAndView modelAndView = new ModelAndView();
        modelAndView.setViewName("index");
        return modelAndView;
    }

    @RequestMapping("/get_map")
    public void getMap(HttpServletResponse response) throws Exception{
        String json = JSON.toJSONString(locationDao.map());
        response.setContentType("application/json;charset=UTF-8");
        response.getWriter().print(json);
    }
}
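
With fastjson serializing the list, /get_map returns JSON of this shape (values illustrative):

[{"count":3,"latitude":39.988585,"longitude":116.191031},{"count":1,"latitude":39.925818,"longitude":116.389275}]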

5) Modify the index page

Change it to refresh via Ajax on a fixed interval. A sketch of such a page, using the Amap JavaScript API heat-map plugin against the /get_map endpoint (YOUR_AMAP_KEY, the element id, the center, and the 5-second interval are placeholders to adjust):

<%--
  Created by IntelliJ IDEA.
  User: ttc
  Date: 2018/7/6
  Time: 14:06
  To change this template use File | Settings | File Templates.
--%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>



    
<html>
<head>
    <meta charset="utf-8">
    <title>高德地图</title>
    <style>html, body, #container { height: 100%; width: 100%; margin: 0; }</style>
    <%-- replace YOUR_AMAP_KEY with your own Amap JavaScript API key --%>
    <script src="https://webapi.amap.com/maps?v=1.4.15&key=YOUR_AMAP_KEY&plugin=AMap.Heatmap"></script>
    <script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
</head>
<body>
<div id="container"></div>
<script>
    var map = new AMap.Map("container", {
        zoom: 11,
        center: [116.397428, 39.90923]  // adjust to where your mock data is generated
    });
    var heatmap = new AMap.Heatmap(map, {radius: 25, opacity: [0, 0.8]});

    // poll the backend and redraw the heat layer on a fixed interval
    function refresh() {
        $.getJSON("get_map", function (points) {
            var data = points.map(function (p) {
                return {lng: p.longitude, lat: p.latitude, count: p.count};
            });
            heatmap.setDataSet({data: data, max: 100});
        });
    }
    refresh();
    setInterval(refresh, 5000);
</script>
</body>
</html>

6) Modify the Python data-generator script

import random
import os
import sys
import time
import numpy as np

def generator():
    # pick a random base point inside a small bounding box
    point = [random.uniform(123.449169, 123.458654), random.uniform(41.740567, 41.743705)]
    arr = []
    time_str = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    # a random-sized burst of points jittered around the base point
    # (the burst may occasionally be empty)
    for i in range(1, random.randint(0, 500)):
        bias = round(np.random.randn() * pow(10, -4), 4)
        x = point[0] + bias
        bias1 = round(np.random.randn() * pow(10, -4), 4)
        y = point[1] + bias1
        arr.append(['13888888888' + '\t', str(x) + ',', str(y) + '\t', time_str])
    return arr

if __name__ == '__main__':
    path = sys.argv[1]
    if not os.path.isfile(path):
        open(path, 'w').close()
    with open(path, 'a') as f:
        while True:
            arr = generator()
            for i in range(len(arr)):
                f.writelines(arr[i])
                f.write('\n')
            time.sleep(5)
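
The script takes the output path as its first argument and appends a burst of points every 5 seconds, so point it at the file Flume tails (assuming the script is saved as log.py):

python log.py /opt/log.txt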

7) Install/upgrade to Python 2.7 and pip on CentOS 6.9 (the script above needs numpy, which is installed via pip)

https://www.jianshu.com/p/ccf3ba6f6d1f

Finally, start the Tomcat server, and the data will be displayed on the page as a live heatmap.
