Pinot Realtime And ETL

Tracking:

  1. Install the Schema Registry:

    This service registers the schemas that Pinot needs, so every component that requires a schema can simply call this service.
    
    http://docs.confluent.io/1.0/installation.html#installation
    
    Change the compatibility level:
    curl -X PUT -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"compatibility": "FORWARD"}' http://localhost:8081/config
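    
    To verify, read the global compatibility level back (GET /config is part of the same REST API):
    $ curl -X GET -i http://localhost:8081/config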
    
    How to start it (from the Schema Registry install directory):
     $ bin/schema-registry-start config/schema-registry.properties &
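    
    Once it is running, the REST API should respond; a fresh install returns an empty subject list:
    $ curl -X GET -i http://localhost:8081/subjects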
    
    Create the Avro schemas in the schema-registry service:
    
    1. Register a new version of a schema under the subject "WechatActiveUsersV3"
    $ curl -X POST -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\": \"string\"}"}' http://localhost:8081/subjects/WechatActiveUsersV3/versions
    
    2. List all subjects
    $ curl -X GET -i http://localhost:8081/subjects
    
    3. List all schema versions registered under the subject "WechatActiveUsersV3"
    $ curl -X GET -i http://localhost:8081/subjects/WechatActiveUsersV3/versions
    
    4. Fetch the most recently registered schema under subject "Kafka-value"
    $ curl -X GET -i http://localhost:8081/subjects/Kafka-value/versions/latest
    
    5. Update compatibility requirements globally
    $ curl -X PUT -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"compatibility": "NONE"}' http://localhost:8081/config
    
    6. Update compatibility requirements under the subject "Kafka-value"
    $ curl -X PUT -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"compatibility": "BACKWARD"}' http://localhost:8081/config/Kafka-value
    
    Run the Pinot index creation jobs:
    
    $ gobblin-mapreduce.sh --conf job-conf/WechaActiveUserV3.pull --jars target/parllay-pipeline-gobblin-1.0.0.jar
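    
    A minimal sketch of what the .pull file can contain, following Gobblin's stock Kafka-to-HDFS quickstart (the topic, broker, and writer settings below are assumptions, not this project's actual config):
    
    $ cat job-conf/WechaActiveUserV3.pull
    job.name=WechatActiveUsersV3
    job.group=pinot-etl
    # pull every message from the whitelisted Kafka topic
    source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
    extract.namespace=gobblin.extract.kafka
    topic.whitelist=WechatActiveUsersV3
    bootstrap.with.offset=earliest
    kafka.brokers=master:9092
    # write the records out to HDFS as-is
    writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
    writer.file.path.type=tablename
    writer.destination.type=HDFS
    writer.output.format=txt
    data.publisher.type=gobblin.publisher.BaseDataPublisher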
    
    $ hadoop jar /opt/pinot-0.016-pkg/lib/pinot-hadoop-0.016.jar SegmentCreation /path/jobs/WechatActiveUsersV3-Job.properties
    
    $ hadoop jar /opt/pinot-0.016-pkg/lib/pinot-hadoop-0.016.jar SegmentTarPush /path/*-Job.properties 
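    
    Both Hadoop jobs read the same properties file. A minimal sketch, using the key names from the Pinot Hadoop batch quickstart of that era (every path, table name, and host below is an assumption to adapt):
    
    $ cat /path/jobs/WechatActiveUsersV3-Job.properties
    # HDFS directory holding the Avro files Gobblin wrote
    path.to.input=/gobblin/output/WechatActiveUsersV3
    # HDFS directory where SegmentCreation leaves the segment tars
    path.to.output=/pinot/segments/WechatActiveUsersV3
    # Pinot schema file for the table
    path.to.schema=/pinot/schemas/WechatActiveUsersV3.json
    segment.table.name=WechatActiveUsersV3
    # controller host/port that SegmentTarPush uploads to
    push.to.hosts=slave1
    push.to.port=9001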
    
    Create the Avro metadata that Pinot consumes:
    $ curl -X POST -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema":"{\"namespace\":\"com.parllay.wechat\",\"type\":\"record\",\"name\":\"wechatActiveUserMetrics\",\"fields\":[{\"name\":\"platId\",\"type\":\"string\"},{\"name\":\"openId\",\"type\":\"string\"},{\"name\":\"year\",\"type\":\"int\"},{\"name\":\"month\",\"type\":\"int\"},{\"name\":\"week\",\"type\":\"int\"},{\"name\":\"day\",\"type\":\"int\"},{\"name\":\"hour\",\"type\":\"int\"},{\"name\":\"counter\",\"type\":\"int\"},{\"name\":\"eventTime\",\"type\":\"long\"}]}"}' http://schema.registry:8801/subjects/wechatActiveUserMetrics/versions
    
    $ curl -X POST -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema":"{\"namespace\":\"com.parllay.wechat\",\"type\":\"record\",\"name\":\"wechatEventMetrics\",\"fields\":[{\"name\":\"platId\",\"type\":\"string\"},{\"name\":\"event\",\"type\":\"string\"},{\"name\":\"eventKey\",\"type\":\"string\"},{\"name\":\"eventValue\",\"type\":\"string\"},{\"name\":\"eventKey1\",\"type\":\"string\"},{\"name\":\"eventValue1\",\"type\":\"string\"},{\"name\":\"eventKey2\",\"type\":\"string\"},{\"name\":\"eventValue2\",\"type\":\"string\"},{\"name\":\"countryId\",\"type\":\"int\"},{\"name\":\"provinceId\",\"type\":\"int\"},{\"name\":\"cityId\",\"type\":\"int\"},{\"name\":\"realTimeCountryId\",\"type\":\"int\"},{\"name\":\"realTimeProvinceId\",\"type\":\"int\"},{\"name\":\"realTimeCityId\",\"type\":\"int\"},{\"name\":\"gender\",\"type\":\"int\"},{\"name\":\"groupId\",\"type\":\"int\"},{\"name\":\"languageId\",\"type\":\"int\"},{\"name\":\"tags\",\"type\":{\"type\":\"array\",\"items\":\"int\"}},{\"name\":\"year\",\"type\":\"int\"},{\"name\":\"month\",\"type\":\"int\"},{\"name\":\"week\",\"type\":\"int\"},{\"name\":\"day\",\"type\":\"int\"},{\"name\":\"hour\",\"type\":\"int\"},{\"name\":\"counter\",\"type\":\"int\"},{\"name\":\"eventTime\",\"type\":\"long\"}]}"}' http://schema.registry:8801/subjects/wechatEventMetrics/versions
    
    $ curl -X POST -i -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema":"{\"namespace\":\"com.parllay.wechat\",\"type\":\"record\",\"name\":\"wechatUserMetrics\",\"fields\":[{\"name\":\"platId\",\"type\":\"string\"},{\"name\":\"openId\",\"type\":\"string\"},{\"name\":\"event\",\"type\":\"string\"},{\"name\":\"eventKey\",\"type\":\"string\"},{\"name\":\"eventValue\",\"type\":\"string\"},{\"name\":\"eventKey1\",\"type\":\"string\"},{\"name\":\"eventValue1\",\"type\":\"string\"},{\"name\":\"eventKey2\",\"type\":\"string\"},{\"name\":\"eventValue2\",\"type\":\"string\"},{\"name\":\"countryId\",\"type\":\"int\"},{\"name\":\"provinceId\",\"type\":\"int\"},{\"name\":\"cityId\",\"type\":\"int\"},{\"name\":\"realTimeCountryId\",\"type\":\"int\"},{\"name\":\"realTimeProvinceId\",\"type\":\"int\"},{\"name\":\"realTimeCityId\",\"type\":\"int\"},{\"name\":\"gender\",\"type\":\"int\"},{\"name\":\"groupId\",\"type\":\"int\"},{\"name\":\"languageId\",\"type\":\"int\"},{\"name\":\"tags\",\"type\":{\"type\":\"array\",\"items\":\"int\"}},{\"name\":\"year\",\"type\":\"int\"},{\"name\":\"month\",\"type\":\"int\"},{\"name\":\"week\",\"type\":\"int\"},{\"name\":\"day\",\"type\":\"int\"},{\"name\":\"hour\",\"type\":\"int\"},{\"name\":\"counter\",\"type\":\"int\"},{\"name\":\"eventTime\",\"type\":\"long\"}]}"}' http://schema.registry:8801/subjects/wechatUserMetrics/versions
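    
    To confirm all three subjects registered, fetch the latest version of each:
    
    $ for s in wechatActiveUserMetrics wechatEventMetrics wechatUserMetrics; do curl -s http://schema.registry:8801/subjects/$s/versions/latest; echo; done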
    
  2. Install the Hadoop cluster:

     Start the service (hadoop-2.7.1).
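    
     A typical start sequence from $HADOOP_HOME (assuming a standard HDFS + YARN layout):
    
     $ sbin/start-dfs.sh
     $ sbin/start-yarn.sh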
    
  3. Install the Spark cluster:

     Start the service (spark-2.0.1).
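    
     For a standalone cluster one script from $SPARK_HOME does it (assuming standalone mode rather than running on YARN):
    
     $ sbin/start-all.sh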
    
  4. Install Azkaban:
     
    Start the service; see my previous article for the detailed installation steps:
    http://www.jianshu.com/p/fd187cb15674

  5. Build Pinot:

     Start the controller, broker, and server:
    
     bin/start-server.sh -zkAddress=master:2181 -clusterName=pinot-fht-cluster-v2 -serverHost=slave1 -serverPort=8098 -dataDir=/opt/pinot-0.016-pkg/data -segmentDir=/opt/pinot-0.016-pkg/segment &
    
     bin/start-broker.sh -zkAddress=master:2181 -clusterName=pinot-fht-cluster-v2 -brokerHost=slave1 -brokerPort=8099 &
    
     bin/start-controller.sh -zkAddress=master:2181 -clusterName=pinot-fht-cluster-v2 -controllerHost=slave1 -controllerPort=9001 -dataDir=/pinot-0.016-pkg/controller &
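    
     Quick sanity checks once all three are up (hosts and ports taken from the commands above):
    
     $ curl -i http://slave1:9001/          # controller web console should answer
     $ echo stat | nc master 2181           # ZooKeeper "stat" should show the Pinot components connected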
    
  6. Build Gobblin:

    Gobblin exports data from Kafka to HDFS; each run records its offset and watermark in ZooKeeper, and each run processes the range from the previous offset up to the time the Gobblin job started.
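    
    Since the offsets sit in ZooKeeper, the ZooKeeper shell that ships with Kafka is one way to poke at them; listing the root znodes is a safe starting point (the exact znode layout for the offsets depends on the deployment, so no path is assumed here):
    
    $ bin/zookeeper-shell.sh master:2181 ls /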
    
  7. Install the Kafka cluster

     Start the kafka_2.11-0.9.0.1 cluster and create the topics.
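    
     The stock topic tool creates them; replication and partition counts below are assumptions to adjust:
    
     $ bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 2 --partitions 4 --topic wechatActiveUserMetrics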
    
  1. Create the tracking schema, table, etc.

  2. Submit the schema to the Schema Registry service.

  3. Produce data to the Kafka cluster.
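
     For a quick smoke test the console producer that ships with kafka_2.11-0.9.0.1 is enough (broker host is an assumption; real traffic must be Avro-encoded to match the registered schemas):

     $ bin/kafka-console-producer.sh --broker-list master:9092 --topic wechatActiveUserMetrics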

  4. Verify that Pinot receives the data, using the web console.
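
     Besides the web console, a PQL count through the broker started above is a quick check (endpoint shape from classic Pinot's broker REST API; the table name is an assumption):

     $ curl -X POST -d '{"pql":"select count(*) from wechatActiveUserMetrics"}' http://slave1:8099/query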

  5. Generate the offline data.

  6. Test the end-to-end data flow.
