Passing Flume configuration via ZooKeeper

1. Start the ZooKeeper CLI

>zkCli.sh -server datanode1:2181

       1.1. Create a flume znode

             >create /flume

       1.2. List the znode

             >ls /flume

[netcat]

2. Save the following Flume configuration to aa.txt

    a1.sources=r1
    a1.sinks=k1
    a1.channels=c1

    a1.sources.r1.type=netcat
    a1.sources.r1.bind=localhost
    a1.sources.r1.port=4444

    a1.sinks.k1.type=logger

    a1.channels.c1.type=memory
    a1.channels.c1.capacity=1000
    a1.channels.c1.transactionCapacity=100
    a1.sources.r1.channels=c1
    a1.sinks.k1.channel=c1
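A Flume agent configuration file is plain Java properties text (one key=value per line), which is why its raw bytes can later be stored in a znode unchanged. As a minimal sketch independent of Flume itself (the class name `ParseFlumeConf` is hypothetical), the text above parses with `java.util.Properties`:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ParseFlumeConf {
    // Parse Flume-style key=value text into a Properties object.
    static Properties parse(String conf) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader(conf)); // properties syntax: one key=value per line
        return p;
    }

    public static void main(String[] args) throws IOException {
        // A fragment of the same text that aa.txt contains.
        String conf =
              "a1.sources=r1\n"
            + "a1.sources.r1.type=netcat\n"
            + "a1.sources.r1.port=4444\n";
        Properties p = parse(conf);
        System.out.println(p.getProperty("a1.sources.r1.type")); // netcat
        System.out.println(p.getProperty("a1.sources.r1.port")); // 4444
    }
}
```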
 

3. Run the following code to store the Flume configuration in a ZooKeeper znode

package hmr.jr.zk;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class TestFlume {
	public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
		// Connect to ZooKeeper; 5000 is the session timeout in milliseconds.
		// The watcher argument is null because we need no notifications here.
		ZooKeeper zk = new ZooKeeper("datanode1:2181", 5000, null);
		try {
			// Read the whole configuration file into a byte array.
			byte[] bytes = Files.readAllBytes(Paths.get("g:/file/aa.txt"));
			// Store the configuration as a persistent znode under /flume.
			String path = zk.create("/flume/a1", bytes, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
			System.out.println(path);
		} finally {
			zk.close();
		}
	}
}

 

4. View the Flume configuration stored on the znode from the ZooKeeper CLI

>get /flume/a1

    [zk: datanode1:2181(CONNECTED) 36] get /flume/a1
    a1.sources=r1
    a1.sinks=k1
    a1.channels=c1

    a1.sources.r1.type=netcat
    a1.sources.r1.bind=localhost
    a1.sources.r1.port=4444

    a1.sinks.k1.type=logger

    a1.channels.c1.type=memory
    a1.channels.c1.capacity=1000
    a1.channels.c1.transactionCapacity=100
    a1.sources.r1.channels=c1
    a1.sinks.k1.channel=c1

    cZxid = 0x27200000004
    ctime = Sat May 11 23:55:13 PDT 2019
    mZxid = 0x27200000004
    mtime = Sat May 11 23:55:13 PDT 2019
    pZxid = 0x27200000004
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 306
    numChildren = 0
5. Start Flume from the stored configuration

  1) If the configuration file lives in a local Linux directory, start the agent with:

>flume-ng agent -f /soft/apache-flume-1.8.0-bin/conf/simple.conf -n a1 -Dflume.root.logger=INFO,console

  2) To have Flume read its configuration from ZooKeeper instead:

>flume-ng agent -z datanode1:2181 -p /flume -n a1 -Dflume.root.logger=INFO,console

 

Open a new terminal to test:

> netstat -ano |grep 4444

tcp        0      0 ::ffff:127.0.0.1:4444       :::*                        LISTEN      off (0.00/0/0)
 

Send a message to port 4444 with the nc client:

>nc localhost 4444

For example, on the local machine:

[hadoop@master conf]$ nc localhost 4444
zhang
OK
 

The received event then appears in the flume-ng console:

19/05/12 00:50:42 INFO sink.LoggerSink: Event: { headers:{} body: 7A 68 61 6E 67                                  zhang }
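The logger sink prints the event body as a column of hex bytes next to a printable rendering. As a small self-contained sketch (the class name `HexBody` is hypothetical), that hex column decodes back to the original string:

```java
import java.nio.charset.StandardCharsets;

public class HexBody {
    // Decode a space-separated hex dump like the one the logger sink prints.
    static String decode(String hex) {
        String[] parts = hex.trim().split("\\s+");
        byte[] bytes = new byte[parts.length];
        for (int i = 0; i < parts.length; i++) {
            bytes[i] = (byte) Integer.parseInt(parts[i], 16);
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The body column from the log line above.
        System.out.println(decode("7A 68 61 6E 67")); // zhang
    }
}
```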
 

[avro]

Sources
-----------------------
1. avro source
       [conf/r_avro.conf]
       #component
       a1.sources=r1
       a1.channels=c1
       a1.sinks=k1

       #r1
       a1.sources.r1.type=avro
       a1.sources.r1.bind=0.0.0.0
       a1.sources.r1.port=4141

       #k1
       a1.sinks.k1.type=logger

       #c1
       a1.channels.c1.type=memory
 
       #bind
       a1.sources.r1.channels=c1
       a1.sinks.k1.channel=c1
2. Start Flume
    >flume-ng agent -f r_avro.conf -n a1 -Dflume.root.logger=INFO,console

3. Start avro-client to send data
    >flume-ng avro-client --help
    >flume-ng avro-client -F customers.txt -H localhost -p 4141
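With `-F`, avro-client sends each line of the file as one Flume event. A minimal sketch of that line-to-event-body mapping, with no network involved (the class name `LinesToEvents` is hypothetical, not part of Flume):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class LinesToEvents {
    // Turn file text into one event body (byte[]) per line,
    // mirroring how avro-client -F splits its input before sending.
    static List<byte[]> toEventBodies(String fileText) {
        List<byte[]> bodies = new ArrayList<>();
        for (String line : fileText.split("\n")) {
            bodies.add(line.getBytes(StandardCharsets.UTF_8));
        }
        return bodies;
    }

    public static void main(String[] args) {
        List<byte[]> bodies = toEventBodies("alice\nbob\ncarol");
        System.out.println(bodies.size() + " events"); // 3 events
    }
}
```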

 

[spooldir] rolling directory ingestion


1. r_spooldir.conf
    #component
    a1.sources=r1
    a1.channels=c1
    a1.sinks=k1

    #r1
    a1.sources.r1.type=spooldir
    a1.sources.r1.spoolDir=/soft/source/logs
    a1.sources.r1.fileHeader=true

    #k1
    a1.sinks.k1.type=logger

    #c1
    a1.channels.c1.type=memory

    #bind
    a1.sources.r1.channels=c1
    a1.sinks.k1.channel=c1

2. Create the logs directory

    >mkdir -p /soft/source/logs

3. Start the agent

    >flume-ng agent -f r_spooldir.conf -n a1 -Dflume.root.logger=INFO,console

4. Drop other log files into the logs directory and watch the events appear on the console
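The spooldir source treats files dropped into spoolDir as immutable; once a file is fully ingested it is renamed with a ".COMPLETED" suffix (the default deletePolicy). A self-contained sketch simulating that rename in a temp directory (the class name `SpoolDirDemo` is hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SpoolDirDemo {
    // Simulate the spooldir source's default completion behavior:
    // after ingesting a file, rename it with a ".COMPLETED" suffix.
    static Path markCompleted(Path file) throws IOException {
        Path done = file.resolveSibling(file.getFileName() + ".COMPLETED");
        return Files.move(file, done);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("spool");
        Path log = Files.write(dir.resolve("app.log"), "line1\n".getBytes());
        Path done = markCompleted(log);
        System.out.println(done.getFileName()); // app.log.COMPLETED
    }
}
```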
