Kafka Summary Series (5)

Implementation

Log

  • Suppose a topic "my_topic" has two partitions. Under the log directory specified by the "log.dirs" setting there are then two folders: my_topic_0 and my_topic_1.
  • Each folder contains files of two types, .index and .log; each .log file is named after the offset of the first message it contains, and no log file may exceed the configured maximum size.
  • A log file consists of a sequence of "log entries".
  • log entry: a 4-byte int giving the message length + a 1-byte magic value + a 4-byte CRC checksum + an n-byte message payload (see the sketch after this list).
  • Each message is uniquely identified by a 64-bit (8-byte) offset, which also gives its position within the partition.
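
As a rough illustration of that entry layout (a sketch, not Kafka's actual code; the class name is made up, the length field is assumed to cover magic + CRC + payload, and the CRC is assumed to cover the payload only):

    import java.nio.ByteBuffer;
    import java.util.zip.CRC32;

    public class LogEntryReader {
        // Reads one entry laid out as: 4-byte length, 1-byte magic,
        // 4-byte CRC, n-byte payload; verifies the checksum.
        public static byte[] readPayload(ByteBuffer buf) {
            int length = buf.getInt();                    // bytes following the length field
            byte magic = buf.get();                       // format/version marker
            long storedCrc = buf.getInt() & 0xFFFFFFFFL;  // CRC stored as unsigned 32-bit
            byte[] payload = new byte[length - 1 - 4];    // minus magic and CRC bytes
            buf.get(payload);

            CRC32 crc = new CRC32();
            crc.update(payload);                          // assumed CRC scope: payload only
            if (crc.getValue() != storedCrc) {
                throw new IllegalStateException("CRC mismatch: corrupt log entry");
            }
            return payload;
        }
    }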

Writes

  • The producer appends messages to the log; when a log file reaches the configured size (log.segment.bytes), a new segment file is rolled.
  • log.flush.interval.messages: the number of messages to accumulate before forcing the OS to flush them to disk in a batch.
  • log.flush.interval.ms: the time interval between flushes to disk; a flush is triggered as soon as either the message count or the time condition is met (see the sketch after this list).
  • Setting these two values too low causes frequent disk flushes and hurts performance; the benefit is durability: if the system crashes, little data is lost.
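
The "whichever comes first" flush rule can be sketched like this (a hypothetical class; the two fields stand in for log.flush.interval.messages and log.flush.interval.ms):

    public class FlushPolicy {
        private final long flushIntervalMessages; // cf. log.flush.interval.messages
        private final long flushIntervalMs;       // cf. log.flush.interval.ms
        private long unflushedMessages = 0;
        private long lastFlushTimeMs = System.currentTimeMillis();

        public FlushPolicy(long flushIntervalMessages, long flushIntervalMs) {
            this.flushIntervalMessages = flushIntervalMessages;
            this.flushIntervalMs = flushIntervalMs;
        }

        // Called after each append; returns true when the accumulated
        // messages should be forced to disk (e.g., via FileChannel.force).
        public boolean shouldFlush() {
            unflushedMessages++;
            long elapsed = System.currentTimeMillis() - lastFlushTimeMs;
            if (unflushedMessages >= flushIntervalMessages || elapsed >= flushIntervalMs) {
                unflushedMessages = 0;
                lastFlushTimeMs = System.currentTimeMillis();
                return true; // either threshold reached: flush now
            }
            return false;
        }
    }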

Reads

  • A read specifies an offset and a max-chunk-size, the maximum number of bytes to consume in one fetch.
  • In theory max-chunk-size should be larger than any single message, but if an abnormally large message shows up, the read is retried multiple times, doubling the buffer size each time, until the message is read successfully.
  • A maximum message size and a maximum buffer size should be configurable so that the broker can reject messages that are too large.
  • The actual consumption process: first locate the log segment file from the offset, then read the message out of that file. The search is done as a simple binary search variation against an in-memory range maintained for each file (see the sketch after the format listings below).
  • Format of messages sent to the consumer:
  • MessageSetSend (fetch result):
    total length : 4 bytes
    error code : 2 bytes
    message 1 : x bytes
    ...
    message n : x bytes

  • MultiMessageSetSend (multiFetch result):
    total length : 4 bytes
    error code : 2 bytes
    messageSetSend 1
    ...
    messageSetSend n
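
A sketch of the read path just described: a binary search over the segments' base offsets picks the file, and the fetch retries with a doubled buffer until one complete message fits. All names are illustrative, not Kafka internals; the 4-byte length prefix is the one from the log entry format above.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class SegmentLookup {
        // baseOffsets: first offsets of the segment files, sorted ascending
        // (each .log file is named after its first message's offset).
        public static int findSegment(long[] baseOffsets, long target) {
            int lo = 0, hi = baseOffsets.length - 1, found = -1;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                if (baseOffsets[mid] <= target) { found = mid; lo = mid + 1; }
                else { hi = mid - 1; }
            }
            return found; // index of last segment whose base offset <= target
        }

        // Fetch from filePosition; if the buffer cannot hold one complete
        // message, double it and retry, up to maxBufferSize.
        public static ByteBuffer fetch(FileChannel log, long filePosition,
                                       int chunkSize, int maxBufferSize) throws IOException {
            int size = chunkSize;
            while (true) {
                ByteBuffer buf = ByteBuffer.allocate(size);
                int bytesRead = log.read(buf, filePosition);
                if (bytesRead <= 0) {
                    throw new IOException("no data at position " + filePosition);
                }
                buf.flip();
                if (buf.remaining() >= 4 && buf.getInt(0) + 4 <= buf.remaining()) {
                    return buf; // at least one full length-prefixed message present
                }
                if (size >= maxBufferSize) {
                    throw new IOException("message exceeds max buffer size " + maxBufferSize);
                }
                size = Math.min(size * 2, maxBufferSize); // double and retry
            }
        }
    }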

Deletes

So that production (writing to log files) can continue normally while log files are being deleted, a copy-on-write technique is used for the segment list (a minimal illustration follows).
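
A minimal illustration of the copy-on-write idea using the JDK's CopyOnWriteArrayList (Kafka's segment list is its own implementation): every mutation copies the backing array, so in-flight readers keep a consistent snapshot and are never blocked by a delete.

    import java.util.concurrent.CopyOnWriteArrayList;

    public class SegmentList {
        // Each element stands for one log segment file (represented here
        // by its base offset only, for brevity).
        private final CopyOnWriteArrayList<Long> segments = new CopyOnWriteArrayList<>();

        public void addSegment(long baseOffset) {
            segments.add(baseOffset); // writer appends a newly rolled segment
        }

        public void deleteOldSegments(long minRetainedOffset) {
            // Mutation copies the underlying array; readers already iterating
            // keep seeing the old snapshot and are never blocked.
            segments.removeIf(base -> base < minRetainedOffset);
        }

        public Iterable<Long> view() {
            return segments; // iterator over a consistent snapshot
        }
    }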

Distribution

  1. Consumer Offset Tracking: Kafka lets all consumers in a group commit their offsets to a designated broker, called the offset manager. When the offset manager receives an OffsetCommitRequest, it writes the commit to a compacted topic named "__consumer_offsets"; only after all replicas of that topic have received the message does it send the consumer a successful-commit response. If the commit fails, the consumer retries. The offset manager keeps an in-memory table mapping offsets to consumers, so it can answer consumer requests quickly (a sketch of such a cache follows this list).
  2. In earlier Kafka versions, offsets were stored in ZooKeeper; those offsets can be migrated from ZooKeeper to Kafka.
  3. After a broker starts, it registers an ephemeral node in ZooKeeper as a child of /brokers/ids (the node disappears when the broker shuts down or crashes). Each broker maps to one id; the broker can be moved to another physical host, but its externally visible id does not change.
  4. Consumer registration algorithm: after a consumer starts, it performs the following steps:
    • Register a new node under its consumer group;
    • Register a watcher on changes under the consumer id registry (similar to a database trigger) to monitor the addition and removal of consumer nodes, each change triggering a rebalance within the group (when a consumer dies, a rebalance lets the remaining consumers take over its partitions; adding a consumer triggers a rebalance as well);
    • Register a watcher on changes under the broker id registry to monitor the addition and removal of brokers, each change triggering a rebalance of all consumers in all consumer groups;
    • If the consumer creates a message stream using a topic filter, it also registers a watch on changes (new topics being added) under the broker topic registry. (Each change will trigger re-evaluation of the available topics to determine which topics are allowed by the topic filter. A new allowed topic will trigger rebalancing among all consumers within the consumer group.)
    • Force itself to rebalance within its consumer group.
  5. Consumer rebalancing algorithm: decides which consumer consumes which partition of a topic, with the goal of letting each consumer contact as few brokers as possible. For a consumer Ci (a code rendering follows the list):
      1. For each topic T that Ci subscribes to
      2. let PT be all partitions producing topic T
      3. let CG be all consumers in the same group as Ci that consume topic T
      4. sort PT (so partitions on the same broker are clustered together)
      5. sort CG
      6. let i be the index position of Ci in CG and let N = size(PT)/size(CG)
      7. assign partitions from i*N to (i+1)*N - 1 to consumer Ci
      8. remove current entries owned by Ci from the partition owner registry
      9. add newly assigned partitions to the partition owner registry
      (we may need to re-try this until the original partition owner releases its ownership)
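
The assignment steps above translate almost line-for-line into code. The following is a minimal sketch, not Kafka's implementation; the class and method names are made up, and like the pseudocode it uses integer division for N, so when the partition count does not divide evenly the remainder is left unassigned (real implementations spread it across the first consumers).

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class RangeAssignment {
        // Returns the partitions of one topic that consumer ci should own.
        public static List<String> partitionsFor(String ci,
                                                 List<String> partitionsOfTopic,
                                                 List<String> consumersOfGroup) {
            List<String> pt = new ArrayList<>(partitionsOfTopic);
            List<String> cg = new ArrayList<>(consumersOfGroup);
            Collections.sort(pt); // partitions on the same broker cluster together
            Collections.sort(cg);
            int i = cg.indexOf(ci);          // position of Ci among sorted consumers
            int n = pt.size() / cg.size();   // N = size(PT) / size(CG), integer division
            int from = Math.min(i * n, pt.size());
            int to = Math.min((i + 1) * n, pt.size());
            return new ArrayList<>(pt.subList(from, to)); // slice [i*N, (i+1)*N - 1]
        }
    }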
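For the offset tracking described in item 1, a hypothetical version of the offset manager's in-memory table might look like the sketch below; the class and key layout are invented for illustration, and durability comes from the compacted __consumer_offsets topic, not from this cache.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class OffsetCache {
        // (group, topic, partition) -> last committed offset.
        private final Map<String, Long> offsets = new ConcurrentHashMap<>();

        private static String key(String group, String topic, int partition) {
            return group + "/" + topic + "/" + partition;
        }

        // Called once a commit has been fully replicated in __consumer_offsets.
        public void commit(String group, String topic, int partition, long offset) {
            offsets.put(key(group, topic, partition), offset);
        }

        // Answers an offset fetch straight from memory; -1 means "no commit yet".
        public long fetch(String group, String topic, int partition) {
            return offsets.getOrDefault(key(group, topic, partition), -1L);
        }
    }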




