Kafka 0.8

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.8+Quick+Start

0.8 is a huge step forward in functionality from 0.7.x

 

This release includes the following major features:

  • Partitions are now replicated. Partitions now support replicas, so a broker failure no longer causes data loss.
    Previously the topic would remain available in the case of server failure, but individual partitions within that topic could disappear when the server hosting them stopped. If a broker failed permanently any unconsumed data it hosted would be lost.
    Starting with 0.8 all partitions have a replication factor and we get the prior behavior as the special case where replication factor = 1.
    Replicas have a notion of committed messages and guarantee that committed messages won't be lost as long as at least one replica survives. Replica logs are byte-for-byte identical across replicas.
  • Producer and consumer are replication aware. The producer and consumer now understand replicas.
    When running in sync mode, by default, the producer send() request blocks until the message sent is committed to the active replicas. As a result the sender can depend on the guarantee that a message sent will not be lost.
    Latency sensitive producers have the option to tune this to block only on the write to the leader broker or to run completely async if they are willing to forsake this guarantee.
    The consumer will only see messages that have been committed. 
  • The consumer has been moved to a "long poll" model where fetch requests block until there is data available.
    This enables low latency without frequent polling. In general end-to-end message latency from producer to broker to consumer of only a few milliseconds is now possible.
  • We now retain the key used in the producer for partitioning with each message, so the consumer knows the partitioning key.
    The key the producer used for partitioning is retained with each message, so the consumer can see it.
  • We have moved from directly addressing messages with a byte offset to using a logical offset (i.e. 0, 1, 2, 3...). Logical offsets replace the earlier physical (byte) offsets.
    The offset still works exactly the same - it is a monotonically increasing number that represents a point-in-time in the log - but now it is no longer tied to byte layout.
    This has several advantages:
    (1) it is aesthetically nice,
    (2) it makes it trivial to calculate the next offset or to traverse messages in reverse order,
    (3) it fixes a corner case interaction between consumer commit() and compressed message batches. Data is still transferred using the same efficient zero-copy mechanism as before. 
  • We have removed the zookeeper dependency from the producer and replaced it with a simple cluster metadata api.
  • We now support multiple data directories (i.e. a JBOD setup).
  • We now expose both the partition and the offset for each message in the high-level consumer.
    The high-level consumer now exposes the concrete partition and offset of each message.
  • We have substantially improved our integration testing, adding a new integration test framework and over 100 distributed regression and performance test scenarios that we run on every checkin.
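The committed-message guarantee described above can be sketched in a few lines of Python. This is an illustrative model only, not Kafka's implementation: the `Partition` class, the `high_watermark` method, and the replica lists are hypothetical names standing in for the leader/follower logs and the rule that consumers only read messages present on all in-sync replicas.

```python
class Partition:
    """Toy model of a replicated partition (not Kafka's actual code)."""

    def __init__(self, replication_factor):
        # Replica 0 acts as the leader in this sketch; the rest are followers.
        self.replicas = [[] for _ in range(replication_factor)]

    def append(self, message):
        # A produce request is first written to the leader's log.
        self.replicas[0].append(message)

    def replicate(self):
        # Followers copy from the leader, so logs stay byte-for-byte identical.
        leader = self.replicas[0]
        for follower in self.replicas[1:]:
            follower.extend(leader[len(follower):])

    def high_watermark(self):
        # A message is committed once every in-sync replica has it,
        # i.e. up to the shortest replica log.
        return min(len(r) for r in self.replicas)

    def consume(self, offset):
        # Consumers only ever see committed messages.
        if offset >= self.high_watermark():
            return None  # written to the leader, but not committed yet
        return self.replicas[0][offset]

p = Partition(replication_factor=3)
p.append(b"m0")
print(p.consume(0))   # None: on the leader only, not yet committed
p.replicate()
print(p.consume(0))   # b'm0': now on all replicas, hence committed
```

This also shows why a sync producer that waits for commit can rely on the message surviving: by the time `send()` returns, losing any single replica still leaves a copy.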

 

In my view, the main changes are:

1. Better broker safety. Under the old design, a broker failure simply meant data loss, which was hard to justify, so the replica feature was a must.

2. Logical offsets. The advantages are listed above, but when physical offsets were used, a similar list of advantages was given for them.
    It is really a balance between efficiency and usability: physical offsets were originally chosen in pursuit of efficiency.
    Now, because physical offsets turned out to be too cumbersome to use, the design compromises and switches to logical offsets. Essentially nothing changes; it just adds a mapping from logical offsets to physical offsets, so that physical offsets become transparent to the user.
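The logical-to-physical mapping mentioned above can be sketched with a sparse index, which is roughly the kind of structure that makes the byte layout transparent to the user. This is an illustrative sketch; `SegmentIndex` and its methods are hypothetical names, not Kafka's actual on-disk index format.

```python
import bisect

class SegmentIndex:
    """Sparse index from logical offset to byte position within a log segment.

    Only some offsets are indexed; to serve a read, find the nearest indexed
    offset at or below the target and scan forward from its byte position.
    """

    def __init__(self):
        self.offsets = []    # sorted logical offsets that have index entries
        self.positions = []  # byte position of each indexed offset

    def add(self, offset, position):
        # Entries are appended in increasing offset order, as the log grows.
        self.offsets.append(offset)
        self.positions.append(position)

    def lookup(self, target):
        # Largest indexed offset <= target, plus its byte position.
        i = bisect.bisect_right(self.offsets, target) - 1
        if i < 0:
            raise KeyError("offset below the start of this segment")
        return self.offsets[i], self.positions[i]

index = SegmentIndex()
index.add(0, 0)
index.add(100, 4096)
index.add(200, 8192)
print(index.lookup(150))  # (100, 4096): scan from byte 4096 to reach offset 150
```

Keeping the index sparse is the compromise in a nutshell: lookups cost a short forward scan, but the index stays small and logical offsets stay cheap.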

3. Better Python support: kafka-python

Pure Python implementation with full protocol support. Consumer and Producer implementations included, GZIP and Snappy compression supported.

Maintainer: David Arthur
License: Apache v.2.0

https://github.com/mumrah/kafka-python

 

Kafka Replication High-level Design

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Replication

See also: Apache Kafka Replication Design – High level
