Tail Latency

In production, latency is a (probability) distribution and is usually described by percentiles. Latency can double at the 75th percentile and be 100x higher beyond the 99th percentile.
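As a rough illustration (the log-normal sample below is made up, not measured data), here is how such percentiles can be computed and how far the tail can sit from the median:

```python
import random
import statistics

# Made-up heavy-tailed latency sample (milliseconds), purely for illustration.
random.seed(1)
latencies_ms = [random.lognormvariate(3.0, 1.2) for _ in range(100_000)]

# statistics.quantiles with n=1000 gives cut points at 0.1% granularity.
cuts = statistics.quantiles(latencies_ms, n=1000)
p50, p75, p99, p999 = cuts[499], cuts[749], cuts[989], cuts[998]
print(f"P50={p50:.0f}ms  P75={p75:.0f}ms  P99={p99:.0f}ms  P99.9={p999:.0f}ms")
```

For this particular sample, P75 is roughly twice the median and P99.9 is tens of times higher; real distributions can be even more skewed.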

What causes tail latency

  • Disk slowdown and aging. Disks just slow down from time to time for no apparent reason; The Tail at Store gives a more in-depth analysis. Disks may also degrade significantly as they age.
  • Timeouts and retries. Fault tolerance via retries is a common design pattern in distributed systems, but a single retry is enough to push the current request into the latency tail. Google SRE Book chapters 21 and 22 discuss this in detail, e.g. (see the deadline-propagation sketch after this list):
    • Reduce the remaining timeout budget and pass it down through each layer of the request processing chain.
    • Be aware of chained retry amplification (layer 1: 3 retries, layer 2: 3*3 retries, …).
  • Background tasks. Almost every service, from software down to hardware/firmware, has background tasks. A background task may temporarily slow down the world; the most notorious one is GC (garbage collection).
  • Overload. Customers may be sending too many or too large requests while upper-layer throttling is not working well. Overprovisioned customer VMs may compete with each other, resulting in a slow experience. A small piece of data may be extremely hot, e.g. many OS images are forked from a small shared base. A large request may peg your CPU/network/disk and make the others queue up. Or something simply went wrong, such as an infinite loop pinning your CPU.
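The "pass the remaining timeout budget down each layer" idea above can be sketched as deadline propagation: each layer receives an absolute deadline and forwards only whatever budget is left, so retries stop once the overall deadline is spent. The layer names and calls below are hypothetical, not from the SRE Book:

```python
import time

class DeadlineExceeded(Exception):
    pass

def remaining_budget(deadline_s):
    """How much of the end-to-end deadline is still left, in seconds."""
    return deadline_s - time.monotonic()

def layer2(request, timeout_s):
    # Hypothetical downstream call: it would again subtract its own overhead
    # and forward the remaining budget instead of using a fixed per-layer timeout.
    time.sleep(0.01)
    return {"request": request, "status": "ok"}

def layer1(request, deadline_s, max_retries=3):
    for attempt in range(max_retries):
        budget = remaining_budget(deadline_s)
        if budget <= 0:
            # No point starting (or retrying) work that cannot finish in time.
            raise DeadlineExceeded(f"budget exhausted after {attempt} attempts")
        try:
            return layer2(request, timeout_s=budget)
        except TimeoutError:
            continue  # retry only while the overall deadline still has budget
    raise DeadlineExceeded(f"gave up after {max_retries} attempts")

print(layer1("read blob-123", deadline_s=time.monotonic() + 0.2))
```

Chained amplification is contained because a lower layer that has already burned the budget fails fast instead of launching its own retries.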

Mitigating tail latency

Latency can be divided into the low, middle, and tail portions. A summary of techniques to control and mitigate latency:

  • Mitigating the low and middle portions: providing more resources, splitting and parallelizing tasks, eliminating head-of-line blocking, and caching all help. These are the usual techniques applied to scale-out distributed systems.
  • Mitigating the tail portion: the basic idea is hedging. Even after we have parallelized the service, the slowest instance determines when our request completes. You can model the combined latency distribution with probability math (see the formulas after this list).
    • Sending more requests than necessary and collecting only the fastest responses helps cut the tail. Send 2 instead of 1. Send 11 instead of 10 (e.g. a reconstruct read in 10-fragment erasure coding). Send a backup request once the 95th-percentile latency has elapsed (see the hedged-request sketch after this list).
    • Canary requests, i.e. send the normal request first and fall back to sending hedged requests if the canary does not finish within a reasonable time.
    • In general, smaller task partitions (micro-partitions) help achieve smoother latency percentiles.
    • Reduce head-of-line blocking. A few expensive queries can add latency to a large number of concurrent cheap queries; uniformly smaller task partitioning can help.
    • Handle timeouts
      • First attempt a non-blocking read (read but do not wait), then fall back to a best-effort read (read and wait up to a timeout).
      • When a timeout is detected, mark the related resource as known slow, and tell other requests to bypass this resource.
      • To pick a proper timeout value, set it to the 99.9th-percentile latency and adjust it dynamically. Arbitrary timeout values can be harmful.
    • Finer-grained scheduling, or even a management framework that balances latency and cost (e.g. Bing's Kwiken, also attached below).
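To make the "probability math" point above concrete: suppose a single replica is slower than some threshold t with probability p. A fan-out request that must wait for all n shards, versus a hedged request where only the fastest of k copies matters, behave roughly as follows (assuming independence):

```latex
P(\text{fan-out of } n \text{ shards all finish by } t) = (1 - p)^{n},
\qquad
P(\text{fastest of } k \text{ hedged copies finishes by } t) = 1 - p^{k}
```

For example, with p = 0.01 a fan-out of 100 finishes within t only about 0.99^100 ≈ 37% of the time, while hedging with k = 2 copies shrinks the slow probability from 1% to 0.01^2 = 0.01%.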
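A minimal sketch of the backup-request idea, assuming a 50 ms stand-in for the P95 latency and a made-up fetch_from_replica RPC:

```python
import concurrent.futures
import random
import time

# Shared worker pool; a per-call shutdown would block on the losing request.
_POOL = concurrent.futures.ThreadPoolExecutor(max_workers=8)
HEDGE_DELAY_S = 0.050   # assumed P95 latency of a single replica call

def fetch_from_replica(replica, key):
    """Stand-in for a real RPC; the latency here is simulated."""
    time.sleep(random.expovariate(1 / 0.02))  # ~20 ms mean with a long tail
    return f"{key}@{replica}"

def hedged_get(key, replicas=("replica-a", "replica-b")):
    """Ask the first replica; hedge to the second if the P95 budget is blown."""
    futures = [_POOL.submit(fetch_from_replica, replicas[0], key)]
    done, _ = concurrent.futures.wait(futures, timeout=HEDGE_DELAY_S)
    if not done:
        # Primary missed its P95 budget: fire a backup request and take
        # whichever of the two answers first.
        futures.append(_POOL.submit(fetch_from_replica, replicas[1], key))
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
    return next(iter(done)).result()

if __name__ == "__main__":
    print(hedged_get("blob-123"))
```

In a real system the losing request should also be cancelled or deprioritized so that hedging does not simply double the load.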

Monitoring

There are two kinds of monitoring metrics:

  • Single operation
  • Percentile statistics
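A minimal sketch of the two kinds of metrics (all names and thresholds are made up): raw per-operation records that can be alerted on individually, plus percentile statistics aggregated over them.

```python
import bisect

class LatencyMonitor:
    """Keeps per-operation records and answers percentile queries over them."""

    def __init__(self, slow_threshold_ms=1000.0):
        self.slow_threshold_ms = slow_threshold_ms
        self.operations = []     # single-operation metric: one record per request
        self._sorted_ms = []     # sorted latencies for percentile statistics

    def record(self, op_name, latency_ms):
        self.operations.append((op_name, latency_ms))
        bisect.insort(self._sorted_ms, latency_ms)
        if latency_ms > self.slow_threshold_ms:
            print(f"SLOW OPERATION: {op_name} took {latency_ms:.0f} ms")

    def percentile(self, p):
        """Nearest-rank percentile over everything recorded so far."""
        if not self._sorted_ms:
            return None
        idx = min(len(self._sorted_ms) - 1, int(len(self._sorted_ms) * p / 100))
        return self._sorted_ms[idx]

monitor = LatencyMonitor()
for ms in (12, 15, 11, 980, 14, 2500, 13):
    monitor.record("disk_read", ms)
print("P50:", monitor.percentile(50), "ms;", "P99:", monitor.percentile(99), "ms")
```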

Monitoring should be able to:

  • Provide a trace id that can be followed from the user request entry point down to hardware operations (see the sketch after this list)
  • Cover breakdowns at each level
  • Cover the places where problems are likely to occur
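A minimal sketch of a trace id that follows a request from the entry point down to a (simulated) hardware operation; the layer names and functions are hypothetical.

```python
import contextvars
import time
import uuid

# The trace id follows the request through every layer without being passed
# explicitly in each function signature.
trace_id = contextvars.ContextVar("trace_id", default="-")

def log(layer, message, start):
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"[{trace_id.get()}] {layer}: {message} ({elapsed_ms:.1f} ms)")

def hardware_read(block):
    start = time.monotonic()
    time.sleep(0.002)                      # stand-in for a real disk read
    log("disk", f"read block {block}", start)

def storage_layer(key):
    start = time.monotonic()
    hardware_read(block=hash(key) % 1024)
    log("storage", f"get {key}", start)

def handle_user_request(key):
    trace_id.set(uuid.uuid4().hex[:8])     # assigned once at the entry point
    start = time.monotonic()
    storage_layer(key)
    log("frontend", f"request for {key} done", start)

handle_user_request("blob-123")
```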

Several aspects need to be monitored:

  • Errors directly related to failures, e.g. VM stop/restart
  • Timeout error counts and automatic throttling that directly affect user experience
  • Operation slowdowns
  • Typical hardware performance, such as CPU, network, and disk
  • Tracing from user entry, with breakdowns at each level, all the way down to the hardware
