boost.asio: deadline_timer Source Code Analysis

Table of Contents

    • Classes related to deadline_timer
    • Reading the source through an example program
      • Synchronous call: wait
      • Asynchronous call: async_wait
    • Asynchronous timer example
      • Async timer flow under gdb

Classes related to deadline_timer

deadline_timer is, as everyone knows, one of asio's core timers; it supports both synchronous and asynchronous expiry. What it can do and how to use it are not covered here. Instead, this article starts from deadline_timer's wait and async_wait and walks through how deadline_timer is implemented.
First, the rough structure of deadline_timer. deadline_timer is really just an alias; its actual name is basic_deadline_timer:

/// Typedef for the typical usage of timer. Uses a UTC clock.
typedef basic_deadline_timer<boost::posix_time::ptime> deadline_timer;

template <typename Time,
    typename TimeTraits = asio::time_traits<Time>
    ASIO_SVC_TPARAM_DEF2(= deadline_timer_service<Time, TimeTraits>)>
class basic_deadline_timer
  : ASIO_SVC_ACCESS basic_io_object<ASIO_SVC_T>

# define ASIO_SVC_T detail::deadline_timer_service<TimeTraits>

The Time template parameter is the type used to represent time. We won't dig into it here; just know that it is what handles time values.
basic_deadline_timer inherits from basic_io_object.
deadline_timer_service is deadline_timer's service class. As usual in asio, anything that wants to get work done inside io_service has to go through a service class; put simply, the member functions of basic_deadline_timer all just forward to the interfaces of deadline_timer_service.

basic_io_object actually holds a reference to a deadline_timer_service plus another data object (I am not sure how best to name it, so let's just call it the data object).
In the source below, the IoObjectService template parameter is deadline_timer_service.

template <typename IoObjectService, bool Movable = detail::service_has_move<IoObjectService>::value>
class basic_io_object
{
public:
  /// The type of the service that will be used to provide I/O operations.
  typedef IoObjectService service_type;

  /// The underlying implementation type of I/O object.  reactive_socket_service::implementation_type
  typedef typename service_type::implementation_type implementation_type;

protected:
  /// Construct a basic_io_object.
  /**
   * Performs:
   * @code get_service().construct(get_implementation()); @endcode
   */
  explicit basic_io_object(asio::io_context& io_context)
    : service_(asio::use_service<IoObjectService>(io_context))
  {
    service_.construct(implementation_);
  }
  //...
private:
  basic_io_object(const basic_io_object&);
  basic_io_object& operator=(const basic_io_object&);

  // The service associated with the I/O object.
  service_type& service_;

  /// The underlying implementation of the I/O object.
  // The data object.
  // The underlying implementation. (The original annotations refer to MongoDB's annotated asio:
  // for a socket this corresponds to stream_protocol, the fd and its epoll-related state,
  // see TransportLayerASIO::setup; for our timer it holds the expiry and queue entry.)
  implementation_type implementation_;
};

implementation_type as defined in deadline_timer_service:

  // The implementation type of the timer. This type is dependent on the
  // underlying implementation of the timer service.
  struct implementation_type
    : private asio::detail::noncopyable
  {
    time_type expiry;
    bool might_have_pending_waits;
    typename timer_queue<Time_Traits>::per_timer_data timer_data;
  };

per_timer_data may not be obvious at first.
After deadline_timer calls async_wait, an element is stored into epoll_reactor's timer queue, and that element's type is per_timer_data. The epoll_reactor itself is the trigger; for reasons of space it is not covered here, just know that it is roughly a wrapper around epoll (roughly, because only part of the functionality is similar; the two differ quite a lot). To make this concrete, here is the source:

//Implementation of the timer_queue_base interface; the timer_queue_set.first_ member is of this type
class timer_queue
  : public timer_queue_base
{
public:
  // The time type.
  //posix_time::ptime
  typedef typename Time_Traits::time_type time_type;

  // The duration type.
  typedef typename Time_Traits::duration_type duration_type;

  // Per-timer data.   //timer_queue.timers_
  class per_timer_data
  {
  public:
    per_timer_data() :
      heap_index_((std::numeric_limits<std::size_t>::max)()),
      next_(0), prev_(0)
    {
    }

  private:
    friend class timer_queue;

    // The operations waiting on the timer.
    //the op operations to run when this timer's deadline is reached
    op_queue<wait_op> op_queue_;

    // The index of the timer in the heap.
    std::size_t heap_index_;

    // Pointers to adjacent timers in a linked list.
    //links the timers together into a doubly-linked list, inserted at the head of timers_
    per_timer_data* next_;
    per_timer_data* prev_;
  };

private:
    //...

    // The head of a linked list of all active timers.
    //all active timers are linked together through this (doubly-linked) list
    per_timer_data* timers_;

    //each heap entry has two members: one records the time, the other points to the timer's private data
    struct heap_entry
    {
        // The time when the timer should fire.
        //time_traits.hpp defines typedef boost::posix_time::ptime time_type;
        //records the point in time at which the timer expires
        time_type time_; //i.e. posix_time::ptime

        // The associated timer with enqueued operations.
        //records this timer's private data
        per_timer_data* timer_;
    };

    // The heap of timers, with the earliest timer at the front.
    //each timer has a corresponding heap_entry node kept in a min-heap, so the timer that expires next can be located quickly
    std::vector<heap_entry> heap_;
};

As you can see, the timer_queue maintains per_timer_data in two ways: inside per_timer_data itself there is a doubly-linked list whose head is timers_, and separately there is a min-heap, heap_.
The reason for this design: the doubly-linked list preserves the order in which timers were added, while the min-heap is ordered by the timers' expiry times, so the timer at the front of the heap is the one that will fire soonest.
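
To make the heap half concrete, here is a toy sketch (my own illustration, not asio code) of what enqueue_timer conceptually reports: whether the newly added timer is now the earliest one, which is exactly the flag schedule_timer uses later on.

#include <algorithm>
#include <cstdint>
#include <vector>

// Toy heap entry: an expiry plus an id standing in for per_timer_data*.
struct toy_entry { int64_t expiry_ns; int timer_id; };

// Comparator for std::push_heap: "expires later" counts as lower priority,
// so the element at the front of the heap is the one that expires first.
struct expires_later {
  bool operator()(const toy_entry& a, const toy_entry& b) const
  { return a.expiry_ns > b.expiry_ns; }
};

// Add a timer and report whether it became the new earliest timer.
bool toy_enqueue_timer(std::vector<toy_entry>& heap, const toy_entry& e)
{
  heap.push_back(e);
  std::push_heap(heap.begin(), heap.end(), expires_later());
  return heap.front().timer_id == e.timer_id;
}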

Reading the source through an example program

First, let's look at what actually happens when we write the following code:

/**
 * @brief Test the synchronous timer
 */
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstdint>
#include <ctime>
#include <iostream>

inline int64_t GetCurrentTime()
{
   struct timespec ts;
   clock_gettime(CLOCK_REALTIME, &ts);
   return ts.tv_nsec + ts.tv_sec * 1000000000ULL;
}

int main() {
  boost::asio::io_service io;
  boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
  std::cout << "ts1: " << GetCurrentTime() << std::endl;
  std::cout << t.expires_at() << std::endl;
  t.wait();
  std::cout << "ts2: " << GetCurrentTime() << std::endl;
  std::cout << "Hello, world!" << std::endl;
  return 0;
}

// output:
// ts1: 1677602292636395453
// 2023-Feb-28 16:38:17.636309
// ts2: 1677602297641368114
// Hello, world!

Let's single out what happens when boost::asio::deadline_timer t(io, boost::posix_time::seconds(5)) executes:


(gdb) s
boost::asio::basic_deadline_timer<boost::posix_time::ptime, boost::asio::time_traits<boost::posix_time::ptime>, boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traits<boost::posix_time::ptime> > >::basic_deadline_timer (this=0x7fffffffdd80, io_service=...,
    expiry_time=...) at /root/3rd/boost_1_62_0/boost/asio/basic_deadline_timer.hpp:184
184         : basic_io_object<TimerService>(io_service)
(gdb) l
179        * @param expiry_time The expiry time to be used for the timer, relative to
180        * now.
181        */
182       basic_deadline_timer(boost::asio::io_service& io_service,
183           const duration_type& expiry_time)
184         : basic_io_object<TimerService>(io_service)
185       {
186         boost::system::error_code ec;
187         this->service.expires_from_now(this->implementation, expiry_time, ec);
188         boost::asio::detail::throw_error(ec, "expires_from_now");
(gdb) bt
#0  boost::asio::basic_deadline_timer, boost::asio::deadline_timer_service > >::basic_deadline_timer (this=0x7fffffffdd80,
    io_service=..., expiry_time=...) at /root/3rd/boost_1_62_0/boost/asio/basic_deadline_timer.hpp:184
#1  0x000000000041889f in main () at test_deadline_timer.cpp:17
(gdb) p io_service
$1 = (boost::asio::io_service &) @0x7fffffffddc0: {<boost::asio::detail::noncopyable> = {<No data fields>}, service_registry_ = 0x653f40,
  impl_ = @0x653c20}
(gdb) s
boost::asio::basic_io_object<boost::asio::deadline_timer_service<boost::posix_time::ptime, boost::asio::time_traits<boost::posix_time::ptime> >, false>::basic_io_object (this=0x7fffffffdd80, io_service=...) at /root/3rd/boost_1_62_0/boost/asio/basic_io_object.hpp:91
91          : service(boost::asio::use_service<IoObjectService>(io_service))
(gdb) l
86        /**
87         * Performs:
88         * @code get_service().construct(get_implementation()); @endcode
89         */
90        explicit basic_io_object(boost::asio::io_service& io_service)
91          : service(boost::asio::use_service<IoObjectService>(io_service))
92        {
93          service.construct(implementation);
94        }
95
(gdb) n
93          service.construct(implementation);
(gdb) p implementation
$2 = {<boost::asio::detail::noncopyable> = {<No data fields>},
  expiry = {<boost::date_time::base_time<boost::posix_time::ptime, boost::date_time::counted_time_system<boost::date_time::counted_time_rep<boost::posix_time::millisec_posix_time_system_config> > >> = {<boost::operators_impl::less_than_comparable<boost::posix_time::ptime, boost::operators_impl::equality_comparable<boost::posix_time::ptime, boost::posix_time::ptime, boost::operators_impl::operators_detail::empty_base<boost::posix_time::ptime>, boost::operators_impl::operators_detail::false_t>, boost::operators_impl::operators_detail::empty_base<boost::posix_time::ptime>, boost::operators_impl::operators_detail::true_t>> = {<boost::operators_impl::less_than_comparable1<boost::posix_time::ptime, boost::operators_impl::equality_comparable<boost::posix_time::ptime, boost::posix_time::ptime, boost::operators_impl::operators_detail::empty_base<boost::posix_time::ptime>, boost::operators_impl::operators_detail::false_t> >> = {<boost::operators_impl::equality_comparable<boost::posix_time::ptime, boost::posix_time::ptime, boost::operators_impl::operators_detail::empty_base<boost::posix_time::ptime>, boost::operators_impl::operators_detail::false_t>> = {<boost::operators_impl::equality_comparable1<boost::posix_time::ptime, boost::operators_impl::operators_detail::empty_base<boost::posix_time::ptime> >> = {<boost::operators_impl::operators_detail::empty_base<boost::posix_time::ptime>> = {<No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, time_ = {time_count_ = {value_ = 9223372036854775806}}}, <No data fields>}, might_have_pending_waits = false,
  timer_data = {op_queue_ = {<boost::asio::detail::noncopyable> = {<No data fields>}, front_ = 0x0, back_ = 0x0}, heap_index_ = 4295771,
    next_ = 0x0, prev_ = 0x0}}
(gdb) ptype implementation
type = struct boost::asio::detail::deadline_timer_service<boost::asio::time_traits<boost::posix_time::ptime> >::implementation_type
        : private boost::asio::detail::noncopyable {
    boost::asio::detail::deadline_timer_service<boost::asio::time_traits<boost::posix_time::ptime> >::time_type expiry;
    bool might_have_pending_waits;
    boost::asio::detail::timer_queue<boost::asio::time_traits<boost::posix_time::ptime> >::per_timer_data timer_data;
}

The get_service() here returns the service_ member we saw earlier in basic_io_object (the parent of basic_deadline_timer), i.e. this timer's service class. expires_from_now, and in fact all of the expires_XXX functions, read or modify the timer's time information; in other words they access the data object implementation_ mentioned above (which sits next to service_).
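
As a quick hedged illustration of those accessors (my own example; io is the io_service from the program above, and the durations are made up):

boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));

boost::posix_time::ptime when = t.expires_at();               // absolute expiry stored in implementation_
boost::posix_time::time_duration left = t.expires_from_now(); // time remaining until expiry
t.expires_from_now(boost::posix_time::seconds(10));           // reschedule relative to now: rewrites implementation_.expiry
t.expires_at(when + boost::posix_time::minutes(1));           // or set an absolute expiry directly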

The throw_error style runs through all of asio. Unlike the try-catch style we are more used to, it works rather like errno: many operations take an error_code argument, reset it on entry, and if the error_code ends up holding an error, throw_error raises an exception.

basic_deadline_timer(asio::io_context& io_context,
    const duration_type& expiry_time)
    : basic_io_object<ASIO_SVC_T>(io_context)
{
    asio::error_code ec;
    this->get_service().expires_from_now(
        this->get_implementation(), expiry_time, ec);
    asio::detail::throw_error(ec, "expires_from_now");
}

inline void throw_error(const asio::error_code& err,
    const char* location)
{
    if (err)
        do_throw_error(err, location);
}
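
For comparison, here are both calling styles side by side (a hedged example of my own, not from the original post):

boost::asio::io_service io;
boost::asio::deadline_timer t(io, boost::posix_time::seconds(1));

// Throwing style: any failure surfaces as a boost::system::system_error exception.
t.wait();

// errno-like style: pass an error_code and inspect it afterwards.
boost::system::error_code ec;
t.wait(ec);
if (ec)
  std::cerr << "wait failed: " << ec.message() << std::endl;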

Synchronous call: wait

(gdb) bt
#0  0x00007ffff5f24b23 in __select_nocancel () from /lib64/libc.so.6
#1  0x000000000041b74b in boost::asio::detail::socket_ops::select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fffffffdca0,
    ec=...) at /root/3rd/boost_1_62_0/boost/asio/detail/impl/socket_ops.ipp:1780
#2  0x0000000000425daa in boost::asio::detail::deadline_timer_service >::do_wait (this=0x653d28, timeout=..., ec=...) at /root/3rd/boost_1_62_0/boost/asio/detail/deadline_timer_service.hpp:212
#3  0x0000000000424b07 in boost::asio::detail::deadline_timer_service >::wait (this=0x653d28,
    impl=..., ec=...) at /root/3rd/boost_1_62_0/boost/asio/detail/deadline_timer_service.hpp:171
#4  0x0000000000421edf in boost::asio::deadline_timer_service >::wait (this=0x653d00, impl=..., ec=...) at /root/3rd/boost_1_62_0/boost/asio/deadline_timer_service.hpp:135
#5  0x000000000041e578 in boost::asio::basic_deadline_timer, boost::asio::deadline_timer_service > >::wait (this=0x7fffffffdd80)
    at /root/3rd/boost_1_62_0/boost/asio/basic_deadline_timer.hpp:458
#6  0x0000000000418921 in main () at test_deadline_timer.cpp:20

As said above, basic_deadline_timer cannot do much by itself; it still has to call the interface of deadline_timer_service:

  // basic_deadline_timer
  void wait()
  {
    asio::error_code ec;
    this->get_service().wait(this->get_implementation(), ec);
    asio::detail::throw_error(ec, "wait");
  }

  // deadline_timer_service
  void wait(implementation_type& impl, asio::error_code& ec)
  {
    time_type now = Time_Traits::now();
    ec = asio::error_code();
    while (Time_Traits::less_than(now, impl.expiry) && !ec)
    {
      this->do_wait(Time_Traits::to_posix_duration(
            Time_Traits::subtract(impl.expiry, now)), ec);
      now = Time_Traits::now();
    }
  }

  //in deadline_timer_service, wait() --> do_wait()
  template <typename Duration>
  void do_wait(const Duration& timeout, asio::error_code& ec)
  {
#if defined(ASIO_WINDOWS_RUNTIME)
    std::this_thread::sleep_for(
        std::chrono::seconds(timeout.total_seconds())
        + std::chrono::microseconds(timeout.total_microseconds()));
    ec = asio::error_code();
#else // defined(ASIO_WINDOWS_RUNTIME)
    ::timeval tv;
    tv.tv_sec = timeout.total_seconds();
    tv.tv_usec = timeout.total_microseconds() % 1000000;
    socket_ops::select(0, 0, 0, 0, &tv, ec);
#endif // defined(ASIO_WINDOWS_RUNTIME)
  }

  // The queue of timers.
  timer_queue<Time_Traits> timer_queue_;

  // The object that schedules and executes timers. Usually a reactor.
  timer_scheduler& scheduler_;
};

The source is quite easy to follow: it uses the same pattern as waiting on a pthread_cond_t condition variable, blocking inside a loop that re-checks the condition.
On Linux the blocking wait relies on select()'s timeout mechanism; note that this select() is not given any file descriptors at all.
Under ASIO_WINDOWS_RUNTIME (Windows) it uses std::this_thread::sleep_for instead.
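
A standalone illustration (my own, not asio code) of using select() purely as a timeout, with no fd sets at all:

#include <sys/select.h>
#include <cstdio>

int main()
{
  timeval tv;
  tv.tv_sec = 1;               // sleep for 1.5 seconds
  tv.tv_usec = 500000;
  ::select(0, 0, 0, 0, &tv);   // nfds == 0 and no fd sets: blocks until the timeout elapses
  std::puts("woke up");
  return 0;
}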

Asynchronous call: async_wait

The asynchronous call is a bit more involved, because a callback has to be passed in and the epoll_reactor has to get involved.
First look at the async_wait function in basic_deadline_timer:

  template <typename WaitHandler>
  BOOST_ASIO_INITFN_RESULT_TYPE(WaitHandler,
      void (boost::system::error_code))
  async_wait(WaitHandler&& handler)
  {
    // If you get an error on the following line it means that your handler does
    // not meet the documented type requirements for a WaitHandler.
    BOOST_ASIO_WAIT_HANDLER_CHECK(WaitHandler, handler) type_check;

    async_completion<WaitHandler,
      void (boost::system::error_code)> init(handler);

    this->get_service().async_wait(this->get_implementation(),
        init.completion_handler);

    return init.result.get();
  }

It looks long, but the logic is actually simple. The BOOST_ASIO_WAIT_HANDLER_CHECK macro just verifies that the WaitHandler passed in meets the documented requirements; the code behind it is fairly involved and is not shown here, but the rough idea is to use static assertions to check whether the handler can be converted/invoked (via static_cast) with the expected signature.
Of course, the real work of this function is still to call deadline_timer_service::async_wait. There is also the async_completion handling; here is the official comment on the async_completion constructor:

   /**
   * The constructor creates the concrete completion handler and makes the link
   * between the handler and the asynchronous result.
   */

Roughly speaking, it massages the callback that was passed in and links it to an asynchronous result, quite like the future mechanism.
What actually puzzled me is the return statement: it returns init.result.get(), and this get() function is in fact empty...

template <typename Handler>
class async_result<Handler>
{
public:
  typedef void type;
  explicit async_result(Handler&) {}
  type get() {}
};

I am not sure whether I misread something, but I could not make sense of this at first. (The short answer seems to be: with a plain callback the deduced result type is void, so get() has nothing to hand back; the machinery only pays off when a different completion token specializes async_result.)
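
A hedged illustration of when get() does matter (my own example, assuming a C++11 build where boost::asio::use_future is available): with the use_future completion token, the async_result specialization turns the return value of async_wait into a std::future.

#include <boost/asio.hpp>
#include <boost/asio/use_future.hpp>
#include <future>
#include <thread>

int main()
{
  boost::asio::io_service io;
  boost::asio::deadline_timer t(io, boost::posix_time::seconds(1));

  // async_result is specialized for use_future_t, so init.result.get() yields a std::future<void>.
  std::future<void> f = t.async_wait(boost::asio::use_future);

  std::thread runner([&io] { io.run(); }); // run the io_service so the wait can complete
  f.get();                                 // blocks until the timer fires; rethrows on error
  runner.join();
  return 0;
}
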
Now look at async_wait in deadline_timer_service:

  template <typename Handler>
  void async_wait(implementation_type& impl, Handler& handler)
  {
    // Allocate and construct an operation to wrap the handler.
    typedef wait_handler<Handler> op;
    typename op::ptr p = { boost::asio::detail::addressof(handler),
      op::ptr::allocate(handler), 0 };
    p.p = new (p.v) op(handler);

    impl.might_have_pending_waits = true;

    // scheduler_ is the timer service's scheduler; it is the epoll_reactor object
    scheduler_.schedule_timer(timer_queue_, impl.expiry, impl.timer_data, p.p);
    p.v = p.p = 0;
  }

The op::ptr part essentially wraps the handler callback in an operation object, and that wrapper is allocated dynamically. Notice that p is zeroed at the end: inside schedule_timer, ownership of the wrapper has already been handed over to the timer_queue_ of the deadline_timer_service passed into that function. This is also why, with asynchronous operations, you must keep the relevant objects alive, otherwise the asynchronous callback may run into a segmentation fault.
To add a bit more: deadline_timer_service actually has two members:

  // The queue of timers.
  timer_queue<Time_Traits> timer_queue_;   // maintains all of the timers

  // The object that schedules and executes timers. Usually a reactor.
  timer_scheduler& scheduler_;   // the async scheduler used by deadline_timer_service; here timer_scheduler is epoll_reactor

Next, look at the schedule_timer function:

template <typename Time_Traits>
void epoll_reactor::schedule_timer(timer_queue<Time_Traits>& queue,
    const typename Time_Traits::time_type& time,
    typename timer_queue<Time_Traits>::per_timer_data& timer, wait_op* op)
{
  mutex::scoped_lock lock(mutex_);

  if (shutdown_)
  {
    scheduler_.post_immediate_completion(op, false);
    return;
  }

  bool earliest = queue.enqueue_timer(time, timer, op);//add the timer to the queue, i.e. the timer_queue_ member of deadline_timer_service
  scheduler_.work_started(); //scheduler::work_started
  if (earliest)
    update_timeout();  // if this timer fires the earliest, update the epoll_reactor's timer_fd
}

void epoll_reactor::update_timeout()
{
  if (timer_fd_ != -1)
  {
    itimerspec new_timeout;
    itimerspec old_timeout;
    int flags = get_timeout(new_timeout);
    timerfd_settime(timer_fd_, flags, &new_timeout, &old_timeout);
    return;
  }
  interrupt();
}

// Notify that some work has started.
//Called from scheduler::post_immediate_completion,
//epoll_reactor::schedule_timer and epoll_reactor::start_op.
//work_started pairs with work_finished; also invoked from io_context::work::work.
void scheduler::work_started() //counts the amount of outstanding (unfinished) work
{
  ++outstanding_work_;
}

The wait_handler used above to wrap the callback is a subclass of wait_op, and wait_op in turn derives from scheduler_operation; scheduler_operation is the common representation of all such wrapped callbacks.
Note that the scheduler_ here is no longer the scheduler_ we saw before: this one is a member of epoll_reactor and is the scheduler object (scheduler being the implementation class behind io_service).
The function first checks whether the epoll_reactor is shut down; a shut-down epoll_reactor does no asynchronous monitoring at all, so in that case the operation is simply handed over to the scheduler (i.e. io_service) to deal with. Exactly how it deals with it is a bit involved and will be covered in a later post. If the epoll_reactor is running normally, the timer is added to deadline_timer_service's timer_queue_ and the scheduler (io_service) is told that new work has arrived (that is the scheduler_.work_started() line); the scheduler handles the rest automatically. If the newly added timer has the earliest expiry, the epoll_reactor's timer also has to be updated.
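
As a standalone sketch of the mechanism update_timeout relies on (my own simplified code, not asio's): arm a timerfd for the earliest expiry and register it with epoll, so that epoll_wait wakes up exactly when the first timer is due.

#include <ctime>
#include <sys/epoll.h>
#include <sys/timerfd.h>

// Returns an epoll fd whose epoll_wait() wakes up after `seconds` seconds.
int arm_earliest_timer(int seconds)
{
  int tfd = ::timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);

  itimerspec spec = {};
  spec.it_value.tv_sec = seconds;        // relative timeout (no TFD_TIMER_ABSTIME flag)
  ::timerfd_settime(tfd, 0, &spec, 0);   // calling this again later re-arms the same fd

  int epfd = ::epoll_create1(EPOLL_CLOEXEC);
  epoll_event ev = {};
  ev.events = EPOLLIN | EPOLLERR;
  ::epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);

  // epoll_wait(epfd, ...) now returns when tfd becomes readable, i.e. when the timer fires.
  return epfd;
}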

Asynchronous timer example

/**
 * @brief Test the asynchronous timer
 */
#include <boost/asio.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <iostream>

void callback(const boost::system::error_code&) {
  std::cout << "Hello, world!" << std::endl;
}

void callback2(const boost::system::error_code&) {
  std::cout << "second call but first run" << std::endl;
}

int main() {
  boost::asio::io_service io;
  boost::asio::steady_timer st(io);
  st.expires_from_now(std::chrono::seconds(5));
  st.async_wait(callback);

  boost::asio::deadline_timer dt(io, boost::posix_time::seconds(3));
  dt.async_wait(callback2);

  std::cout << "first run\n";
  io.run();
  return 0;
}

Async timer flow under gdb


# async_wait call stack
(gdb) bt
#0  boost::asio::detail::epoll_reactor::schedule_timer > > (this=0x639d70, queue=..., time=..., timer=..., op=0x63a2a0)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/epoll_reactor.hpp:52
#1  0x0000000000416b6c in boost::asio::detail::deadline_timer_service > >::async_wait (this=0x639d28, impl=...,
    handler=@0x7fffffffe240: 0x40ce12 <callback(boost::system::error_code const&)>)
    at /root/3rd/boost_1_62_0/boost/asio/detail/deadline_timer_service.hpp:192
#2  0x000000000041506c in boost::asio::waitable_timer_service >::async_wait (this=0x639d00, impl=...,
    handler=@0x40ce12: {void (const boost::system::error_code &)} 0x40ce12 <callback(boost::system::error_code const&)>)
    at /root/3rd/boost_1_62_0/boost/asio/waitable_timer_service.hpp:149
#3  0x0000000000413080 in boost::asio::basic_waitable_timer, boost::asio::waitable_timer_service > >::async_wait (this=0x7fffffffe2c0,
    handler=@0x40ce12: {void (const boost::system::error_code &)} 0x40ce12 <callback(boost::system::error_code const&)>)
    at /root/3rd/boost_1_62_0/boost/asio/basic_waitable_timer.hpp:511
#4  0x000000000040cf07 in main () at asio_async_timer_test.cpp:22


# run
#0  boost::asio::detail::epoll_reactor::run (this=0x639d70, block=true, ops=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/epoll_reactor.ipp:438
#1  0x0000000000410748 in boost::asio::detail::task_io_service::do_run_one (this=0x639c20, lock=..., this_thread=..., ec=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:356
#2  0x0000000000410337 in boost::asio::detail::task_io_service::run (this=0x639c20, ec=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:149
#3  0x0000000000410a7f in boost::asio::io_service::run (this=0x7fffffffe300) at /root/3rd/boost_1_62_0/boost/asio/impl/io_service.ipp:59
#4  0x000000000040cf70 in main () at asio_async_timer_test.cpp:28


(gdb) bt
#0  callback2 () at asio_async_timer_test.cpp:14
#1  0x0000000000419c26 in boost::asio::detail::binder1::operator() (
    this=0x7fffffffe080) at /root/3rd/boost_1_62_0/boost/asio/detail/bind_handler.hpp:47
#2  0x000000000041947a in boost::asio::asio_handler_invoke > (function=...) at /root/3rd/boost_1_62_0/boost/asio/handler_invoke_hook.hpp:69
#3  0x0000000000418e4b in boost_asio_handler_invoke_helpers::invoke, void (*)(boost::system::error_code const&)> (function=...,
    context=@0x7fffffffe080: 0x40ce4c <callback2(boost::system::error_code const&)>)
    at /root/3rd/boost_1_62_0/boost/asio/detail/handler_invoke_helpers.hpp:37
#4  0x00000000004183ec in boost::asio::detail::wait_handler::do_complete (owner=0x639c20,
    base=0x63a060) at /root/3rd/boost_1_62_0/boost/asio/detail/wait_handler.hpp:70
#5  0x000000000040e9ac in boost::asio::detail::task_io_service_operation::complete (this=0x63a060, owner=..., ec=..., bytes_transferred=0)
    at /root/3rd/boost_1_62_0/boost/asio/detail/task_io_service_operation.hpp:38
#6  0x00000000004107cc in boost::asio::detail::task_io_service::do_run_one (this=0x639c20, lock=..., this_thread=..., ec=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:372
#7  0x0000000000410337 in boost::asio::detail::task_io_service::run (this=0x639c20, ec=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:149
#8  0x0000000000410a7f in boost::asio::io_service::run (this=0x7fffffffe300) at /root/3rd/boost_1_62_0/boost/asio/impl/io_service.ipp:59
#9  0x000000000040cf70 in main () at asio_async_timer_test.cpp:28

(gdb) bt
#0  callback () at asio_async_timer_test.cpp:10
#1  0x0000000000419c26 in boost::asio::detail::binder1::operator() (
    this=0x7fffffffe080) at /root/3rd/boost_1_62_0/boost/asio/detail/bind_handler.hpp:47
#2  0x000000000041947a in boost::asio::asio_handler_invoke > (function=...) at /root/3rd/boost_1_62_0/boost/asio/handler_invoke_hook.hpp:69
#3  0x0000000000418e4b in boost_asio_handler_invoke_helpers::invoke, void (*)(boost::system::error_code const&)> (function=...,
    context=@0x7fffffffe080: 0x40ce12 <callback(boost::system::error_code const&)>)
    at /root/3rd/boost_1_62_0/boost/asio/detail/handler_invoke_helpers.hpp:37
#4  0x00000000004183ec in boost::asio::detail::wait_handler::do_complete (owner=0x639c20,
    base=0x63a2a0) at /root/3rd/boost_1_62_0/boost/asio/detail/wait_handler.hpp:70
#5  0x000000000040e9ac in boost::asio::detail::task_io_service_operation::complete (this=0x63a2a0, owner=..., ec=..., bytes_transferred=0)
    at /root/3rd/boost_1_62_0/boost/asio/detail/task_io_service_operation.hpp:38
#6  0x00000000004107cc in boost::asio::detail::task_io_service::do_run_one (this=0x639c20, lock=..., this_thread=..., ec=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:372
#7  0x0000000000410337 in boost::asio::detail::task_io_service::run (this=0x639c20, ec=...)
    at /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:149
#8  0x0000000000410a7f in boost::asio::io_service::run (this=0x7fffffffe300) at /root/3rd/boost_1_62_0/boost/asio/impl/io_service.ipp:59
#9  0x000000000040cf70 in main () at asio_async_timer_test.cpp:28


Analysis of the boost::asio::io_service::run call

run() essentially just executes do_run_one in a loop, in blocking mode (the block argument is true); run_one() also blocks, but executes do_run_one only once. poll() is almost identical to run() except that it executes do_run_one in non-blocking mode (block is false), and poll_one() executes a single non-blocking do_run_one.
run() blocks until all asynchronous operations (including handlers added via post) have completed, and then returns.
run_one() blocks until one asynchronous operation (including posted handlers) has completed, and then returns.
poll() inspects all asynchronous operations without blocking: anything that has already completed, or was added via post, has its handler invoked immediately, after which poll() returns; it does not process operations that still need to wait.
poll_one() completes at most one already-ready asynchronous operation without blocking and returns immediately, processing neither other completed operations nor operations that still need to wait.
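
A small hedged illustration of the difference (the handlers and counts are my own example, not from the original post):

boost::asio::io_service io;
io.post([] { std::cout << "first handler\n"; });
io.post([] { std::cout << "second handler\n"; });

std::size_t n = io.poll_one(); // non-blocking: runs at most one ready handler, so n == 1
n = io.poll();                 // non-blocking: runs the remaining ready handler, n == 1
n = io.run();                  // no handlers and no outstanding work left, so this returns 0 immediately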

// /root/3rd/boost_1_62_0/boost/asio/impl/io_service.ipp:59
std::size_t io_service::run()
{
  boost::system::error_code ec;
  std::size_t s = impl_.run(ec);
  boost::asio::detail::throw_error(ec);
  return s;
}


// /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:149
std::size_t task_io_service::run(boost::system::error_code& ec)
{
  ec = boost::system::error_code();
  if (outstanding_work_ == 0)
  {
    stop();
    return 0;
  }

  thread_info this_thread;
  // number of op tasks in this thread's private_op_queue; "outstanding_work" means unfinished work, not "outstanding" as in excellent
  this_thread.private_outstanding_work = 0;
  thread_call_stack::context ctx(this, this_thread);

  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}

The definition of the thread_info structure:

  // Structure containing thread-specific data.
  //thread-private data; this structure contains the thread's private op queue
  typedef scheduler_thread_info thread_info;

struct scheduler_thread_info : public thread_info_base
{
  // scheduler::do_wait_one -> epoll_reactor::run fetches the corresponding ops,
  // which are finally pushed onto scheduler::op_queue_ by the destructors of
  // scheduler::task_cleanup and scheduler::work_cleanup.
  //epoll-related network-event tasks are first queued on the thread-private private_op_queue
  //and only then moved to the global op_queue_, so a whole batch of network events can be
  //enqueued onto the global queue with a single lock acquisition
  //the op type held in private_op_queue is descriptor_state
  op_queue<scheduler_operation> private_op_queue;

  //number of op tasks in this thread's private_op_queue; "outstanding_work" means unfinished work, not "outstanding" as in excellent
  long private_outstanding_work;
};

The thread_call_stack::context structure:

// ~/workspace/asio-annotation/asio/include/asio/detail/thread_context.hpp
class thread_context
{
public:
  // Per-thread call stack to track the state of each thread in the context.
  //see thread_call_stack in scheduler.ipp: all worker threads get pushed onto the call_stack top_ list
  typedef call_stack<thread_context, thread_info_base> thread_call_stack;
};

//~/asio-annotation/asio/include/asio/detail/call_stack.hpp
call_stack is a templated linked-list structure: the context (ctx) constructor pushes its entry onto the top_ list, and the ctx destructor pops it by moving the top_ head pointer back.

// Helper class to determine whether or not the current thread is inside an
// invocation of io_context::run() for a specified io_context object.
template <typename Key, typename Value = unsigned char>
class call_stack
{
public:
  // Context class automatically pushes the key/value pair on to the stack.
  class context
    : private noncopyable
  {
  public:
    // Push the key on to the stack.
    explicit context(Key* k)
      : key_(k),
        next_(call_stack<Key, Value>::top_)
    {
      value_ = reinterpret_cast<unsigned char*>(this);
      call_stack<Key, Value>::top_ = this;
    }

    // Push the key/value pair on to the stack.
    //pushes the key/value pair onto the top_ stack; for how a thread is pushed, see scheduler::wait_one
    context(Key* k, Value& v)
      : key_(k),
        value_(&v),
        next_(call_stack<Key, Value>::top_)
    {
      call_stack<Key, Value>::top_ = this;
    }

    // Pop the key/value pair from the stack.
    ~context()
    {
      call_stack<Key, Value>::top_ = next_;
    }

    // Find the next context with the same key.
    Value* next_by_key() const
    {
      context* elem = next_;
      while (elem)
      {
        if (elem->key_ == key_)
          return elem->value_;
        elem = elem->next_;
      }
      return 0;
    }

  private:
    friend class call_stack<Key, Value>;

    // The key associated with the context.
    Key* key_;

    // The value associated with the context.
    Value* value_;

    // The next element in the stack.
    //the key/value entries are chained together via the next_ pointers
    context* next_;
  };

  friend class context;

  // Determine whether the specified owner is on the stack. Returns address of
  // key if present, 0 otherwise.
  //whether the key K is on the top_ stack; see thread_call_stack::contains(this)
  static Value* contains(Key* k)
  {
    context* elem = top_;
    while (elem)
    {
      if (elem->key_ == k)
        return elem->value_;
      elem = elem->next_;
    }
    return 0;
  }

  // Obtain the value at the top of the stack.
  static Value* top()
  {
    context* elem = top_;
    return elem ? elem->value_ : 0;
  }

private:
  // The top of the stack of calls for the current thread.
  //head of the per-thread stack
  //key/value pairs are pushed onto top_; for how a thread is pushed, see scheduler::wait_one
  static tss_ptr<context> top_;
};

The task_cleanup and work_cleanup structures used by do_run_one:

// The task_cleanup structure.
// Its destructor pushes ops onto task_io_service_->op_queue_.
struct task_io_service::task_cleanup
{
  ~task_cleanup()
  {
    if (this_thread_->private_outstanding_work > 0)
    {
      boost::asio::detail::increment(
          task_io_service_->outstanding_work_,
          this_thread_->private_outstanding_work);
    }
    this_thread_->private_outstanding_work = 0;

    // Enqueue the completed operations and reinsert the task at the end of
    // the operation queue.
    // move the thread-private queue onto the global op queue, and reinsert task_operation_ at the tail of the op queue
    lock_->lock();
    task_io_service_->task_interrupted_ = true;
    task_io_service_->op_queue_.push(this_thread_->private_op_queue);
    task_io_service_->op_queue_.push(&task_io_service_->task_operation_);
  }

  task_io_service* task_io_service_;
  mutex::scoped_lock* lock_;
  thread_info* this_thread_;
};

// The work_cleanup structure.
// Its destructor pushes the thread-private ops onto task_io_service_->op_queue_.
struct task_io_service::work_cleanup
{
  ~work_cleanup()
  {
    if (this_thread_->private_outstanding_work > 1)
    {
      boost::asio::detail::increment(
          task_io_service_->outstanding_work_,
          this_thread_->private_outstanding_work - 1);
    }
    else if (this_thread_->private_outstanding_work < 1)
    {
      task_io_service_->work_finished();
    }
    this_thread_->private_outstanding_work = 0;

#if defined(BOOST_ASIO_HAS_THREADS)
    if (!this_thread_->private_op_queue.empty())
    {
      lock_->lock();
      task_io_service_->op_queue_.push(this_thread_->private_op_queue);
    }
#endif // defined(BOOST_ASIO_HAS_THREADS)
  }

  task_io_service* task_io_service_;
  mutex::scoped_lock* lock_;
  thread_info* this_thread_;
};

// /root/3rd/boost_1_62_0/boost/asio/detail/impl/task_io_service.ipp:372
std::size_t task_io_service::do_run_one(mutex::scoped_lock& lock,
    task_io_service::thread_info& this_thread,
    const boost::system::error_code& ec)
{
  while (!stopped_)
  {
    if (!op_queue_.empty())
    {
      // Prepare to execute first handler from queue.
      operation* o = op_queue_.front();
      op_queue_.pop();
      bool more_handlers = (!op_queue_.empty());

      if (o == &task_operation_)
      {
        task_interrupted_ = more_handlers;

        if (more_handlers && !one_thread_)
          wakeup_event_.unlock_and_signal_one(lock);
        else
          lock.unlock();

        task_cleanup on_exit = { this, &lock, &this_thread };
        (void)on_exit;

        // Run the task. May throw an exception. Only block if the operation
        // queue is empty and we're not polling, otherwise we want to return
        // as soon as possible.
        task_->run(!more_handlers, this_thread.private_op_queue);
      }
      else
      {
        std::size_t task_result = o->task_result_;

        if (more_handlers && !one_thread_)
          wake_one_thread_and_unlock(lock);
        else
          lock.unlock();

        // Ensure the count of outstanding work is decremented on block exit.
        work_cleanup on_exit = { this, &lock, &this_thread };
        (void)on_exit;

        // Complete the operation. May throw an exception. Deletes the object.
        o->complete(*this, ec, task_result);

        return 1;
      }
    }
    else
    {
      wakeup_event_.clear(lock);
      wakeup_event_.wait(lock);
    }
  }

  return 0;
}

