Traffic Server ICP initialization and startup: annotated source walkthrough

About the icp.config configuration file
Each line of the file describes the name and configuration of one ICP peer, in the following format:
host : host_IP : peer_type : proxy_port : icp_port : MC_on : MC_IP : MC_TTL :
host: hostname of the ICP peer; optional
host_IP: IP address of the ICP peer; optional, but at least one of host and host_IP must be given
peer_type: 1 for an ICP parent cache, 2 for an ICP sibling cache
proxy_port: TCP port the ICP peer uses for proxy traffic
icp_port: UDP port the ICP peer uses for ICP traffic
MC_on: 0 disables multicast, 1 enables it
MC_IP: multicast IP address
MC_TTL: 1 restricts multicast to a single subnet, 2 lets it cross subnets

The ICP-related settings in records.config are:
  proxy.config.icp.enabled
  proxy.config.icp.icp_port
  proxy.config.icp.icp_interface
  proxy.config.icp.multicast_enabled
  proxy.config.icp.query_timeout
  proxy.config.icp.lookup_local
  proxy.config.icp.stale_icp_enabled
  proxy.config.icp.reply_to_unknown_peer
  proxy.config.icp.default_reply_port

Before analyzing the ICP-related source in ATS you should read up on the ICP protocol itself. Material online is scarce, so go straight to the RFC: http://www.cse.ohio-state.edu/cgi-bin/rfc/rfc2186.html

Related post (Chinese): http://chenpiaoping.blog.51cto.com/5631143/1368653

   Since the ICP protocol itself is fairly simple, the ICP-related source in ATS is also fairly simple (it is disabled by default, I have never used it in production, and it likely still has issues). I spent an afternoon reading the source; brief annotations follow.
As with other modules, ICP processing is handled by a class, class ICPProcessorExt, whose instance is icpProcessor:

extern ICPProcessorExt icpProcessor;

class ICPProcessorExt is defined as follows:

class ICPProcessorExt
{
public:
 ICPProcessorExt(ICPProcessor *);
 ~ICPProcessorExt();
 void start();
 Action *ICPQuery(Continuation *, URL *);
private:
   ICPProcessor * _ICPpr;
};

From this class it is clear that analyzing the ICP source comes down to its two external entry points, start() and ICPQuery().
start() initializes the ICP configuration and starts the ICP module.
ICPQuery() is the single external ICP lookup interface (no wonder the class name carries Ext, short for External).

As usual, the entry point is in Main.cc:
if (icp_enabled)
  icpProcessor.start();

main()--->ICPProcessorExt::start()
A thin wrapper that simply forwards to ICPProcessor's start():
void
ICPProcessorExt::start()
{
 _ICPpr->start();
}
main()--->ICPProcessorExt::start()--->ICPProcessor::start()
This function reads the ICP-related settings from records.config as well as the icp.config file, initializes the corresponding data structures, then schedules periodic handlers to pick up ICP configuration changes and to receive and process ICP queries.
void
ICPProcessor::start()
{
 if (_Initialized)    
   return;
//create the lock instance
 _l = NEW(new AtomicLock());
//IOBuffer size index used for ICP allocations (ATS's buffer allocation scheme)
 ICPHandlerCont::ICPDataBuf_IOBuffer_sizeindex = iobuffer_size_to_index(MAX_ICP_MSGSIZE, MAX_BUFFER_SIZE_INDEX);
//register stat counters and their callbacks
 InitICPStatCallbacks();
//create the ICPConfiguration instance, which reads the ICP configuration files and initializes the related data structures
 _ICPConfig = NEW(new ICPConfiguration());
//currently unused
 _mcastCB_handler = NEW(new ICPHandlerCont(this));
 SET_CONTINUATION_HANDLER(_mcastCB_handler, (ICPHandlerContHandler) & ICPHandlerCont::TossEvent);
//build the ICP peer lists and open the listening sockets
 if (_ICPConfig->globalConfig()->ICPconfigured()) {
   if (BuildPeerList() == 0) {
     if (SetupListenSockets() == 0) {
       _AllowIcpQueries = 1;   // allow receipt of queries
     }
   }
 }
//dump the ICP configuration
 DumpICPConfig();
//create the ICP config monitor continuation and schedule it periodically to handle ICP configuration changes
 _ICPPeriodic = NEW(new ICPPeriodicCont(this));
 SET_CONTINUATION_HANDLER(_ICPPeriodic, (ICPPeriodicContHandler) & ICPPeriodicCont::PeriodicEvent);
 _PeriodicEvent = eventProcessor.schedule_every(_ICPPeriodic, HRTIME_MSECONDS(ICPPeriodicCont::PERIODIC_INTERVAL), ET_ICP);
//create the ICP handler continuation and schedule it periodically to receive and process ICP requests; hard to explain in a sentence or two, keep reading and it will become clear
 _ICPHandler = NEW(new ICPHandlerCont(this));
 SET_CONTINUATION_HANDLER(_ICPHandler, (ICPHandlerContHandler) & ICPHandlerCont::PeriodicEvent);
 _ICPHandlerEvent = eventProcessor.schedule_every(_ICPHandler,
                                                  HRTIME_MSECONDS(ICPHandlerCont::ICP_HANDLER_INTERVAL), ET_ICP);
//create the HTTP request structure passed as an argument to the cache lookup interface
 if (!gclient_request.valid()) {
   gclient_request.create(HTTP_TYPE_REQUEST);
 }
 _Initialized = 1;
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPConfiguration::ICPConfiguration()
This constructor reads the ICP-related settings from records.config and the icp.config file and initializes the corresponding data structures.

ICPConfiguration::ICPConfiguration():_icp_config_callouts(0)
{
//ICPConfigData wraps the ICP settings read from records.config; two copies exist mainly to implement configuration change and update
 _icp_cdata = NEW(new ICPConfigData());
 _icp_cdata_current = NEW(new ICPConfigData());
//read the records.config settings into _icp_cdata_current
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_icp_enabled, "proxy.config.icp.enabled");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_icp_port, "proxy.config.icp.icp_port");
 ICP_EstablishStaticConfigStringAlloc(_icp_cdata_current->_icp_interface, "proxy.config.icp.icp_interface");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_multicast_enabled, "proxy.config.icp.multicast_enabled");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_icp_query_timeout, "proxy.config.icp.query_timeout");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_cache_lookup_local, "proxy.config.icp.lookup_local");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_stale_lookup, "proxy.config.icp.stale_icp_enabled");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_reply_to_unknown_peer,
                                  "proxy.config.icp.reply_to_unknown_peer");
 ICP_EstablishStaticConfigInteger(_icp_cdata_current->_default_reply_port, "proxy.config.icp.default_reply_port");
//see: an update simply assigns _icp_cdata_current to _icp_cdata
 UpdateGlobalConfig();      
//PeerConfigData wraps one peer's configuration; there are at most MAX_DEFINED_PEERS (64) peers
 for (int n = 0; n <= MAX_DEFINED_PEERS; ++n) {
   _peer_cdata[n] = NEW(new PeerConfigData);
   _peer_cdata_current[n] = NEW(new PeerConfigData);
 }
 char icp_config_filename[PATH_NAME_MAX] = "";
 ICP_ReadConfigString(icp_config_filename, "proxy.config.icp.icp_configuration", sizeof(icp_config_filename) - 1);
//read and parse icp.config, placing each peer's configuration into the _peer_cdata_current[] array
//parsing is trivial: each field of each line is assigned to the corresponding PeerConfigData member
 (void) icp_config_change_callback((void *) this, (void *) icp_config_filename, 1);
//assign _peer_cdata_current[n] to _peer_cdata[n]
 UpdatePeerConfig();      
//register the callback for ICP configuration updates
 ICP_RegisterConfigUpdateFunc("proxy.config.icp.icp_configuration", mgr_icp_config_change_callback, (void *) this);
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPProcessor::BuildPeerList()


This function walks every peer in the _peer_cdata[n] array and adds each to the appropriate lists (arrays) by the following rules:
all-peers list: holds every peer
parent list: holds every peer of type PARENT
send list: initially holds every peer
recv list: initially holds every peer if multicast is enabled, otherwise empty
Before that, a Peer representing the local host is created and added to the all-peers list and the recv list.

int
ICPProcessor::BuildPeerList()
{
 PeerConfigData *Pcfg;
 Peer *P;
 Peer *mcP;
 int index;
 int status;
 PeerType_t type;
//create the local peer; element 0 of the _peer_cdata[n] array is reserved for it
 Pcfg = _ICPConfig->indexToPeerConfigData(0);
//local peer's name: localhost
 ink_strlcpy(Pcfg->_hostname, "localhost", sizeof(Pcfg->_hostname));
//local peer's type: CTYPE_LOCAL
 Pcfg->_ctype = PeerConfigData::CTYPE_LOCAL;
 IpEndpoint tmp_ip;
//resolve the local peer's IP address from the configured interface
 if (!mgmt_getAddrForIntr(GetConfig()->globalConfig()->ICPinterface(), &tmp_ip.sa)) {
   Pcfg->_ip_addr._family = AF_UNSPEC;
   Warning("ICP interface [%s] has no IP address", GetConfig()->globalConfig()->ICPinterface());
   REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP interface has no IP address");
 } else {
   Pcfg->_my_ip_addr = Pcfg->_ip_addr = tmp_ip;
 }
//the proxy TCP port is unused for the local peer; take the ICP UDP port from the configuration
 Pcfg->_proxy_port = 0;
 Pcfg->_icp_port = GetConfig()->globalConfig()->ICPport();
//multicast disabled
 Pcfg->_mc_member = 0;
//address family: AF_UNSPEC
 Pcfg->_mc_ip_addr._family = AF_UNSPEC;
//multicast TTL (unused for the local peer)
 Pcfg->_mc_ttl = 0;
 P = NEW(new ParentSiblingPeer(PEER_LOCAL, Pcfg, this));
//add to the all-peers list
 status = AddPeer(P);
 ink_release_assert(status);
//add to the recv list
 status = AddPeerToRecvList(P);
 ink_release_assert(status);
 _LocalPeer = P;
//walk every peer in _peer_cdata[n] and add it to the appropriate lists:
//  all-peers list: every peer
//  parent list: every peer of type PARENT
//  send list: every peer
//  recv list: every peer if multicast is enabled, otherwise empty

 for (index = 1; index < MAX_DEFINED_PEERS; ++index) {
   Pcfg = _ICPConfig->indexToPeerConfigData(index);
   type = PeerConfigData::CTypeToPeerType_t(Pcfg->GetCType());

   if (Pcfg->GetIPAddr() == _LocalPeer->GetIP())
     continue;                 // ignore
   if ((type == PEER_PARENT) || (type == PEER_SIBLING)) {

     if (Pcfg->MultiCastMember()) {
       mcP = FindPeer(Pcfg->GetMultiCastIPAddr(), Pcfg->GetICPPort());
       if (!mcP) {
         mcP = NEW(new MultiCastPeer(Pcfg->GetMultiCastIPAddr(), Pcfg->GetICPPort(), Pcfg->GetMultiCastTTL(), this));
         status = AddPeer(mcP);
         ink_assert(status);
         status = AddPeerToSendList(mcP);
         ink_assert(status);
         status = AddPeerToRecvList(mcP);
         ink_assert(status);
       }
       P = NEW(new ParentSiblingPeer(type, Pcfg, this));
       status = AddPeer(P);
       ink_assert(status);
       status = ((MultiCastPeer *) mcP)->AddMultiCastChild(P);
       ink_assert(status);

     } else {
       P = NEW(new ParentSiblingPeer(type, Pcfg, this));
       status = AddPeer(P);
       ink_assert(status);
       status = AddPeerToSendList(P);
       ink_assert(status);
     }
     if (type == PEER_PARENT) {
       status = AddPeerToParentList(P);
       ink_assert(status);
     }
   }
 }
 return 0;                     // Success
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPProcessor::SetupListenSockets()
This function creates and initializes the send and receive sockets for multicast-enabled peers; peers without multicast (including the local peer) only get their own IP set on _chan, which is later used to fill in ICP packets.
int
ICPProcessor::SetupListenSockets()
{
 int allow_null_configuration;
//ICP mode: 0 disabled, 1 receive-only, 2 send and receive
 if ((_ICPConfig->globalConfig()->ICPconfigured() == ICP_MODE_RECEIVE_ONLY)
     && _ICPConfig->globalConfig()->ICPReplyToUnknownPeer()) {
   allow_null_configuration = 1;
 } else {
   allow_null_configuration = 0;
 }
 if (!_LocalPeer) {
   Warning("ICP setup, no defined local Peer");
   REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP setup, no defined local Peer");
   return 1;    
 }
 if (GetSendPeers() == 0) {
   if (!allow_null_configuration) {
     Warning("ICP setup, no defined send Peer(s)");
     REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP setup, no defined send Peer(s)");
     return 1;                
   }
 }
 if (GetRecvPeers() == 0) {
   if (!allow_null_configuration) {
     Warning("ICP setup, no defined receive Peer(s)");
     REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP setup, no defined receive Peer(s)");
     return 1;                
   }
 }
 Peer *P;
 int status;
 int index;
 for (index = 0; index < (_nPeerList + 1); ++index) {
   ip_port_text_buffer ipb, ipb2;
   if ((P = _PeerList[index])) {
//peers without multicast just need the remote address set on _chan
     if ((P->GetType() == PEER_PARENT)
         || (P->GetType() == PEER_SIBLING)) {
       ParentSiblingPeer *pPS = (ParentSiblingPeer *) P;
   pPS->GetChan()->setRemote(pPS->GetIP());
//multicast-enabled peers create and initialize both send and receive sockets
     } else if (P->GetType() == PEER_MULTICAST) {
       MultiCastPeer *pMC = (MultiCastPeer *) P;
       ink_assert(_mcastCB_handler != NULL);
       status = pMC->GetSendChan()->setup_mc_send(pMC->GetIP(), _LocalPeer->GetIP(), NON_BLOCKING, pMC->GetTTL(), DISABLE_MC_LOOPBACK, _mcastCB_handler);
       if (status) {
         Warning("ICP MC send setup failed, res=%d, ip=%s bind_ip=%s",
           status,
           ats_ip_nptop(pMC->GetIP(), ipb, sizeof(ipb)),
           ats_ip_nptop(_LocalPeer->GetIP(), ipb2, sizeof(ipb2))
         );
         REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP MC send setup failed");
         return 1;      
       }
       status = pMC->GetRecvChan()->setup_mc_receive(pMC->GetIP(),
                                                     NON_BLOCKING, pMC->GetSendChan(), _mcastCB_handler);
       if (status) {
         Warning("ICP MC recv setup failed, res=%d, ip=%s",
           status, ats_ip_nptop(pMC->GetIP(), ipb, sizeof(ipb)));
         REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP MC recv setup failed");
         return 1;    
       }
     }
   }
 }
//the local peer, like other non-multicast peers, only needs the remote address set on _chan
 ParentSiblingPeer *pPS = (ParentSiblingPeer *) ((Peer *) _LocalPeer);
 pPS->GetChan()->setRemote(pPS->GetIP());
 return 0;                  
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPPeriodicCont::PeriodicEvent()

When the ICP configuration changes, this function reinitializes the corresponding data structures, effectively rerunning the flow above. It is not on the main path and largely repeats what we have already seen, so it is not annotated in depth.
int
ICPPeriodicCont::PeriodicEvent(int /* event ATS_UNUSED */, Event * /* e ATS_UNUSED */)
{
 int do_reconfig = 0;
 ICPConfiguration *C = _ICPpr->GetConfig();
 if (C->GlobalConfigChange())
   do_reconfig = 1;
 int configcallouts = C->ICPConfigCallouts();
 if (_last_icp_config_callouts != configcallouts) {
   _last_icp_config_callouts = configcallouts;
   do_reconfig = 1;
 }
 if (do_reconfig) {
   ICPPeriodicCont *rc = NEW(new ICPPeriodicCont(_ICPpr));
   SET_CONTINUATION_HANDLER(rc, (ICPPeriodicContHandler) & ICPPeriodicCont::DoReconfigAction);
   eventProcessor.schedule_imm(rc);
 }
 return EVENT_CONT;
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPHandlerCont::PeriodicEvent()
This function walks the receive list, starting from where the last receive left off, and for each peer allocates the continuation representing a read operation, ICPPeerReadCont; after initializing it, it dispatches ICPPeerReadCont::ICPPeerReadEvent() to perform the read.

int
ICPHandlerCont::PeriodicEvent(int event, Event * /* e ATS_UNUSED */)
{
 int n_peer, valid_peers;
 Peer *P;
 valid_peers = _ICPpr->GetRecvPeers();
 switch (event) {
 case EVENT_POLL:
 case EVENT_INTERVAL:
   {
        //walk the peers in the receive list
     for (n_peer = 0; n_peer < valid_peers; ++n_peer) {
        //start from where the last receive left off
       P = _ICPpr->GetNthRecvPeer(n_peer, _ICPpr->GetLastRecvPeerBias());
        //the peer must exist and be online
       if (!P || (P && !P->IsOnline()))
         continue;
        //only start a read if none is already in flight for this peer
       if (P->shouldStartRead()) {
            //mark the read as started
         P->startingRead();
            //allocate and initialize the read continuation ICPPeerReadCont
         ICPPeerReadCont *s = ICPPeerReadContAllocator.alloc();
         int local_lookup = _ICPpr->GetConfig()->globalConfig()->ICPLocalCacheLookup();
         s->init(_ICPpr, P, local_lookup);
         RECORD_ICP_STATE_CHANGE(s, event, ICPPeerReadCont::READ_ACTIVE);
            //dispatch s's handler, ICPPeerReadCont::ICPPeerReadEvent()
         s->handleEvent(EVENT_INTERVAL, (Event *) 0);
       }
     }
     break;
   }
 default:
   {
     ink_release_assert(!"unexpected event");
     break;
   }
 }                          
 return EVENT_CONT;
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPHandlerCont::PeriodicEvent()--->ICPPeerReadCont::init()

This function initializes the read continuation ICPPeerReadCont and sets its handler to ICPPeerReadCont::ICPPeerReadEvent().

void
ICPPeerReadCont::init(ICPProcessor * ICPpr, Peer * p, int lookup_local)
{
//state data for the read state machine
 PeerReadData *s = PeerReadDataAllocator.alloc();
 s->init();
//start time
 s->_start_time = ink_get_hrtime();
//the peer being read
 s->_peer = p;
//initial state: READ_ACTIVE
 s->_next_state = READ_ACTIVE;
//whether to look up only the local cache (there is also the cluster cache, analyzed with the cache code)
 s->_cache_lookup_local = lookup_local;
//set the handler
 SET_HANDLER((ICPPeerReadContHandler) & ICPPeerReadCont::ICPPeerReadEvent);
//back pointer to the ICPProcessor
 _ICPpr = ICPpr;
//pointer to the state data s
 _state = s;
//recursion depth
 _recursion_depth = -1;
//continuation for the cache operation
 _object_vc = NULL;
//the object read from cache
 _object_read = NULL;
//handle for the cache request header heap
 _cache_req_hdr_heap_handle = NULL;
//handle for the cache response header heap
 _cache_resp_hdr_heap_handle = NULL;
 mutex = new_ProxyMutex();
}
main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPHandlerCont::PeriodicEvent()--->ICPPeerReadCont::ICPPeerReadEvent()
This function drives the state machine for the read operation and checks the return value to decide whether the read is finished; if so, it frees the associated structures, otherwise it reschedules itself to continue the read.
int
ICPPeerReadCont::ICPPeerReadEvent(int event, Event * e)
{
 switch (event) {
 case EVENT_INTERVAL:
 case EVENT_IMMEDIATE:
   {
     break;
   }
 case NET_EVENT_DATAGRAM_WRITE_COMPLETE:
 case NET_EVENT_DATAGRAM_READ_COMPLETE:
 case NET_EVENT_DATAGRAM_READ_ERROR:
 case NET_EVENT_DATAGRAM_WRITE_ERROR:
   {
     ink_assert((event != NET_EVENT_DATAGRAM_READ_COMPLETE)
                || (_state->_next_state == READ_DATA_DONE));
     ink_assert((event != NET_EVENT_DATAGRAM_WRITE_COMPLETE)
                || (_state->_next_state == WRITE_DONE));

     ink_release_assert(this == (ICPPeerReadCont *)
                        completionUtil::getHandle(e));
     break;
   }
 case CACHE_EVENT_LOOKUP_FAILED:
 case CACHE_EVENT_LOOKUP:
   {
     ink_assert(_state->_next_state == AWAITING_CACHE_LOOKUP_RESPONSE);
     break;
   }
 default:
   {
     ink_release_assert(!"unexpected event");
   }
 }          
//drive the read state machine
 if (PeerReadStateMachine(_state, e) == EVENT_CONT) {
    //the read continues: retry later
   eventProcessor.schedule_in(this, RETRY_INTERVAL, ET_ICP);
   return EVENT_DONE;
//the read is complete: release the structures
 } else if (_state->_next_state == READ_PROCESSING_COMPLETE) {
   _state->_peer->cancelRead();
   this->reset(1);            
   ICPPeerReadContAllocator.free(this);
   return EVENT_DONE;
 } else {
   return EVENT_DONE;
 }
}

main()--->ICPProcessorExt::start()--->ICPProcessor::start()--->ICPHandlerCont::PeriodicEvent()--->ICPPeerReadCont::ICPPeerReadEvent()--->ICPPeerReadCont::PeerReadStateMachine()
A five-hundred-line function, scary at first sight. Don't worry: its logic is clear, and with the comments and blank lines stripped it shrinks considerably.
This function handles every state of a read operation; a read passes through the following states, with the corresponding handling:
READ_ACTIVE: check whether ICP queries are allowed, bump the pending-query count, switch to READ_DATA
READ_DATA: call recvfrom() to receive data sent by a peer, switch to READ_DATA_DONE
READ_DATA_DONE: check that the received byte count is >= 0, switch to PROCESS_READ_DATA
PROCESS_READ_DATA/ADD_PEER: parse the ICP message and run a pile of checks (version number and so on); if it is a query request, start a cache lookup and switch to AWAITING_CACHE_LOOKUP_RESPONSE
AWAITING_CACHE_LOOKUP_RESPONSE: build the ICP reply from the lookup result, write the log entry, switch to SEND_REPLY
SEND_REPLY: send the reply, switch to WRITE_DONE
WRITE_DONE: write the log entry, switch to READ_NOT_ACTIVE

GET_ICP_REQUEST: handling for a received ICP response: find the continuation of the matching request in the ICP request queue, switch to GET_ICP_REQUEST_MUTEX

GET_ICP_REQUEST_MUTEX: write the log entry, hand the reply information to the query-sending path (analyzed later), switch to READ_NOT_ACTIVE

READ_NOT_ACTIVE/READ_NOT_ACTIVE_EXIT: decrement the pending-query count, unlock, switch to READ_PROCESSING_COMPLETE
READ_PROCESSING_COMPLETE: nothing

int
ICPPeerReadCont::PeerReadStateMachine(PeerReadData * s, Event * e)
{
 AutoReference l(&_recursion_depth);
 ip_port_text_buffer ipb;
 MUTEX_TRY_LOCK(lock, this->mutex, this_ethread());
 if (!lock) {
   return EVENT_CONT;        
 }
 while (1) {                
   switch (s->_next_state) {
   case READ_ACTIVE:
     {
       ink_release_assert(_recursion_depth == 0);
       if (!_ICPpr->Lock())
         return EVENT_CONT;  
       bool valid_peer = (_ICPpr->IdToPeer(s->_peer->GetPeerID()) == s->_peer);

       if (valid_peer && _ICPpr->AllowICPQueries()
           && _ICPpr->GetConfig()->globalConfig()->ICPconfigured()) {
         _ICPpr->IncPendingQuery();
         _ICPpr->Unlock();
         s->_next_state = READ_DATA;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_DATA);
         break;              
       } else {
         _ICPpr->Unlock();
        s->_next_state = READ_PROCESSING_COMPLETE;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_PROCESSING_COMPLETE);
         return EVENT_DONE;
       }
     }
   case READ_DATA:
     {
       ink_release_assert(_recursion_depth == 0);
       ink_assert(s->_peer->buf == NULL);
       Ptr<IOBufferBlock> buf = s->_peer->buf = new_IOBufferBlock();
       buf->alloc(ICPHandlerCont::ICPDataBuf_IOBuffer_sizeindex);
       s->_peer->fromaddrlen = sizeof(s->_peer->fromaddr);
       buf->fill(sizeof(ICPMsg_t));    
       char *be = buf->buf_end() - 1;
       be[0] = 0;              
       s->_next_state = READ_DATA_DONE;
       RECORD_ICP_STATE_CHANGE(s, 0, READ_DATA_DONE);
       ink_assert(s->_peer->readAction == NULL);
       Action *a = s->_peer->RecvFrom_re(this, this, buf,
                                         buf->write_avail() - 1,
                                         &s->_peer->fromaddr.sa,
                                         &s->_peer->fromaddrlen);
       if (!a) {
         a = ACTION_IO_ERROR;
       }
       if (a == ACTION_RESULT_DONE) {
         ink_assert(s->_next_state == PROCESS_READ_DATA);
         break;
       } else if (a == ACTION_IO_ERROR) {
         ICP_INCREMENT_DYN_STAT(no_data_read_stat);
         s->_peer->buf = NULL;
         s->_next_state = READ_NOT_ACTIVE_EXIT;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE_EXIT);
         break;
       } else {
         s->_peer->readAction = a;
         return EVENT_DONE;
       }
     }
   case READ_DATA_DONE:
     {
       if (s->_peer->readAction != NULL) {
         ink_assert(s->_peer->readAction == e);
         s->_peer->readAction = NULL;
       }
       s->_bytesReceived = completionUtil::getBytesTransferred(e);

       if (s->_bytesReceived >= 0) {
         s->_next_state = PROCESS_READ_DATA;
         RECORD_ICP_STATE_CHANGE(s, 0, PROCESS_READ_DATA);
       } else {
         ICP_INCREMENT_DYN_STAT(no_data_read_stat);
         s->_peer->buf = NULL;
         s->_next_state = READ_NOT_ACTIVE_EXIT;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE_EXIT);
       }
       if (_recursion_depth > 0) {
         return EVENT_DONE;
       } else {
         break;
       }
     }
   case PROCESS_READ_DATA:
   case ADD_PEER:
     {
       ink_release_assert(_recursion_depth == 0);

       Ptr<IOBufferBlock> bufblock = s->_peer->buf;
       char *buf = bufblock->start();
       if (s->_next_state == PROCESS_READ_DATA) {
         ICPRequestCont::NetToHostICPMsg((ICPMsg_t *)
                                         (buf + sizeof(ICPMsg_t)), (ICPMsg_t *) buf);
         bufblock->reset();
         bufblock->fill(s->_bytesReceived);
         if (s->_bytesReceived < ((ICPMsg_t *) buf)->h.msglen) {
           ICP_INCREMENT_DYN_STAT(short_read_stat);
           s->_peer->buf = NULL;
           s->_next_state = READ_NOT_ACTIVE;
           RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
           break;              
         }
       }
       IpEndpoint from;
       if (!s->_peer->ExtToIntRecvSockAddr(&s->_peer->fromaddr.sa, &from.sa)) {
         int status;
         ICPConfigData *cfg = _ICPpr->GetConfig()->globalConfig();
         ICPMsg_t *ICPmsg = (ICPMsg_t *) buf;

         if ((cfg->ICPconfigured() == ICP_MODE_RECEIVE_ONLY) &&
             cfg->ICPReplyToUnknownPeer() &&
             ((ICPmsg->h.version == ICP_VERSION_2) ||
              (ICPmsg->h.version == ICP_VERSION_3)) && (ICPmsg->h.opcode == ICP_OP_QUERY)) {
           if (!_ICPpr->GetConfig()->Lock()) {
             s->_next_state = ADD_PEER;
             RECORD_ICP_STATE_CHANGE(s, 0, ADD_PEER);
             return EVENT_CONT;
           }
           if (!_ICPpr->GetFreePeers() || !_ICPpr->GetFreeSendPeers()) {
             Warning("ICP Peer limit exceeded");
             REC_SignalWarning(REC_SIGNAL_CONFIG_ERROR, "ICP Peer limit exceeded");
             _ICPpr->GetConfig()->Unlock();
             goto invalid_message;
           }
           int icp_reply_port = cfg->ICPDefaultReplyPort();
           if (!icp_reply_port) {
             icp_reply_port = ntohs(ats_ip_port_cast(&s->_peer->fromaddr));
           }
           PeerConfigData *Pcfg = NEW(new PeerConfigData(
               PeerConfigData::CTYPE_SIBLING,
               IpAddr(s->_peer->fromaddr),
               0,
               icp_reply_port
           ));
           ParentSiblingPeer *P = NEW(new ParentSiblingPeer(PEER_SIBLING, Pcfg, _ICPpr, true));
           status = _ICPpr->AddPeer(P);
           ink_release_assert(status);
           status = _ICPpr->AddPeerToSendList(P);
           ink_release_assert(status);

       P->GetChan()->setRemote(P->GetIP());
           Note("ICP Peer added ip=%s", ats_ip_nptop(P->GetIP(), ipb, sizeof(ipb)));
           from = s->_peer->fromaddr;
         } else {
         invalid_message:
           ICP_INCREMENT_DYN_STAT(invalid_sender_stat);
           Debug("icp", "Received msg from invalid sender [%s]",
             ats_ip_nptop(&s->_peer->fromaddr, ipb, sizeof(ipb)));

           s->_peer->buf = NULL;
           s->_next_state = READ_NOT_ACTIVE;
           RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
           break;              
         }
       }
       s->_sender = from;
       s->_rICPmsg_len = s->_bytesReceived;
       ink_assert(s->_buf == NULL);
       s->_buf = s->_peer->buf;
       s->_rICPmsg = (ICPMsg_t *) s->_buf->start();
       s->_peer->buf = NULL;
       if ((s->_rICPmsg->h.version != ICP_VERSION_2)
           && (s->_rICPmsg->h.version != ICP_VERSION_3)) {
         ICP_INCREMENT_DYN_STAT(read_not_v2_icp_stat);
         Debug("icp", "Received (v=%d) !v2 && !v3 msg from sender [%s]",
           (uint32_t) s->_rICPmsg->h.version, ats_ip_nptop(&from, ipb, sizeof(ipb)));

         s->_rICPmsg = NULL;
         s->_buf = NULL;
         s->_next_state = READ_NOT_ACTIVE;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
         break;                
       }
       if (s->_rICPmsg->h.opcode == ICP_OP_QUERY) {
         ICP_INCREMENT_DYN_STAT(icp_remote_query_requests_stat);
         ink_assert(!s->_mycont);
         s->_next_state = AWAITING_CACHE_LOOKUP_RESPONSE;
         RECORD_ICP_STATE_CHANGE(s, 0, AWAITING_CACHE_LOOKUP_RESPONSE);
         if (ICPPeerQueryCont(0, (Event *) 0) == EVENT_DONE) {
           break;              
         } else {
           return EVENT_DONE;  
         }
       } else {
         Debug("icp", "Response for Id=%d, from [%s]",
           s->_rICPmsg->h.requestno, ats_ip_nptop(&s->_sender, ipb, sizeof(ipb)));
         ICP_INCREMENT_DYN_STAT(icp_remote_responses_stat);
         s->_next_state = GET_ICP_REQUEST;
         RECORD_ICP_STATE_CHANGE(s, 0, GET_ICP_REQUEST);
         break;                
       }
     }
   case AWAITING_CACHE_LOOKUP_RESPONSE:
     {
       int status = 0;
       void *data = s->_rICPmsg->un.query.URL;
       int datalen = strlen((const char *) data) + 1;

       if (s->_queryResult == CACHE_EVENT_LOOKUP) {
         Debug("icp", "Sending ICP_OP_HIT for id=%d, [%.*s] to [%s]",
           s->_rICPmsg->h.requestno, datalen, (const char *)data, ats_ip_nptop(&s->_sender, ipb, sizeof(ipb)));
         ICP_INCREMENT_DYN_STAT(icp_cache_lookup_success_stat);
         status = ICPRequestCont::BuildICPMsg(ICP_OP_HIT,
                                              s->_rICPmsg->h.requestno, 0 /* optflags */ , 0 /* optdata */ ,
                                              0 /* shostid */ ,
                                              data, datalen, &s->_mhdr, s->_iov, s->_rICPmsg);
       } else if (s->_queryResult == CACHE_EVENT_LOOKUP_FAILED) {
         Debug("icp", "Sending ICP_OP_MISS for id=%d, [%.*s] to [%s]",
           s->_rICPmsg->h.requestno, datalen, (const char *)data, ats_ip_nptop(&s->_sender, ipb, sizeof(ipb)));
         ICP_INCREMENT_DYN_STAT(icp_cache_lookup_fail_stat);
         status = ICPRequestCont::BuildICPMsg(ICP_OP_MISS,
                                              s->_rICPmsg->h.requestno, 0 /* optflags */ , 0 /* optdata */ ,
                                              0 /* shostid */ ,
                                              data, datalen, &s->_mhdr, s->_iov, s->_rICPmsg);
       } else {
         Warning("Bad cache lookup event: %d", s->_queryResult);
         ink_release_assert(!"Invalid cache lookup event");
       }
       ink_assert(status == 0);
       ICPlog logentry(s);
       LogAccessICP accessor(&logentry);
       Log::access(&accessor);
       s->_next_state = SEND_REPLY;
       RECORD_ICP_STATE_CHANGE(s, 0, SEND_REPLY);

       if (_recursion_depth > 0) {
         return EVENT_DONE;
       } else {
         break;
       }
     }
   case SEND_REPLY:
     {
       ink_release_assert(_recursion_depth == 0);
       s->_next_state = WRITE_DONE;
       RECORD_ICP_STATE_CHANGE(s, 0, WRITE_DONE);
       ink_assert(s->_peer->writeAction == NULL);
       Action *a = s->_peer->SendMsg_re(this, this,
                                        &s->_mhdr, &s->_sender.sa);
       if (!a) {
         a = ACTION_IO_ERROR;
       }
       if (a == ACTION_RESULT_DONE) {
         break;
       } else if (a == ACTION_IO_ERROR) {
         ICP_INCREMENT_DYN_STAT(query_response_partial_write_stat);
         Debug("icp_warn", "ICP response send, sent=%d res=%d, ip=%s",
           ntohs(s->_rICPmsg->h.msglen), -1, ats_ip_ntop(&s->_sender, ipb, sizeof(ipb)));
         s->_next_state = READ_NOT_ACTIVE;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
         break;
       } else {
         s->_peer->writeAction = a;
         return EVENT_DONE;
       }
     }
   case WRITE_DONE:
     {
       s->_peer->writeAction = NULL;
       int len = completionUtil::getBytesTransferred(e);

       if (len == (int)ntohs(s->_rICPmsg->h.msglen)) {
         ICP_INCREMENT_DYN_STAT(query_response_write_stat);
         s->_peer->LogSendMsg(s->_rICPmsg, &s->_sender.sa);      
       } else {
         ICP_INCREMENT_DYN_STAT(query_response_partial_write_stat);
         Debug("icp_warn", "ICP response send, sent=%d res=%d, ip=%s",
           ntohs(s->_rICPmsg->h.msglen), len, ats_ip_ntop(&s->_sender, ipb, sizeof(ipb)));
       }
       s->_next_state = READ_NOT_ACTIVE;
       RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
       Debug("icp", "state->READ_NOT_ACTIVE");

       if (_recursion_depth > 0) {
         return EVENT_DONE;
       } else {
         break;            
       }
     }
   case GET_ICP_REQUEST:
     {
       ink_release_assert(_recursion_depth == 0);
       ink_assert(s->_rICPmsg && s->_rICPmsg_len);    
       s->_ICPReqCont = ICPRequestCont::FindICPRequest(s->_rICPmsg->h.requestno);
       if (s->_ICPReqCont) {
         s->_next_state = GET_ICP_REQUEST_MUTEX;
         RECORD_ICP_STATE_CHANGE(s, 0, GET_ICP_REQUEST_MUTEX);
         break;                
       }
       Debug("icp", "No ICP Request for Id=%d", s->_rICPmsg->h.requestno);
       ICP_INCREMENT_DYN_STAT(no_icp_request_for_response_stat);
       Peer *p = _ICPpr->FindPeer(s->_sender);
       p->LogRecvMsg(s->_rICPmsg, 0);
       s->_next_state = READ_NOT_ACTIVE;
       RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
       break;                  
     }
   case GET_ICP_REQUEST_MUTEX:
     {
       ink_release_assert(_recursion_depth == 0);
       ink_assert(s->_ICPReqCont);
       Ptr<ProxyMutex> ICPReqContMutex(s->_ICPReqCont->mutex);
       EThread *ethread = this_ethread();
       ink_hrtime request_start_time;
       if (!MUTEX_TAKE_TRY_LOCK(ICPReqContMutex, ethread)) {
         ICP_INCREMENT_DYN_STAT(icp_response_request_nolock_stat);
         s->_ICPReqCont = (ICPRequestCont *) 0;
         s->_next_state = GET_ICP_REQUEST;
         RECORD_ICP_STATE_CHANGE(s, 0, GET_ICP_REQUEST);
         return EVENT_CONT;
       }
       Peer *p = _ICPpr->FindPeer(s->_sender);
       p->LogRecvMsg(s->_rICPmsg, 1);
       ICPRequestCont::ICPRequestEventArgs_t args;
       args.rICPmsg = s->_rICPmsg;
       args.rICPmsg_len = s->_rICPmsg_len;
       args.peer = p;
       if (!s->_ICPReqCont->GetActionPtr()->cancelled) {
         request_start_time = s->_ICPReqCont->GetRequestStartTime();
         Debug("icp", "Passing Reply for ICP Id=%d", s->_rICPmsg->h.requestno);
         s->_ICPReqCont->handleEvent((int) ICP_RESPONSE_MESSAGE, (void *) &args);
       } else {
         request_start_time = 0;
         delete s->_ICPReqCont;
         Debug("icp", "User cancelled ICP request Id=%d", s->_rICPmsg->h.requestno);
       }
       s->_ICPReqCont = 0;
       MUTEX_UNTAKE_LOCK(ICPReqContMutex, ethread);
       if (request_start_time) {
         ICP_SUM_DYN_STAT(total_icp_response_time_stat, (ink_get_hrtime() - request_start_time));
       }
       RECORD_ICP_STATE_CHANGE(s, 0, READ_NOT_ACTIVE);
       s->_next_state = READ_NOT_ACTIVE;
       break;                
     }
   case READ_NOT_ACTIVE:
   case READ_NOT_ACTIVE_EXIT:
     {
       ink_release_assert(_recursion_depth == 0);
       if (!_ICPpr->Lock())
         return EVENT_CONT;  
       _ICPpr->DecPendingQuery();
       _ICPpr->Unlock();
       s->_buf = 0;
       if (s->_next_state == READ_NOT_ACTIVE_EXIT) {
         s->_next_state = READ_PROCESSING_COMPLETE;
         return EVENT_DONE;
       } else {
         s->reset();
         s->_start_time = ink_get_hrtime();
         s->_next_state = READ_ACTIVE;
         RECORD_ICP_STATE_CHANGE(s, 0, READ_ACTIVE);
         break;                
       }
     }
   case READ_PROCESSING_COMPLETE:
   default:
     ink_release_assert(0);    

   }                          

 }                            
}

    This concludes the annotated walkthrough of ICP initialization, startup, and the read path entered after startup. To sum up: ICP initialization reads the ICP-related configuration and initializes the corresponding data structures; once started, the module periodically picks up ICP query requests, looks them up in the cache, and sends the result back to the requester. Time being limited, these annotations cannot be complete; they are primarily my own notes so I do not forget all this later, so if you are reading them, unclear or mistaken comments are inevitable, and I ask for your understanding.
