Server developers have long strived to build high-performance server programs, and one of the most important bottlenecks is the server's network-processing module. If a server cannot process user data promptly, then no matter how efficient its upper-level business logic is, the effort is wasted; a server's network-processing capability therefore directly determines its overall performance. This article describes how to develop a high-performance network-processing module on the Windows platform, along with problems I ran into and lessons I learned while designing and developing a server network module. This part covers the design of a TCP server; the next part will cover the design of a UDP server.
As is well known, the best way for a server to handle network I/O on Windows is the I/O completion port, so this server is built on the completion-port model. A completion port is a mechanism by which an application uses a thread pool to process asynchronous I/O requests. Once a socket has been associated with a completion port, I/O operations can be posted on that socket; when an operation completes, the I/O system queues a notification packet to the completion port, and the application retrieves these packets with GetQueuedCompletionStatus() and processes them accordingly. With that background, let us get to the heart of the matter: developing the TCP server.
My TCP server development went through two stages. The network layer I designed the first time could only support 5,000 to 8,000 online users interacting with the server concurrently, and it suffered from inexplicable system exceptions, so it was not very stable. That first version was built on a system I/O thread-pool function, BindIoCompletionCallback(), which is available on Windows 2000 and later. BindIoCompletionCallback() performs I/O processing with a system-managed thread pool: the user does not need to create the completion port or the thread pool, since both are created and maintained by the system, which is very convenient. You simply pass your socket and an I/O callback function to BindIoCompletionCallback(), then post operations on that socket; when an operation completes, the system calls your callback to notify you. This approach is so convenient that a developer can produce a reasonably high-performance network program without even understanding how completion ports work. But it also brings plenty of trouble: you cannot know how many worker threads are serving the completion port; when too many users connect to the server, thread-stack errors and other exceptions appear; and after 1,000 to 2,000 users disconnect, the server can no longer accept subsequent connections. With this design the network layer supported at most 4,000 to 5,000 concurrent connections; beyond that, system exceptions occurred that I could not resolve.
Drawing on the experience and lessons of that first attempt, for the second version of the server's TCP layer I decided to create the completion port and the worker thread pool myself and to maintain and manage them directly; the benefit is that when problems occur they are much easier to locate and handle. Below I present my code for discussion; if anything is written poorly, I hope you will point it out. You are welcome to contact me: [email protected], QQ: 24633959, MSN: [email protected]
1. First, the definition of the network context (NET_CONTEXT):
class NET_CONTEXT
{
public:
WSAOVERLAPPED m_ol;
SOCKET m_hSock;
CHAR* m_pBuf; //buffer for received or sent data
INT m_nOperation; //operation posted on this context: OP_ACCEPT, ...
static DWORD S_PAGE_SIZE; //maximum capacity of the buffer
NET_CONTEXT();
virtual ~NET_CONTEXT();
static void InitReource();
static void ReleaseReource();
private:
void* operator new (size_t nSize);
void operator delete(void* p);
static HANDLE s_hDataHeap;
static vector<char * > s_IDLQue; //queue of idle data buffers
static CRITICAL_SECTION s_IDLQueLock; //lock protecting s_IDLQue
};
NET_CONTEXT is the base class of all network contexts; the contexts for TCP recv, send, accept, and connect all derive from it, as do the UDP send and recv contexts. m_ol must be the first member, otherwise the net_context cannot be recovered correctly from the completion packet. S_PAGE_SIZE is the size of the data buffer m_pBuf and depends on the operating-system platform: 4096 on Win32 and 8192 on Win64, i.e. one memory page of the operating system. The buffer is set to one memory page because when an overlapped operation is posted the system locks the posted buffer, and locking is done on memory-page boundaries, so even if you send only 1 KB of data the system locks the whole page (4096/8192 bytes). s_hDataHeap is a private heap from which the buffers are allocated; its advantage is that we can manage and operate on the heap ourselves. s_IDLQue is the queue of used buffers: when the user is done with a NET_CONTEXT, its destructor does not actually free the memory of m_pBuf but pushes it onto s_IDLQue; the next time a new NET_CONTEXT is requested, a buffer is simply popped from s_IDLQue and reused, avoiding frequent new and delete operations.
2. The packet header definition:
struct PACKET_HEAD
{
LONG nTotalLen; //total length of the packet
ULONG nSerialNum; //serial number of the packet
WORD nCurrentLen; //length of the current packet
WORD nType; //type of the packet
};
The packet header sits at the front of every received or outgoing data packet and is used to determine whether a received packet is valid and what it is for. You can define your own header.
3. TCP_CONTEXT mainly defines the buffers for receiving and sending data; it derives from NET_CONTEXT:
class TCP_CONTEXT : public NET_CONTEXT
{
friend class TcpServer;
protected:
DWORD m_nDataLen; //accumulated length of sent or received data in TCP mode
TCP_CONTEXT()
: m_nDataLen(0)
{
}
virtual ~TCP_CONTEXT() {}
void* operator new(size_t nSize);
void operator delete(void* p);
enum
{
E_TCP_HEAP_SIZE = 1024 * 1024* 10,
MAX_IDL_DATA = 20000,
};
private:
static vector<TCP_CONTEXT* > s_IDLQue; //queue of idle contexts
static CRITICAL_SECTION s_IDLQueLock; //lock protecting s_IDLQue
static HANDLE s_hHeap; //heap from which TCP_CONTEXT objects are allocated
};
TCP_CONTEXT is the context used for sending and receiving data on the network; every SOCKET connected to the server has a TCP_CONTEXT for sending and receiving data. The new and delete operators are overloaded here. The advantage is that when a new TCP_CONTEXT object is requested, the idle queue is checked first for an unused TCP_CONTEXT, which is taken and reused if available; otherwise a fresh one is allocated from the s_hHeap heap. The new operator is defined as follows:
void* TCP_CONTEXT::operator new(size_t nSize)
{
void* pContext = NULL;
try
{
if (NULL == s_hHeap)
{
throw ((long)(__LINE__));
}
//allocate memory for the new TCP_CONTEXT: try the idle queue first; if it is empty, allocate from the heap
EnterCriticalSection(&s_IDLQueLock);
vector<TCP_CONTEXT* >::iterator iter = s_IDLQue.begin();
if (iter != s_IDLQue.end())
{
pContext = *iter;
s_IDLQue.erase(iter);
}
else
{
pContext = HeapAlloc(s_hHeap, HEAP_ZERO_MEMORY | HEAP_NO_SERIALIZE, nSize);
}
LeaveCriticalSection(&s_IDLQueLock);
if (NULL == pContext)
{
throw ((long)(__LINE__));
}
}
catch (const long& iErrCode)
{
pContext = NULL;
_TRACE("\r\nExcept : %s--%ld", __FILE__, iErrCode);
}
return pContext;
}
When a TCP_CONTEXT is no longer needed, delete is called to reclaim the memory. On reclamation, if the idle queue holds fewer than MAX_IDL_DATA entries the object is pushed onto s_IDLQue; otherwise it is actually freed. The delete operator is implemented as follows:
void TCP_CONTEXT::operator delete(void* p)
{
if (p)
{
//if the idle queue is shorter than MAX_IDL_DATA, push the object onto it; otherwise free it
EnterCriticalSection(&s_IDLQueLock);
const DWORD QUE_SIZE = (DWORD)(s_IDLQue.size());
TCP_CONTEXT* const pContext = (TCP_CONTEXT*)p;
if (QUE_SIZE <= MAX_IDL_DATA)
{
s_IDLQue.push_back(pContext);
}
else
{
HeapFree(s_hHeap, HEAP_NO_SERIALIZE, p);
}
LeaveCriticalSection(&s_IDLQueLock);
}
return;
}
4. ACCEPT_CONTEXT is mainly used to post AcceptEx operations; it derives from NET_CONTEXT:
class ACCEPT_CONTEXT : public NET_CONTEXT
{
friend class TcpServer;
protected:
SOCKET m_hRemoteSock; //the client socket connecting to this server
ACCEPT_CONTEXT()
: m_hRemoteSock(INVALID_SOCKET)
{
}
virtual ~ACCEPT_CONTEXT() {}
void* operator new(size_t nSize);
void operator delete(void* p);
private:
static vector<ACCEPT_CONTEXT* > s_IDLQue; //queue of idle contexts
static CRITICAL_SECTION s_IDLQueLock; //lock protecting s_IDLQue
static HANDLE s_hHeap; //private heap for ACCEPT_CONTEXT
};
5. TCP_RCV_DATA: when a server socket receives data from the network and the data is valid, a new TCP_RCV_DATA instance is allocated to store the received data. It is defined as follows:
class DLLENTRY TCP_RCV_DATA
{
friend class TcpServer;
public:
SOCKET m_hSocket; //the socket this data is associated with
CHAR* m_pData; //address of the data buffer
INT m_nLen; //length of the received data
TCP_RCV_DATA(SOCKET hSock, const CHAR* pBuf, INT nLen);
~TCP_RCV_DATA();
void* operator new(size_t nSize);
void operator delete(void* p);
enum
{
HEAP_SIZE = 1024 *1024* 50,
DATA_HEAP_SIZE = 1024 *1024 * 100,
MAX_IDL_DATA = 100000,
};
private:
static vector<TCP_RCV_DATA* > s_IDLQue; //queue of idle instances
static CRITICAL_SECTION s_IDLQueLock; //lock protecting s_IDLQue
static HANDLE s_hHeap;
static HANDLE s_DataHeap;
};
6. The data structures above all serve the TcpServer class explored below. TcpServer is the core data structure of this article; it is mainly used to start the service, manage connections, and so on.
class DLLENTRY TcpServer
{
public:
TcpServer();
~TcpServer();
/************************************************************************
* Desc : Initializes the related static resources; must be called
* before any TCP instance is created
************************************************************************/
static void InitReource();
/************************************************************************
* Desc : Releases the related static resources
************************************************************************/
static void ReleaseReource();
/****************************************************
* Name : StartServer()
* Desc : Starts the TCP service
****************************************************/
BOOL StartServer(
const char *szIp //local address to listen on; NULL selects the default address
, INT nPort //port to listen on
, LPCLOSE_ROUTINE pCloseFun //callback invoked when a client socket closes
, LPVOID pParam //user parameter passed to the close callback
);
/****************************************************
* Name : CloseServer()
* Desc : Stops the TCP service
****************************************************/
void CloseServer();
/****************************************************
* Name : SendData()
* Desc : Sends nDataLen bytes of data to the client socket hSock
****************************************************/
BOOL SendData(SOCKET hSock, const CHAR* szData, INT nDataLen);
/****************************************************
* Name : GetRcvData()
* Desc : Retrieves one received packet from the receive queue;
* if pQueLen is not NULL, the queue length is returned through it
****************************************************/
TCP_RCV_DATA* GetRcvData(
DWORD* const pQueLen
);
protected:
enum
{
LISTEN_EVENTS = 2, //number of events on the listening socket
MAX_ACCEPT = 50, //maximum number of accept operations posted at one time
_SOCK_NO_RECV = 0xf0000000, //client socket connected but has not yet sent data
_SOCK_RECV = 0xf0000001 //client socket connected and data received
};
vector<TCP_RCV_DATA* > m_RcvDataQue; //queue of received data buffers
CRITICAL_SECTION m_RcvQueLock; //lock protecting m_RcvDataQue
vector<SOCKET> m_SocketQue; //queue of client sockets connected to this server
CRITICAL_SECTION m_SockQueLock; //lock protecting m_SocketQue
LPCLOSE_ROUTINE m_pCloseFun; //callback invoked when a client socket closes
LPVOID m_pCloseParam; //user parameter passed to m_pCloseFun
SOCKET m_hSock; //the listening socket of this server
long volatile m_bThreadRun; //whether the background threads may keep running
long volatile m_nAcceptCount; //number of accept operations currently pending
BOOL m_bSerRun; //whether the service is running
//events for accept processing
HANDLE m_ListenEvents[LISTEN_EVENTS];
HANDLE *m_pThreads; //handles of the background threads
HANDLE m_hCompletion; //completion port handle
static LPFN_ACCEPTEX s_pfAcceptEx; //address of AcceptEx
// address of GetAcceptExSockaddrs
static LPFN_GETACCEPTEXSOCKADDRS s_pfGetAddrs;
/****************************************************
* Name : AcceptCompletionProc()
* Desc : Callback invoked when an AcceptEx operation completes
****************************************************/
void AcceptCompletionProc(BOOL bSuccess, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped);
/****************************************************
* Name : RecvCompletionProc()
* Desc : Callback invoked when a receive operation completes
****************************************************/
void RecvCompletionProc(BOOL bSuccess, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped);
/****************************************************
* Name : SendCompletionProc()
* Desc : Callback invoked when a send operation completes
****************************************************/
void SendCompletionProc(BOOL bSuccess, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped);
/****************************************************
* Name : ListenThread()
* Desc : The listening thread
****************************************************/
static UINT WINAPI ListenThread(LPVOID lpParam);
/****************************************************
* Name : WorkThread()
* Desc : Worker threads serving the completion port
****************************************************/
static UINT WINAPI WorkThread(LPVOID lpParam);
/****************************************************
* Name : AideThread()
* Desc : Auxiliary background thread
****************************************************/
static UINT WINAPI AideThread(LPVOID lpParam);
};
The implementation details are described below.
You may have noticed that this class provides only a notification interface for client socket closure, and none for client connection. The main reason is that a successful connection would have to be reported from a completion-port I/O thread, and if the user performed heavy computation inside that callback it would block the I/O worker thread; therefore no connection notification is exposed. In practice the user can treat a specific packet sent by the client (for example a login packet) as the signal that the client has connected to this server.
When a client connects to the server, a posted accept operation completes and the event object m_ListenEvents[1] becomes signaled; ListenThread then wakes up and posts one more accept operation. If a large number of clients connect and there are not enough pending accepts to take the connections, m_ListenEvents[0] becomes signaled and ListenThread posts MAX_ACCEPT more accept operations to take on more connections.
ListenThread is mainly responsible for posting AcceptEx operations: whenever m_ListenEvents[0] or m_ListenEvents[1] is signaled, it posts a batch of AcceptEx operations so that more clients can be accepted.
The WorkThread threads serve the completion port: when operations complete, this group of threads pulls the corresponding completion packets off the port's queue and processes them. AideThread maintains the queue of sockets connected to this server: if a client connects but sends no data for a long time, it is disconnected, to guard against malicious connections. When a client disconnects, it is also this thread that invokes the close callback to notify the user.
The relevant functions are introduced below:
● TcpServer(): initializes the member variables and creates the completion port and the background threads.
TcpServer::TcpServer()
: m_pCloseFun(NULL)
, m_hSock(INVALID_SOCKET)
, m_pCloseParam(NULL)
, m_bThreadRun(TRUE)
, m_bSerRun(FALSE)
, m_nAcceptCount(0)
{
m_RcvDataQue.reserve(10000 * sizeof(void *));
m_SocketQue.reserve(50000 * sizeof(SOCKET));
InitializeCriticalSection(&m_RcvQueLock);
InitializeCriticalSection(&m_SockQueLock);
//create the listen events
for (int nIndex = 0; nIndex < LISTEN_EVENTS; nIndex ++)
{
m_ListenEvents[nIndex] = CreateEvent(NULL, FALSE, FALSE, NULL);
}
//create the completion port
m_hCompletion = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
//create the auxiliary thread, the listening thread, and the worker threads; the number of worker threads is CPUs * 2 + 2
SYSTEM_INFO sys_info;
GetSystemInfo(&sys_info);
const DWORD MAX_THREAD = sys_info.dwNumberOfProcessors * 2 +2 + 2;
m_pThreads = new HANDLE[MAX_THREAD];
assert(m_pThreads);
m_pThreads[0] = (HANDLE)_beginthreadex(NULL, 0, ListenThread, this, 0, NULL);
m_pThreads[1] = (HANDLE)_beginthreadex(NULL, 0, AideThread, this, 0, NULL);
for (DWORD nIndex = 2; nIndex < MAX_THREAD; nIndex++)
{
m_pThreads[nIndex] = (HANDLE)_beginthreadex(NULL, 0, WorkThread, this, 0, NULL);
}
}
● StartServer(): starts the service and posts MAX_ACCEPT accept operations to take client connections.
BOOL TcpServer::StartServer( const char *szIp , INT nPort , LPCLOSE_ROUTINE pCloseFun , LPVOID pParam )
{
BOOL bSucc = TRUE;
int nRet = 0;
DWORD dwBytes = 0;
ULONG ul = 1;
int nOpt = 1;
try
{
//if the service is already running, a new one must not be started
if (m_bSerRun || m_nAcceptCount)
{
THROW_LINE;
}
m_pCloseFun = pCloseFun;
m_pCloseParam = pParam;
m_bSerRun = TRUE;
//create the listening socket
m_hSock = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
if (INVALID_SOCKET == m_hSock)
{
THROW_LINE;
}
//load the AcceptEx function
GUID guidProc = WSAID_ACCEPTEX;
if (NULL == s_pfAcceptEx)
{
nRet = WSAIoctl(m_hSock, SIO_GET_EXTENSION_FUNCTION_POINTER, &guidProc, sizeof(guidProc)
, &s_pfAcceptEx, sizeof(s_pfAcceptEx), &dwBytes, NULL, NULL);
}
if (NULL == s_pfAcceptEx || SOCKET_ERROR == nRet)
{
THROW_LINE;
}
//load the GetAcceptExSockaddrs function
GUID guidGetAddr = WSAID_GETACCEPTEXSOCKADDRS;
dwBytes = 0;
if (NULL == s_pfGetAddrs)
{
nRet = WSAIoctl(m_hSock, SIO_GET_EXTENSION_FUNCTION_POINTER, &guidGetAddr, sizeof(guidGetAddr)
, &s_pfGetAddrs, sizeof(s_pfGetAddrs), &dwBytes, NULL, NULL);
}
if (NULL == s_pfGetAddrs)
{
THROW_LINE;
}
ioctlsocket(m_hSock, FIONBIO, &ul);
//enable address reuse so the service can be restarted on this port immediately after closing
setsockopt(m_hSock, SOL_SOCKET, SO_REUSEADDR, (char*)&nOpt, sizeof(nOpt));
sockaddr_in LocalAddr;
LocalAddr.sin_family = AF_INET;
LocalAddr.sin_port = htons(nPort);
if (szIp)
{
LocalAddr.sin_addr.s_addr = inet_addr(szIp);
}
else
{
LocalAddr.sin_addr.s_addr = htonl(INADDR_ANY);
}
nRet = bind(m_hSock, (sockaddr*)&LocalAddr, sizeof(LocalAddr));
if (SOCKET_ERROR == nRet)
{
THROW_LINE;
}
nRet = listen(m_hSock, 200);
if (SOCKET_ERROR == nRet)
{
THROW_LINE;
}
//associate the listening socket with the completion port
CreateIoCompletionPort((HANDLE)m_hSock, m_hCompletion, 0, 0);
WSAEventSelect(m_hSock, m_ListenEvents[0], FD_ACCEPT);
//post MAX_ACCEPT AcceptEx operations
for (int nIndex = 0; nIndex < MAX_ACCEPT; )
{
SOCKET hClient = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
if (INVALID_SOCKET == hClient)
{
continue;
}
ul = 1;
ioctlsocket(hClient, FIONBIO, &ul);
ACCEPT_CONTEXT* pAccContext = new ACCEPT_CONTEXT();
if (NULL == pAccContext)
{
THROW_LINE;
}
pAccContext->m_hSock = m_hSock;
pAccContext->m_hRemoteSock = hClient;
pAccContext->m_nOperation = OP_ACCEPT;
nRet = s_pfAcceptEx(m_hSock, hClient, pAccContext->m_pBuf, 0
, sizeof(sockaddr_in) +16, sizeof(sockaddr_in) +16, &dwBytes, &(pAccContext->m_ol));
if (FALSE == nRet && ERROR_IO_PENDING != WSAGetLastError())
{
closesocket(hClient);
delete pAccContext;
pAccContext = NULL;
THROW_LINE;
}
else
{
InterlockedExchangeAdd(&m_nAcceptCount, 1);
}
nIndex++;
}
}
catch (const long &lErrLine)
{
bSucc = FALSE;
m_bSerRun = FALSE;
_TRACE("Exp : %s -- %ld", __FILE__, lErrLine);
}
return bSucc;
}
● ListenThread(): posts AcceptEx operations to accept client connections.
UINT WINAPI TcpServer::ListenThread(LPVOID lpParam)
{
TcpServer *pThis = (TcpServer *)lpParam;
try
{
int nRet = 0;
DWORD nEvents = 0;
DWORD dwBytes = 0;
int nAccept = 0;
while (TRUE)
{
nEvents = WSAWaitForMultipleEvents(LISTEN_EVENTS, pThis->m_ListenEvents, FALSE, WSA_INFINITE, FALSE);
//if the wait fails, the thread exits
if (WSA_WAIT_FAILED == nEvents)
{
THROW_LINE;
}
else
{
nEvents = nEvents - WAIT_OBJECT_0;
if (0 == nEvents)
{
nAccept = MAX_ACCEPT;
}
else if (1 == nEvents)
{
nAccept = 1;
}
//at most 200 AcceptEx operations may be pending at once
if (InterlockedExchangeAdd(&(pThis->m_nAcceptCount), 0) > 200)
{
nAccept = 0;
}
for (int nIndex = 0; nIndex < nAccept; )
{
SOCKET hClient = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
if (INVALID_SOCKET == hClient)
{
continue;
}
ULONG ul = 1;
ioctlsocket(hClient, FIONBIO, &ul);
ACCEPT_CONTEXT* pAccContext = new ACCEPT_CONTEXT();
if (pAccContext && pAccContext->m_pBuf)
{
pAccContext->m_hSock = pThis->m_hSock;
pAccContext->m_hRemoteSock = hClient;
pAccContext->m_nOperation = OP_ACCEPT;
nRet = s_pfAcceptEx(pThis->m_hSock, hClient, pAccContext->m_pBuf, 0
, sizeof(sockaddr_in) +16, sizeof(sockaddr_in) +16, &dwBytes, &(pAccContext->m_ol));
if (FALSE == nRet && ERROR_IO_PENDING != WSAGetLastError())
{
closesocket(hClient);
delete pAccContext;
pAccContext = NULL;
}
else
{
InterlockedExchangeAdd(&(pThis->m_nAcceptCount), 1);
}
}
else
{
delete pAccContext;
}
nIndex++;
}
}
if (FALSE == InterlockedExchangeAdd(&(pThis->m_bThreadRun), 0))
{
THROW_LINE;
}
}
}
catch ( const long &lErrLine)
{
_TRACE("Exp : %s -- %ld", __FILE__, lErrLine);
}
return 0;
}
● CloseServer(): stops the service.
void TcpServer::CloseServer()
{
//close all sockets
closesocket(m_hSock);
EnterCriticalSection(&m_SockQueLock);
for (vector<SOCKET>::iterator iter_sock = m_SocketQue.begin(); m_SocketQue.end() != iter_sock; iter_sock++)
{
closesocket(*iter_sock);
}
LeaveCriticalSection(&m_SockQueLock);
m_bSerRun = FALSE;
}
● SendData(): sends data.
BOOL TcpServer::SendData(SOCKET hSock, const CHAR* szData, INT nDataLen)
{
#ifdef _XML_NET_
//invalid data length
if (((DWORD)nDataLen > TCP_CONTEXT::S_PAGE_SIZE) || (NULL == szData))
{
return FALSE;
}
#else
//invalid data length
if ((nDataLen > (int)(TCP_CONTEXT::S_PAGE_SIZE)) || (NULL == szData) || (nDataLen < sizeof(PACKET_HEAD)))
{
return FALSE;
}
#endif //#ifdef _XML_NET_
BOOL bResult = TRUE;
DWORD dwBytes = 0;
WSABUF SendBuf;
TCP_CONTEXT *pSendContext = new TCP_CONTEXT();
if (pSendContext && pSendContext->m_pBuf)
{
pSendContext->m_hSock = hSock;
pSendContext->m_nDataLen = 0;
pSendContext->m_nOperation = OP_WRITE;
memcpy(pSendContext->m_pBuf, szData, nDataLen);
SendBuf.buf = pSendContext->m_pBuf;
SendBuf.len = nDataLen;
assert(szData);
INT iErr = WSASend(pSendContext->m_hSock, &SendBuf, 1, &dwBytes, 0, &(pSendContext->m_ol), NULL);
if (SOCKET_ERROR == iErr && ERROR_IO_PENDING != WSAGetLastError())
{
delete pSendContext;
pSendContext = NULL;
_TRACE("\r\n%s : %ld LAST_ERROR = %ld", __FILE__, __LINE__, WSAGetLastError());
bResult = FALSE;
}
}
else
{
delete pSendContext;
bResult = FALSE;
}
return bResult;
}
● GetRcvData(): retrieves a packet from the received-data queue.
TCP_RCV_DATA * TcpServer::GetRcvData( DWORD* const pQueLen )
{
TCP_RCV_DATA* pRcvData = NULL;
EnterCriticalSection(&m_RcvQueLock);
vector<TCP_RCV_DATA*>::iterator iter = m_RcvDataQue.begin();
if (m_RcvDataQue.end() != iter)
{
pRcvData = *iter;
m_RcvDataQue.erase(iter);
}
if (NULL != pQueLen)
{
*pQueLen = (DWORD)(m_RcvDataQue.size());
}
LeaveCriticalSection(&m_RcvQueLock);
return pRcvData;
}
● WorkThread(): the worker threads.
UINT WINAPI TcpServer::WorkThread(LPVOID lpParam)
{
TcpServer *pThis = (TcpServer *)lpParam;
DWORD dwTrans = 0, dwKey = 0, dwSockSize = 0;
LPOVERLAPPED pOl = NULL;
NET_CONTEXT *pContext = NULL;
BOOL bRun = TRUE;
while (TRUE)
{
BOOL bOk = GetQueuedCompletionStatus(pThis->m_hCompletion, &dwTrans, &dwKey, (LPOVERLAPPED *)&pOl, WSA_INFINITE);
pContext = CONTAINING_RECORD(pOl, NET_CONTEXT, m_ol);
if (pContext)
{
switch (pContext->m_nOperation)
{
case OP_ACCEPT:
pThis->AcceptCompletionProc(bOk, dwTrans, pOl);
break;
case OP_READ:
pThis->RecvCompletionProc(bOk, dwTrans, pOl);
break;
case OP_WRITE:
pThis->SendCompletionProc(bOk, dwTrans, pOl);
break;
}
}
EnterCriticalSection(&(pThis->m_SockQueLock));
dwSockSize = (DWORD)(pThis->m_SocketQue.size());
if (FALSE == InterlockedExchangeAdd(&(pThis->m_bThreadRun), 0) && 0 == dwSockSize
&& 0 == InterlockedExchangeAdd(&(pThis->m_nAcceptCount), 0))
{
bRun = FALSE;
}
LeaveCriticalSection(&(pThis->m_SockQueLock));
if (FALSE == bRun)
{
break;
}
}
return 0;
}
● AcceptCompletionProc(): called when a client connects to the server.
void TcpServer::AcceptCompletionProc(BOOL bSuccess, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped)
{
ACCEPT_CONTEXT *pContext = CONTAINING_RECORD(lpOverlapped, ACCEPT_CONTEXT, m_ol);
INT nZero = 0;
int nPro = _SOCK_NO_RECV;
IP_ADDR* pClientAddr = NULL;
IP_ADDR* pLocalAddr = NULL;
INT nClientLen = 0;
INT nLocalLen = 0;
int iErrCode;
DWORD nFlag = 0;
DWORD nBytes = 0;
WSABUF RcvBuf;
if (bSuccess)
{
setsockopt(pContext->m_hRemoteSock, SOL_SOCKET, SO_SNDBUF, (char*)&nZero, sizeof(nZero));
setsockopt(pContext->m_hRemoteSock, SOL_SOCKET, SO_RCVBUF, (CHAR*)&nZero, sizeof(nZero));
setsockopt(pContext->m_hRemoteSock, SOL_SOCKET, SO_UPDATE_ACCEPT_CONTEXT, (char*)&(pContext->m_hSock), sizeof(pContext->m_hSock));
setsockopt(pContext->m_hRemoteSock, SOL_SOCKET, SO_GROUP_PRIORITY, (char *)&nPro, sizeof(nPro));
s_pfGetAddrs(pContext->m_pBuf, 0, sizeof(sockaddr_in) +16, sizeof(sockaddr_in) +16
, (LPSOCKADDR*)&pLocalAddr, &nLocalLen, (LPSOCKADDR*)&pClientAddr, &nClientLen);
//post a read operation for the new connection
TCP_CONTEXT *pRcvContext = new TCP_CONTEXT;
if (pRcvContext && pRcvContext->m_pBuf)
{
pRcvContext->m_hSock = pContext->m_hRemoteSock;
pRcvContext->m_nOperation = OP_READ;
CreateIoCompletionPort((HANDLE)(pRcvContext->m_hSock), m_hCompletion, NULL, 0);
RcvBuf.buf = pRcvContext->m_pBuf;
RcvBuf.len = TCP_CONTEXT::S_PAGE_SIZE;
iErrCode = WSARecv(pRcvContext->m_hSock, &RcvBuf, 1, &nBytes, &nFlag, &(pRcvContext->m_ol), NULL);
//the post failed
if (SOCKET_ERROR == iErrCode && WSA_IO_PENDING != WSAGetLastError())
{
closesocket(pRcvContext->m_hSock);
delete pRcvContext;
pRcvContext = NULL;
_TRACE("\r\n%s : %ld SOCKET = 0x%x LAST_ERROR = %ld", __FILE__, __LINE__, pContext->m_hRemoteSock, WSAGetLastError());
}
else
{
EnterCriticalSection(&m_SockQueLock);
m_SocketQue.push_back(pRcvContext->m_hSock);
LeaveCriticalSection(&m_SockQueLock);
}
}
else
{
delete pRcvContext;
}
SetEvent(m_ListenEvents[1]);
}
else
{
closesocket(pContext->m_hRemoteSock);
_TRACE("\r\n %s -- %ld accept operation failed", __FILE__, __LINE__);
}
InterlockedExchangeAdd(&m_nAcceptCount, -1);
delete pContext;
pContext = NULL;
}
● RecvCompletionProc(): callback invoked when a read operation completes.
void TcpServer::RecvCompletionProc(BOOL bSuccess, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped)
{
TCP_CONTEXT* pRcvContext = CONTAINING_RECORD(lpOverlapped, TCP_CONTEXT, m_ol);
DWORD dwFlag = 0;
DWORD dwBytes = 0;
WSABUF RcvBuf;
int nErrCode = 0;
int nPro = _SOCK_RECV;
try
{
if ((FALSE == bSuccess || 0 == dwNumberOfBytesTransfered) && (WSA_IO_PENDING != WSAGetLastError()))
{
closesocket(pRcvContext->m_hSock);
THROW_LINE;
}
setsockopt(pRcvContext->m_hSock, SOL_SOCKET, SO_GROUP_PRIORITY, (char *)&nPro, sizeof(nPro));
#ifndef _XML_NET_ //handle binary streams
//the client sent an invalid packet; close that client
if (0 == pRcvContext->m_nDataLen && dwNumberOfBytesTransfered < sizeof(PACKET_HEAD))
{
THROW_LINE;
}
#endif //#ifndef _XML_NET_
#ifdef _XML_NET_ //handle XML streams
TCP_RCV_DATA* pRcvData = new TCP_RCV_DATA(
pRcvContext->m_hSock
, pRcvContext->m_pBuf
, dwNumberOfBytesTransfered
);
if (pRcvData && pRcvData->m_pData)
{
EnterCriticalSection(&m_RcvQueLock);
m_RcvDataQue.push_back(pRcvData);
LeaveCriticalSection(&m_RcvQueLock);
}
pRcvContext->m_nDataLen = 0;
RcvBuf.buf = pRcvContext->m_pBuf;
RcvBuf.len = TCP_CONTEXT::S_PAGE_SIZE;
#else //handle binary data streams
//parse the expected packet length from the packet header
pRcvContext->m_nDataLen += dwNumberOfBytesTransfered;
PACKET_HEAD* pHeadInfo = (PACKET_HEAD*)(pRcvContext->m_pBuf);
//process the packet only if its length is valid
if ((pHeadInfo->nCurrentLen <= TCP_CONTEXT::S_PAGE_SIZE)
//&& (0 == dwErrorCode)
&& ((WORD)(pRcvContext->m_nDataLen) <= pHeadInfo->nCurrentLen + sizeof(PACKET_HEAD)))
{
//all data of this packet has been received; push it onto the data queue
if ((WORD)(pRcvContext->m_nDataLen) == pHeadInfo->nCurrentLen + sizeof(PACKET_HEAD))
{
TCP_RCV_DATA* pRcvData = new TCP_RCV_DATA(
pRcvContext->m_hSock
, pRcvContext->m_pBuf
, pRcvContext->m_nDataLen
);
if (pRcvData && pRcvData->m_pData)
{
EnterCriticalSection(&m_RcvQueLock);
m_RcvDataQue.push_back(pRcvData);
LeaveCriticalSection(&m_RcvQueLock);
}
pRcvContext->m_nDataLen = 0;
RcvBuf.buf = pRcvContext->m_pBuf;
RcvBuf.len = TCP_CONTEXT::S_PAGE_SIZE;
}
//not all data has arrived yet; keep receiving
else
{
RcvBuf.buf = pRcvContext->m_pBuf +pRcvContext->m_nDataLen;
RcvBuf.len = pHeadInfo->nCurrentLen - pRcvContext->m_nDataLen +sizeof(PACKET_HEAD);
}
}
//invalid data; simply start the next read
else
{
pRcvContext->m_nDataLen = 0;
RcvBuf.buf = pRcvContext->m_pBuf;
RcvBuf.len = TCP_CONTEXT::S_PAGE_SIZE;
}
#endif //#ifdef _XML_NET_
//post the next read operation
nErrCode = WSARecv(pRcvContext->m_hSock, &RcvBuf, 1, &dwBytes, &dwFlag, &(pRcvContext->m_ol), NULL);
if (SOCKET_ERROR == nErrCode && WSA_IO_PENDING != WSAGetLastError())
{
closesocket(pRcvContext->m_hSock);
THROW_LINE;
}
}
catch (const long &lErrLine)
{
_TRACE("Exp : %s -- %ld SOCKET = 0x%x ERR_CODE = 0x%x", __FILE__, lErrLine, pRcvContext->m_hSock, WSAGetLastError());
delete pRcvContext;
}
}
● SendCompletionProc(): called when a send operation completes.
void TcpServer::SendCompletionProc(BOOL bSuccess, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped)
{
TCP_CONTEXT* pSendContext = CONTAINING_RECORD(lpOverlapped, TCP_CONTEXT, m_ol);
delete pSendContext;
pSendContext = NULL;
}
● AideThread(): the auxiliary background thread, mainly responsible for managing the queue of client sockets connected to this server.
UINT WINAPI TcpServer::AideThread(LPVOID lpParam)
{
TcpServer *pThis = (TcpServer *)lpParam;
try
{
const int SOCK_CHECKS = 10000;
int nSockTime = 0;
int nPro = 0;
int nTimeLen = 0;
vector<SOCKET>::iterator sock_itre = pThis->m_SocketQue.begin();
while (TRUE)
{
for (int index = 0; index < SOCK_CHECKS; index++)
{
nPro = 0;
nSockTime = 0x0000ffff;
// check the socket queue
EnterCriticalSection(&(pThis->m_SockQueLock));
if (pThis->m_SocketQue.end() != sock_itre)
{
nTimeLen = sizeof(nPro);
getsockopt(*sock_itre, SOL_SOCKET, SO_GROUP_PRIORITY, (char *)&nPro, &nTimeLen);
if (_SOCK_RECV != nPro)
{
nTimeLen = sizeof(nSockTime);
getsockopt(*sock_itre, SOL_SOCKET, SO_CONNECT_TIME, (char *)&nSockTime, &nTimeLen);
if (nSockTime > 120)
{
_TRACE("%s -- %ld SOCKET = 0x%x error: S_ERR = 0x%x, nPro = 0x%x, TIME = %ld", __FILE__, __LINE__, *sock_itre, WSAGetLastError(), nPro, nSockTime);
closesocket(*sock_itre);
pThis->m_pCloseFun(pThis->m_pCloseParam, *sock_itre);
sock_itre = pThis->m_SocketQue.erase(sock_itre); //erase returns the next valid iterator
}
else
{
sock_itre++;
}
}
else
{
sock_itre ++;
}
}
else
{
sock_itre = pThis->m_SocketQue.begin();
LeaveCriticalSection(&(pThis->m_SockQueLock));
break;
}
LeaveCriticalSection(&(pThis->m_SockQueLock));
}
if (FALSE == InterlockedExchangeAdd(&(pThis->m_bThreadRun), 0))
{
THROW_LINE;
}
Sleep(100);
}
}
catch (const long &lErrLine)
{
_TRACE("Exp : %s -- %ld", __FILE__, lErrLine);
}
return 0;
}
In testing on 32-bit Windows XP, the server's test program accepted 20K clients connected to the server simultaneously, with CPU usage at only 10% and memory usage around 20 MB.