1. The Think library
A cross-platform C library whose facilities can be used from all kinds of C and C++ programs. It supports AIX, HP-UX, Solaris, FreeBSD, Linux, Mac OS X, and Windows.
After four years of hard work and several complete rewrites, this release is finally done. It is now hosted as an open-source project on unix-center, and any registered user who follows the link can download it. For network monitoring, the library's model is to set the appropriate listen flag on each connection and then call think_netselect to poll them.
2. A cross-platform memory allocator
Early yesterday morning a colleague recommended in our group chat an open-source memory allocator hosted on Google Code (http://code.google.com/p/google-perftools/). Reportedly many Google products use this allocation library, and in his tests, after our game client integrated this allocator the frame rate rose by nearly 10 FPS. That is a remarkable gain; the 3D team had been working for weeks without a performance improvement anywhere near that size.
If we had simply been using the CRT's allocator, such a gain would be nothing special. But our internal system already has a small-object memory manager, and small-object allocation algorithms are generally much alike, with plenty of existing implementations: the Linux kernel's slab allocator, the SGI STL allocator, Ogre's built-in allocator. Ours is similar to all of the above. So let us see what makes this project special.
The main purpose of a small-object allocator is to reduce fragmentation, shrink the proportion of bookkeeping memory, and raise small-allocation utilization. Performance-wise, the system allocator is already optimized for small allocations, so a custom small-object allocator alone will not help much. The real value of a built-in allocator lies in lock-free allocation and in avoiding the overhead of switching into API calls.
The CRT's own new/delete costs around 500 clock cycles, while a critical section (CS) costs about 50 cycles and a mutex about 2000, all in the uncontended case. So if you guard your allocator with a mutex, you might as well use the system allocator; a CS is not necessarily much better either, since its cost grows sharply under lock contention and can even exceed that of a mutex.
The conclusion: for single-threaded code, a built-in allocator has some value; for multi-threaded code, a lock-guarded built-in allocator can basically be written off (at least from Windows XP onward; Windows 2000 apparently needs a patch). Judging from the situation you describe, your original allocator's mutex was quite possibly doing more harm than good.
The one real highlight of tcmalloc should be how it returns memory across threads while staying fast. My guess is some kind of two-level allocation strategy: a memory block may belong to any thread's pool, and whichever thread pool it is returned to manages it from then on. Since each thread's allocations and frees are rarely balanced, some thread pools will fill up while others run dry; presumably the full ones return memory to a shared central pool. The first level allocates without locks; when a thread pool runs low, allocation drops into a second, locked level that refills in batches, drawing first from the central pool; only if that is still insufficient does it call the system allocator, which would count as a third level.
Finally, tcmalloc can also be used with the MT CRT; for details see (access required to view) http://groups.google.com/group/google-perftools/browse_thread/thread/41cd3710af85e57b
3. libusb, a cross-platform C library for USB device access
libusb is a C library that provides generic access to USB devices. It supports Linux, Mac OS X, Windows, Windows CE, Android, and OpenBSD/NetBSD.
Release note: this release marks the merge of the libusbx project back into libusb.
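As a quick orientation to the API, here is a minimal enumeration sketch against the libusb-1.0 interface. The `<libusb-1.0/libusb.h>` header path and the `-lusb-1.0` link flag are assumptions about your installation; it lists the attached devices and prints their vendor/product IDs.

```c
#include <stdio.h>
#include <libusb-1.0/libusb.h>   /* assumed install path; link with -lusb-1.0 */

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_device **list;
    ssize_t n, i;

    if (libusb_init(&ctx) != 0)
        return 1;

    /* Enumerate all USB devices currently attached to the system */
    n = libusb_get_device_list(ctx, &list);
    for (i = 0; i < n; i++) {
        struct libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(list[i], &desc) == 0)
            printf("bus %d dev %d: %04x:%04x\n",
                   libusb_get_bus_number(list[i]),
                   libusb_get_device_address(list[i]),
                   desc.idVendor, desc.idProduct);
    }

    libusb_free_device_list(list, 1);  /* 1 = also unref the devices */
    libusb_exit(ctx);
    return 0;
}
```

Opening a device for actual I/O would continue with libusb_open, libusb_claim_interface, and the transfer functions; the enumeration above is the common starting point on every supported platform.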
4. stpool, a cross-platform C/C++ dynamic thread pool and task pool library, ready for direct commercial use
1. Create a thread pool with at most 5 service threads, 2 of them reserved:
   hp = stpool_create(5, 2, 0, 10);
2. Add a task to the pool:
   struct sttask_t *ptsk = stpool_new_task("mytask", mytask_run, mytask_complete, mytask_arg);
   stpool_add_task(hp, ptsk);
   Or deliver a callback / execution routine to the pool directly:
   stpool_add_routine(hp, callback, callback_complete, callback_arg, NULL);
   Once added, a task will be executed as soon as possible. You can also call
   stpool_task_setschattr(ptsk, &attr);
   to set a task's scheduling priority; higher-priority tasks are scheduled first.
3. Wait for a task to complete. (stpool gives you control over individual tasks: once a task
   has been successfully added to the pool, you can wait on it with stpool_task_wait, which
   returns only after the user's task_complete callback has finished running.)
   stpool_task_wait(hp, ptsk, ms);
4. Suspend the pool. (While suspended, the pool executes no further tasks except those
   already being scheduled; tasks can still be added, but they simply queue up.)
   stpool_suspend(hp, 0);
5. Resume the pool:
   stpool_resume(hp);
6. Stop users from delivering further tasks (stpool_add_task will then return the
   POOL_ERR_THROTTLE error code):
   stpool_throttle_enable(hp, 1);
7. Wait until tasks may be delivered again:
   stpool_throttle_wait(hp, ms);
8. Control the number of service threads.
   .) Reset the maximum number of threads to 2 and the reserved threads to 0:
      stpool_adjust_abs(hp, 2, 0);
   .) Relative adjustment: raise the maximum number of service threads by 1 and lower the
      minimum by 1. (This does not mean stpool creates a thread immediately; the internal
      scheduler starts threads only when the task load demands it.)
      stpool_adjust(hp, 1, -1);
9. Query the pool's status.
   .) Number of service threads and task-execution statistics:
      struct stpool_stat_t stat;
      stpool_getstat(hp, &stat);
   .) Status of a single task:
      long stat = stpool_gettskstat(hp, &mytask);
   .) Walk the status of every task in the pool:
      stpool_mark_task(hp, mark_walk, arg);
10. Remove all pending tasks:
    stpool_remove_pending_task(hp, NULL);
11. Reference counting: the pool can be shared with other modules while its lifetime stays
    safe. From a third-party module:
    stpool_addref(hp);        /* make sure the pool object is not destroyed */
    stpool_adjust(hp, 2, 0);  /* add this module's demand (max service threads +2) */
    /* ... deliver this module's tasks ... */
    stpool_release(hp);       /* drop the reference */
12. Destroy the pool. (The pool object is freed automatically once its reference count
    reaches 0; after a successful stpool_create the user holds one reference.)
    stpool_release(hp);
/* COPYRIGHT (C) 2014 - 2020, piggy_xrh */
#include <iostream>
using namespace std;

#include "CTaskPool.h"

#ifdef _WIN
#ifdef _DEBUG
#ifdef _WIN64
#pragma comment(lib, "../../../lib/Debug/x86_64_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Debug/x86_64_win/libstpool.lib")
#pragma comment(lib, "../../../lib/Debug/x86_64_win/libstpoolc++.lib")
#else
#pragma comment(lib, "../../../lib/Debug/x86_32_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Debug/x86_32_win/libstpool.lib")
#pragma comment(lib, "../../../lib/Debug/x86_32_win/libstpoolc++.lib")
#endif
#else
#ifdef _WIN64
#pragma comment(lib, "../../../lib/Release/x86_64_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Release/x86_64_win/libstpool.lib")
#pragma comment(lib, "../../../lib/Release/x86_64_win/libstpoolc++.lib")
#else
#pragma comment(lib, "../../../lib/Release/x86_32_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Release/x86_32_win/libstpool.lib")
#pragma comment(lib, "../../../lib/Release/x86_32_win/libstpoolc++.lib")
#endif
#endif
#endif

/* (log library) depends (task pool library) depends (task pool library for c++)
 * libmsglog.lib <------------- libstpool.lib <-------------------- libstpoolc++.lib
 */
class myTask: public CTask
{
public:
	/* We can allocate a block manually for the proxy object,
	 * and we can retrieve its address by @getProxy() */
	myTask(): CTask(/* new char[getProxySize()] */ NULL, "mytask") {}
	~myTask()
	{
		/* NOTE: We are responsible for releasing the proxy object if
		 * the parameter @cproxy passed to CTask is NULL */
		if (isProxyCreatedBySystem())
			freeProxy(getProxy());
		else
			delete [] reinterpret_cast<char *>(getProxy());
	}
private:
	virtual int onTask()
	{
		cout << taskName() << ": onTask.\n";
		return 0;
	}
	virtual void onTaskComplete(long sm, int errCode)
	{
		if (CTask::sm_DONE & sm)
			cout << taskName() << " has been done with code:" << dec << errCode
			     << " stat:0x" << hex << stat() << " sm:0x" << sm << endl;
		else
			cerr << taskName() << " has not been done. reason:" << dec << errCode
			     << " stat:0x" << hex << stat() << " sm:0x" << sm << endl;

		static int slTimes = 0;
		/* We reschedule the task again.
		 * NOTE:
		 *   task->wait() will not return until the task exists in
		 *   neither the pending pool nor the scheduling queue. */
		if (++slTimes < 5)
			queue();
		/* The task will be marked with @sm_ONCE_AGAIN if the user calls
		 * @queue to reschedule it while it is being scheduled, and
		 * @sm_ONCE_AGAIN is removed by the pool once the task has been
		 * delivered into the pool again. */
		cout << dec << slTimes << " sm:0x" << hex << this->sm() << endl << endl;
	}
};

int main()
{
	/* Create a pool instance with 1 serving thread */
	CTaskPool *pool = CTaskPool::createInstance(1, 0, false);

	/* Test running the task */
	myTask *task = new myTask;

	/* Set the task's parent before calling @queue */
	task->setParent(pool);

	/* Deliver the task into the pool */
	task->queue();

	/* Wait for the task to be done */
	task->wait();
	cout << "\ntask has been done !" << endl;

	/* Free the task object */
	delete task;

	/* Shut down the pool */
	pool->release();

	cin.get();
	return 0;
}
/* COPYRIGHT (C) 2014 - 2020, piggy_xrh */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#include "stpool.h"

#ifdef _WIN
#ifdef _DEBUG
#ifdef _WIN64
#pragma comment(lib, "../../../lib/Debug/x86_64_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Debug/x86_64_win/libstpool.lib")
#else
#pragma comment(lib, "../../../lib/Debug/x86_32_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Debug/x86_32_win/libstpool.lib")
#endif
#else
#ifdef _WIN64
#pragma comment(lib, "../../../lib/Release/x86_64_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Release/x86_64_win/libstpool.lib")
#else
#pragma comment(lib, "../../../lib/Release/x86_32_win/libmsglog.lib")
#pragma comment(lib, "../../../lib/Release/x86_32_win/libstpool.lib")
#endif
#endif
#endif

/* (log library) depends (task pool library)
 * libmsglog.lib <------------- libstpool.lib
 */
static void do_work(int *val) {
	*val += 100;
	*val *= 0.371;
}

int task_run(struct sttask_t *ptsk) {
	size_t i, j, sed = 20;

	for (i = 0; i < sed; i++)
		for (j = 0; j < sed; j++)
			do_work((int *)ptsk->task_arg);
	/* Do not call @printf in the test since it would waste
	 * so much of our time on competing for the IO.
	 */
	return 0;
}

void task_complete(struct sttask_t *ptsk, long vmflags, int code) {
}

int main()
{
	time_t now;
	int i, c, times, j = 0;
	int sum, *arg;
	HPOOL hp;

	/* Create a task pool */
	hp = stpool_create(50,  /* max serving threads */
	                   0,   /* 0 serving threads reserved to wait for tasks */
	                   1,   /* suspend the pool */
	                   0);  /* default number of priority queues */
	printf("%s\n", stpool_status_print(hp, NULL, 0));

	/* Add tasks */
	times = 90000;
	arg = (int *)malloc(times * sizeof(int));
	for (i = 0; i < times; i++) {
		/* It may take a long time to load a large number of tasks
		 * if the program is linked with the debug library */
		if (i % 4000 == 0 || (i + 1) == times) {
			printf("\rLoading ... %.2f%% ", (float)i * 100 / times);
			fflush(stdout);
		}
		arg[i] = i;
		stpool_add_routine(hp, "sche", task_run, task_complete,
		                   (void *)&arg[i], NULL);
	}
	printf("\nAfter having executed @stpool_add_routine for %d times:\n"
	       "--------------------------------------------------------\n%s\n",
	       times, stpool_status_print(hp, NULL, 0));
	printf("Press any key to resume the pool.\n");
	getchar();

	/* Wake up the pool to schedule tasks */
	stpool_resume(hp);
	stpool_task_wait(hp, NULL, -1);

	/* Get the sum */
	for (i = 0, sum = 0; i < times; i++)
		sum += arg[i];
	free(arg);

	now = time(NULL);
	printf("--OK. finished. (sum: %d) (%s)\n%s\n",
	       sum, ctime(&now), stpool_status_print(hp, NULL, 0));
#if 0
	/* You can use the debug library to watch the status of the pool */
	while ('q' != getchar()) {
		for (i = 0; i < 40; i++)
			stpool_add_routine(hp, "debug", task_run, NULL, &sum, NULL);
	}
	/* Clear the stdio cache */
	while ((c = getchar()) && c != '\n' && c != EOF)
		;
#endif
	getchar();

	/* Release the pool */
	printf("Shut down the pool now.\n");
	stpool_release(hp);
	getchar();
	return 0;
}
Peak thread and task statistics (currently collected on win32/linux only):
win32 (xp)                 linux (ubuntu 10.04)
----------------------------------------------------
threads_peak: 9            threads_peak: 6
tasks_peak:   90000        tasks_peak:   90000
----------------------------------------------------
To complete 90000 tasks, stpool's thread count peaked at 9 on win32 and 6 on linux; it
schedules threads according to the actual task load, and all 90000 tasks finished within 1 s.
(Ubuntu runs as a VMware guest on the XP machine, configured with 2 cores / 2 threads.)
root@ubuntu_xrh:~/localhost/task/stpool# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Pentium(R) CPU G620 @ 2.60GHz
stepping : 7
cpu MHz : 2594.108
cache size : 3072 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt hypervisor arat
bogomips : 5188.21
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Pentium(R) CPU G620 @ 2.60GHz
stepping : 7
cpu MHz : 2594.108
cache size : 3072 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq ssse3 sse4_1 sse4_2 popcnt hypervisor arat
bogomips : 5188.21
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: