Linux monitoring tools

Tool       Description                                  Base install   Repository
vmstat     All-purpose performance tool                 yes            yes
mpstat     Provides per-CPU statistics                  no             yes
sar        All-purpose performance monitoring tool      no             yes
iostat     Provides disk statistics                     no             yes
netstat    Provides network statistics                  yes            yes
dstat      Monitoring statistics aggregator             no             in most distributions
iptraf     Traffic monitoring dashboard                 no             yes
netperf    Network bandwidth tool                       no             in some distributions
ethtool    Reports Ethernet interface configuration     yes            yes
iperf      Network bandwidth tool                       no             yes
tcptrace   Packet analysis tool                         no             yes
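
Most of the tools listed above that are not part of the base install can be pulled in from the distribution's package repositories. The commands below are only a sketch: package names vary by distribution (for example, iptraf is packaged as iptraf-ng on newer releases), and some tools such as netperf or tcptrace may need an extra repository like EPEL.

# Debian/Ubuntu
apt-get install sysstat dstat iptraf-ng ethtool iperf

# RHEL/CentOS/Fedora (netperf and tcptrace may require EPEL or a similar add-on repository)
yum install sysstat dstat iptraf ethtool iperf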


CPU

The kernel scheduler is responsible for scheduling two kinds of resources: threads (single or multi-threaded) and interrupts. The scheduler assigns different priorities to these resources. The following list is ordered from highest to lowest priority:

Interrupts - Devices tell the kernel that they have finished processing data. For example, a network card delivers a packet, or a disk completes an I/O request.

Kernel (System) Processes - All kernel processing runs at this priority level.

User Processes - This is "userland"; all application software runs in user space. It has the lowest priority in the kernel scheduling mechanism.
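
To get a feel for these categories on a running system, you can look at the kernel's own bookkeeping. The commands below are just a quick sketch; the exact columns of /proc/interrupts and the ps output vary between kernels and distributions.

# How many times each interrupt line has fired, per CPU
cat /proc/interrupts

# Scheduling class, priority and nice value of every process
# (ordinary userland work runs in class TS, i.e. SCHED_OTHER, by default)
ps -eo pid,class,pri,ni,comm | head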

Context Switches

Most modern processors can run one process (single-threaded) or thread at a time. Multi-threaded (hyper-threaded) processors can run more than one thread simultaneously. However, the Linux kernel treats each core of a multi-core chip as an independent processor; for example, a system with a dual-core processor is reported by the Linux kernel as two independent processors.

A standard Linux kernel can run anywhere from 50 to 50,000 process threads. With a single CPU, the kernel has to schedule and balance these threads. Each thread is given a quantum of time to spend on the processor. A thread either uses up its quantum or is preempted by something with a higher priority (such as a hardware interrupt), in which case the thread is placed back on the run queue while the higher-priority thread takes the processor. This switching from one thread to another is what we call a context switch.

Every time the kernel performs a context switch, resources are spent moving the thread's state out of the CPU registers and placing it on the queue. The more context switching that happens on a system, the more work the kernel has to do to manage the scheduling of processes on the processors.
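
System-wide context-switch counts appear later in the vmstat and "sar -w" examples. If you want to see which individual tasks are doing the switching, the pidstat tool from the sysstat package can break the numbers down per process; this is a minimal sketch using options from recent sysstat releases.

# Per-process context switches, one sample per second, 3 samples:
#   cswch/s    voluntary switches (the task blocked waiting for a resource)
#   nvcswch/s  involuntary switches (the task was preempted)
pidstat -w 1 3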

The Run Queue

Each CPU maintains a run queue of threads. Ideally, the scheduler should be constantly running and executing threads. A process thread is either sleeping (blocked and waiting on I/O) or runnable. If the CPU subsystem is under heavy load, the kernel scheduler cannot keep up with the demands of the system; as a result, runnable processes start to fill up the run queue. The larger the run queue grows, the longer threads have to wait before they are executed.
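
The run queue size shows up as the r column in vmstat and the runq-sz column in "sar -q" below. As a rough, point-in-time approximation you can also count thread states directly with ps, as in this sketch:

# Count threads that are runnable (state R) or blocked in uninterruptible sleep (state D)
ps -eLo stat= | awk '{ s[substr($1,1,1)]++ } END { print "runnable:", s["R"]+0, "blocked:", s["D"]+0 }'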

CPU Utilization

A popular term related to this is "load", which gives a detailed picture of the state of the run queue. The system load is the combination of the number of threads currently executing on the CPUs and the number of threads waiting in the run queue. If a dual-core system has 2 threads executing and 4 threads in the run queue, the load is 6. The load averages reported by top are the load over the last 1, 5, and 15 minutes.
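
You can read the same load averages without starting top; both uptime and /proc/loadavg report them directly.

# Load averages over the last 1, 5 and 15 minutes
uptime
cat /proc/loadavg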

CPU utilization is defined as the percentage of the CPU that is in use. CPU utilization is one of the most important metrics for evaluating a system. Most performance monitoring tools break CPU utilization down into the following categories:

User Time - The percentage of time the CPU spends executing processes in user space.

System Time - The percentage of time the CPU spends executing kernel threads and interrupts.

Wait IO - The percentage of time the CPU sits idle because ALL runnable process threads are blocked waiting for an I/O request to complete.

Idle - The percentage of time the CPU is completely idle.

CPU Performance Monitoring

Run Queues - Each processor should have no more than 1-3 threads in its run queue. For example, a dual-core system should have no more than 6 threads in its run queues.

CPU Utilization - If a CPU is fully utilized, a balanced breakdown of the utilization categories should be roughly:
65% - 70% User Time
30% - 35% System Time
0% - 5%   Idle Time

Context Switches - The number of context switches is directly related to CPU utilization. A large number of context switches is acceptable as long as CPU utilization stays within the balanced breakdown above.

Many tools on Linux can report these values; the first two to reach for are vmstat and top.
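
As a quick first check, top can be run once in batch mode; its summary lines show the load average and the same utilization categories (us, sy, wa, id) described above. The head is only there to trim off the per-process listing.

# One non-interactive snapshot; the header lines include the load average and the %Cpu breakdown
top -b -n 1 | head -n 5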

Using the vmstat Tool

The vmstat tool provides a low-overhead view of system performance. Because vmstat itself is such a low-overhead tool, you can keep it running in a console even on a very heavily loaded server where you need to monitor the health of the system, and still read its output. The tool runs in two modes: average mode and sample mode. Sample mode measures statistics over a specified interval, which is useful for understanding performance under a sustained load. The following is an example of
vmstat running at 1-second intervals:

# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
0  0 104300  16800  95328  72200    0    0     5    26    7    14  4  1 95  0
0  0 104300  16800  95328  72200    0    0     0    24 1021    64  1  1 98  0
0  0 104300  16800  95328  72200    0    0     0     0 1009    59  1  1 98  0

r       The number of threads in the run queue. These are threads that are runnable, but the CPU is not available to execute them.
b       The number of processes blocked and waiting on I/O requests to finish.
in      The number of interrupts being processed.
cs      The number of context switches currently happening on the system.
us      The percentage of user CPU utilization.
sy      The percentage of kernel and interrupt utilization.
wa      The percentage of idle processor time due to ALL runnable threads being blocked waiting on I/O.
id      The percentage of time that the CPU is completely idle.
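
For the average mode mentioned above, run vmstat with no interval at all: it prints a single line of averages since the last reboot. Adding a count after the interval limits the number of samples.

# Average mode: one line of averages since boot
vmstat

# Sample mode: one report every 2 seconds, 5 reports in total
vmstat 2 5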


The sysstat package

    http://sebastien.godard.pagesperso-orange.fr/download.html

The sysstat package includes the following utilities.

  • sar collects and displays ALL system activity statistics.

  • sadc stands for “system activity data collector”. This is the sar backend tool that does the data collection.

  • sa1 stores system activity data in a binary file. sa1 depends on sadc for this purpose. sa1 runs from cron (see the setup sketch after this list).

  • sa2 creates daily summary of the collected statistics. sa2 runs from cron.

  • sadf can generate sar report in CSV, XML, and various other formats. Use this to integrate sar data with other tools.

  • iostat generates CPU and I/O statistics.

  • mpstat displays CPU statistics.

  • pidstat reports statistics for individual processes, based on the process ID (PID).

  • nfsiostat displays NFS I/O statistics.

  • cifsiostat generates CIFS statistics.
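
Before the historical "sar -f /var/log/sa/saXX" examples below will work, sysstat has to be installed and the sa1/sa2 collectors have to be scheduled. The snippet below is only a sketch of a typical setup; the install command, the path to sa1/sa2 (e.g. /usr/lib/sa or /usr/lib64/sa), and whether cron or a systemd timer is used all vary by distribution, so check your own system rather than copying it verbatim.

# Install the package (Debian/Ubuntu: apt-get install sysstat; RHEL/CentOS: yum install sysstat)
yum install sysstat

# Typical cron entries shipped by the package:
#   */10 * * * * root /usr/lib64/sa/sa1 1 1    # collect one sample every 10 minutes
#   53 23 * * *  root /usr/lib64/sa/sa2 -A     # write the daily summary just before midnight
cat /etc/cron.d/sysstat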

1. CPU Usage of ALL CPUs (sar -u)

This gives the combined real-time CPU usage of all CPUs. The "1 3" arguments report once per second, three times in total. Most likely you'll focus on the last field, "%idle", to see how busy the CPU is.

$ sar -u 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:27:32 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:27:33 PM       all      0.00      0.00      0.00      0.00      0.00    100.00
01:27:34 PM       all      0.25      0.00      0.25      0.00      0.00     99.50
01:27:35 PM       all      0.75      0.00      0.25      0.00      0.00     99.00
Average:          all      0.33      0.00      0.17      0.00      0.00     99.50

The following are a few variations:

  • sar -u Displays CPU usage for the current day, collected up to that point.

  • sar -u 1 3 Displays real-time CPU usage every second, 3 times.

  • sar -u ALL Same as "sar -u" but displays additional fields.

  • sar -u ALL 1 3 Same as "sar -u 1 3" but displays additional fields.

  • sar -u -f /var/log/sa/sa10 Displays CPU usage for the 10th day of the month, from the sa10 file.

2. CPU Usage of Individual CPU or Core (sar -P)

If you have 4 cores on the machine and would like to see what the individual cores are doing, do the following.

"-P ALL" indicates that it should display statistics for all the individual cores.

In the following example, the values 0, 1, 2, and 3 under the "CPU" column indicate the corresponding CPU core numbers.

$ sar -P ALL 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:34:12 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:34:13 PM       all     11.69      0.00      4.71      0.69      0.00     82.90
01:34:13 PM         0     35.00      0.00      6.00      0.00      0.00     59.00
01:34:13 PM         1     22.00      0.00      5.00      0.00      0.00     73.00
01:34:13 PM         2      3.00      0.00      1.00      0.00      0.00     96.00
01:34:13 PM         3      0.00      0.00      0.00      0.00      0.00    100.00

"-P 1" indicates that it should display statistics only for the second core. (Note that core numbering starts from 0.)

$ sar -P 1 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:36:25 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
01:36:26 PM         1      8.08      0.00      2.02      1.01      0.00     88.89

The following are a few variations:

  • sar -P ALL Displays CPU usage broken down by core for the current day.

  • sar -P ALL 1 3 Displays real-time CPU usage for all cores, every second, 3 times (broken down by core).

  • sar -P 1 Displays CPU usage for core number 1 for the current day.

  • sar -P 1 1 3 Displays real-time CPU usage for core number 1, every second, 3 times.

  • sar -P ALL -f /var/log/sa/sa10 Displays CPU usage broken down by core for the 10th day of the month, from the sa10 file.

3. Memory Free and Used (sar -r)

This reports the memory statistics. The "1 3" arguments report once per second, three times in total. Most likely you'll focus on "kbmemfree" and "kbmemused" for free and used memory.

$ sar -r 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

07:28:06 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact
07:28:07 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
07:28:08 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
07:28:09 AM   6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204
Average:      6209248   2097432     25.25    189024   1796544    141372      0.85   1921060     88204

The following are a few variations:

  • sar -r

  • sar -r 1 3

  • sar -r -f /var/log/sa/sa10

4. Swap Space Used (sar -S)

This reports the swap statistics. The "1 3" arguments report once per second, three times in total. If "kbswpused" and "%swpused" are at 0, your system is not swapping.

$ sar -S 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

07:31:06 AM kbswpfree kbswpused  %swpused  kbswpcad   %swpcad
07:31:07 AM   8385920         0      0.00         0      0.00
07:31:08 AM   8385920         0      0.00         0      0.00
07:31:09 AM   8385920         0      0.00         0      0.00
Average:      8385920         0      0.00         0      0.00

The following are a few variations:

  • sar -S

  • sar -S 1 3

  • sar -S -f /var/log/sa/sa10

Notes:

  • Use "sar -R" to identify the number of memory pages freed, used, and cached per second by the system.

  • Use "sar -H" to identify the hugepages (in KB) that are used and available.

  • Use "sar -B" to generate paging statistics, i.e. the number of KB paged in from (and out to) disk per second.

  • Use "sar -W" to generate page swap statistics, i.e. pages swapped in (and out) per second (see the combined example below).
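
All four of the switches above accept the same interval/count arguments as the other reports; a quick way to sample them live is sketched below (output omitted, and note that some of these switches have been renamed or merged in very recent sysstat releases).

sar -R 1 3    # memory pages freed, used and cached per second
sar -H 1 3    # hugepage usage
sar -B 1 3    # paging: KB paged in from / out to disk per second
sar -W 1 3    # swapping: pages swapped in and out per second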

5. Overall I/O Activities (sar -b)

This reports I/O statistics. The "1 3" arguments report once per second, three times in total.

The following fields are displayed in the example below:

  • tps - Transactions per second (this includes both reads and writes)

  • rtps - Read transactions per second

  • wtps - Write transactions per second

  • bread/s - Blocks read per second

  • bwrtn/s - Blocks written per second

$ sar -b 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:56:28 PM       tps      rtps      wtps   bread/s   bwrtn/s
01:56:29 PM    346.00    264.00     82.00   2208.00    768.00
01:56:30 PM    100.00     36.00     64.00    304.00    816.00
01:56:31 PM    282.83     32.32    250.51    258.59   2537.37
Average:       242.81    111.04    131.77    925.75   1369.90

The following are a few variations:

  • sar -b

  • sar -b 1 3

  • sar -b -f /var/log/sa/sa10

Note: Use "sar -v" to display the number of inode handlers, file handles, and pseudo-terminals used by the system.

6. Individual Block Device I/O Activities (sar -d)

To identify the activity of individual block devices (i.e. a specific mount point, LUN, or partition), use "sar -d".

$ sar -d 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:59:45 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:59:46 PM    dev8-0      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM    dev8-1      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM dev120-64      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM dev120-65      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM  dev120-0      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM  dev120-1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM dev120-96      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91
01:59:46 PM dev120-97      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91

In the above example, "DEV" indicates the specific block device.

For example, "dev53-1" means a block device with 53 as the major number and 1 as the minor number.

The DEV column can show the actual device name (for example sda, sda1, sdb1, etc.) if you use the -p (pretty print) option, as shown below.

$ sar -p -d 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:59:45 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:59:46 PM       sda      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM      sda1      1.01      0.00      0.00      0.00      0.00      4.00      1.00      0.10
01:59:46 PM      sdb1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM      sdc1      3.03     64.65      0.00     21.33      0.03      9.33      5.33      1.62
01:59:46 PM      sde1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM      sdf1      8.08      0.00    105.05     13.00      0.00      0.38      0.38      0.30
01:59:46 PM      sda2      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91
01:59:46 PM      sdb2      1.01      8.08      0.00      8.00      0.01      9.00      9.00      0.91

The following are a few variations:

  • sar -d

  • sar -d 1 3

  • sar -d -f /var/log/sa/sa10

  • sar -p -d

7. Display context switches per second (sar -w)

This reports the total number of processes created per second and the total number of context switches per second. The "1 3" arguments report once per second, three times in total.

$ sar -w 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

08:32:24 AM    proc/s   cswch/s
08:32:25 AM      3.00     53.00
08:32:26 AM      4.00     61.39
08:32:27 AM      2.00     57.00

The following are a few variations:

  • sar -w

  • sar -w 1 3

  • sar -w -f /var/log/sa/sa10

8. Report run queue and load average (sar -q)

This reports the run queue size and the load average over the last 1 minute, 5 minutes, and 15 minutes. The "1 3" arguments report once per second, three times in total.

$ sar -q 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

06:28:53 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
06:28:54 AM         0       230      2.00      3.00      5.00         0
06:28:55 AM         2       210      2.01      3.15      5.15         0
06:28:56 AM         2       230      2.12      3.12      5.12         0
Average:            3       230      3.12      3.12      5.12         0

Note: The "blocked" column displays the number of tasks that are currently blocked, waiting for an I/O operation to complete.

The following are a few variations:

  • sar -q

  • sar -q 1 3

  • sar -q -f /var/log/sa/sa10

9. Report network statistics (sar -n)

This reports various network statistics, for example the number of packets received and transmitted through the network card, packet failure statistics, and so on. The "1 3" arguments report once per second, three times in total.

sar -n KEYWORD

KEYWORD can be one of the following:

  • DEV - Displays vital statistics for network devices (eth0, eth1, etc.)

  • EDEV - Displays network device failure statistics

  • NFS - Displays NFS client activity

  • NFSD - Displays NFS server activity

  • SOCK - Displays sockets in use for IPv4

  • IP - Displays IPv4 network traffic

  • EIP - Displays IPv4 network errors

  • ICMP - Displays ICMPv4 network traffic

  • EICMP - Displays ICMPv4 network errors

  • TCP - Displays TCPv4 network traffic

  • ETCP - Displays TCPv4 network errors

  • UDP - Displays UDPv4 network traffic

  • SOCK6, IP6, EIP6, ICMP6, UDP6 - The same statistics for IPv6

  • ALL - Displays all of the above information. The output will be very long.

$ sar -n DEV 1 1
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:11:13 PM     IFACE   rxpck/s   txpck/s   rxbyt/s   txbyt/s   rxcmp/s   txcmp/s  rxmcst/s
01:11:14 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:11:14 PM      eth0    342.57    342.57  93923.76 141773.27      0.00      0.00      0.00
01:11:14 PM      eth1      0.00      0.00      0.00      0.00      0.00      0.00      0.00
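
The other keywords follow the same pattern; for example, interface error counters and socket usage can be sampled as sketched below (output omitted). Depending on your sysstat version, several keywords may also be combined in one run by separating them with commas.

sar -n EDEV 1 3    # per-interface error statistics
sar -n SOCK 1 3    # IPv4 sockets in use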

10. Report Sar Data Using Start Time (sar -s)

When you view historic sar data from a /var/log/sa/saXX file using the "sar -f" option, it displays all the sar data for that specific day, starting from 12:00 a.m.

Using the "-s hh:mi:ss" option, you can specify the start time. For example, if you specify "sar -s 10:00:00", it will display the sar data starting from 10 a.m. (instead of starting from midnight), as shown below.

You can combine the -s option with other sar options.

For example, to report the load average from the sa23 file starting at 10 a.m., combine the -q and -s options as shown below.

$ sar -q -f /var/log/sa/sa23 -s 10:00:01
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

10:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
10:10:01 AM         0       127      2.00      3.00      5.00         0
10:20:01 AM         0       127      2.00      3.00      5.00         0
...
11:20:01 AM         0       127      5.00      3.00      3.00         0
12:00:01 PM         0       127      4.00      2.00      1.00         0

Most sysstat versions also accept a "-e hh:mm:ss" option to set an end time; otherwise you can get creative and use the head command as shown below.

For example, starting from 10 a.m., if you want to see 7 entries, pipe the above output to "head -n 10".

$ sar -q -f /var/log/sa/sa23 -s 10:00:01 | head -n 10
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

10:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
10:10:01 AM         0       127      2.00      3.00      5.00         0
10:20:01 AM         0       127      2.00      3.00      5.00         0
10:30:01 AM         0       127      3.00      5.00      2.00         0
10:40:01 AM         0       127      4.00      2.00      1.00         2
10:50:01 AM         0       127      3.00      5.00      5.00         0
11:00:01 AM         0       127      2.00      1.00      6.00         0
11:10:01 AM         0       127      1.00      3.00      7.00         2

There is a lot more to cover in Linux performance monitoring and tuning; we are only getting started. More articles will follow in this performance series.

