Linux PCIe SSD NVMe Performance Tuning

Straight to the practical stuff: how to tune a PBlaze IV PCIe NVMe SSD. Go!

1. Interrupt Binding

The NVMe driver in Red Hat 6.5 binds all interrupt vectors to core 0 by default. With multiple SSDs installed, core 0 becomes a bottleneck.

(1) Turn off the IRQ balancer:

[root@memblaze-lyk2 ~]# service irqbalance stop

(2) Check the nvme device IRQ vector numbers:

[root@memblaze-lyk2 ~]# cat /proc/interrupts  | grep nvme

160:          0    1265046          0          0          0          0          0          0          0          0          0          0          0          0          0          0  IR-PCI-MSI-edge      nvme admin, nvme

 161:          0   10750058          0          0          0          0          0          0          0          0          0          0          0          0          0          0  IR-PCI-MSI-edge      nvme

 162:          0     917319          0          0          0          0          0          0          0          0          0          0          0          0          0          0  IR-PCI-MSI-edge      nvme

…

(3) Assign and check the affinity of a single interrupt vector:

[root@memblaze-lyk2 ~]# echo 0000aaaa > /proc/irq/161/smp_affinity

[root@memblaze-lyk2 ~]# cat /proc/irq/161/smp_affinity

0000,0000aaaa

[root@memblaze-lyk2 ~]# cat /proc/irq/161/smp_affinity_list

1,3,5,7,9,11,13,15
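The mask and the core list above are two views of the same thing: each bit of the hex mask selects one core. A minimal sketch, using only shell arithmetic (no hardware access), shows why writing 0000aaaa yields cores 1,3,…,15:

```shell
# 0xaaaa = binary 1010101010101010: every odd bit of the low 16 is set,
# so the kernel reports the odd-numbered cores in smp_affinity_list.
mask=0xaaaa
cpus=""
for cpu in $(seq 0 15); do
    if [ $(( (mask >> cpu) & 1 )) -eq 1 ]; then
        cpus="${cpus}${cpu},"
    fi
done
echo "${cpus%,}"    # -> 1,3,5,7,9,11,13,15
```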

(4) You can write a script to spread all the interrupt vectors across all cores.
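A minimal sketch of such a script, assuming the "nvme" naming seen in /proc/interrupts above; it needs root to actually write the affinity, but the parsing and round-robin arithmetic run without it:

```shell
#!/bin/bash
# Spread every nvme interrupt vector round-robin across all online cores.
irq_of() {   # pull the IRQ number out of a /proc/interrupts line
    echo "$1" | awk -F: '{gsub(/ /, "", $1); print $1}'
}
ncpus=$(nproc)
i=0
grep nvme /proc/interrupts | while read -r line; do
    irq=$(irq_of "$line")
    cpu=$(( i % ncpus ))
    echo "IRQ $irq -> CPU $cpu"
    # only writable as root; skip silently otherwise
    [ -w "/proc/irq/$irq/smp_affinity_list" ] &&
        echo "$cpu" > "/proc/irq/$irq/smp_affinity_list"
    i=$(( i + 1 ))
done
```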


2. Multiple Devices and NUMA I/O

PCIe devices now need to take the NUMA I/O architecture into account. We won't go into detail here; the basic idea is to keep all the devices your application uses on the same NUMA node.

(1) Plug multiple PBlaze SSDs into one CPU's PCIe slots, or spread them evenly across each CPU's PCIe slots, depending on your requirements.

(2) Mind which NUMA node the application runs on. If the SSD is plugged into node 0, running the application on node 0 gives better performance; you can use numactl to control this.
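A sketch of that binding: read the node the SSD sits on from sysfs and pin the workload's CPUs and memory there with numactl. Here nvme0n1 and ./myapp are placeholders for your device and application, and the guards keep the sketch runnable on a box without either:

```shell
# Fall back to node 0 if the sysfs entry is absent on this machine.
node=$(cat /sys/block/nvme0n1/device/numa_node 2>/dev/null || echo 0)
echo "SSD is on NUMA node $node"
# Pin both CPU scheduling and memory allocation to that node.
[ -x ./myapp ] && numactl --cpunodebind="$node" --membind="$node" ./myapp
```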

(3) Keep network and storage I/O devices on the same node.

a) Check that your network card's and SSD's PCIe slots are on the same node.

[root@memblaze-lyk2 numactl]# cat /sys/block/nvme0n1/device/numa_node

1

[root@memblaze-lyk2 numactl]# lspci  | grep Mell

42:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

[root@memblaze-lyk2 numactl]# lspci  -t

[root@memblaze-lyk2 numactl]# cat /sys/devices/pci0000\:40/0000\:40\:02.0/numa_node

1
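Instead of chasing sysfs paths one device at a time, a small sketch can print the NUMA node of every NVMe namespace and every network interface so you can eyeball that storage and network line up (output depends on the hardware present):

```shell
# Print "device -> node" for NVMe namespaces and network interfaces.
node_of() { cat "$1" 2>/dev/null || echo "unknown"; }
for dev in /sys/block/nvme*n1; do
    [ -d "$dev" ] && echo "$(basename "$dev") -> node $(node_of "$dev/device/numa_node")"
done
for net in /sys/class/net/*; do
    [ -e "$net/device/numa_node" ] && echo "$(basename "$net") -> node $(node_of "$net/device/numa_node")"
done
```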

b) Check that both devices' interrupts are bound to cores on node 1.

c) Use numactl to bind the application to node 1; you can use "ps -eo pid,args,psr" to check which core each process is running on.
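Putting c) together as a sketch: launch the app pinned to node 1, then use the psr column of ps to confirm which core it actually landed on. ./myapp is a placeholder, and the guard keeps the sketch runnable on a machine without it:

```shell
# Bind ./myapp to node 1, then show its current core (psr column).
if command -v numactl >/dev/null && [ -x ./myapp ]; then
    numactl --cpunodebind=1 --membind=1 ./myapp &
    pid=$!
    sleep 1
    ps -eo pid,args,psr | awk -v p="$pid" '$1 == p'
fi
```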

