veth-pair: configuring NIC multi-queue

Reference: https://www.spinics.net/lists/netdev/msg753445.html


> real_num_tx_queue > 1 will make the xmit path slower, so we likely
> want to keep that to 1 by default - unless the userspace explicitly set
> numtxqueues via netlink.

Right, that's fine by me :)
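
For context, userspace can already request more queues explicitly when the pair is created; the ip-link options below map to the netlink attributes mentioned above. A minimal sketch, where the v1/pv1 names and the queue count of 8 are arbitrary example choices:

    # create a veth pair with 8 TX/RX queues requested on each end
    ip link add name v1 numtxqueues 8 numrxqueues 8 \
        type veth peer name pv1 numtxqueues 8 numrxqueues 8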

> Finally, a default large num_tx_queue slows down device creation:
>
> cat << ENDL > run.sh
> #!/bin/sh
> MAX=$1
> for I in `seq 1 $MAX`; do
>   ip link add name v$I type veth peer name pv$I
> done
> for I in `seq 1 $MAX`; do
>   ip link del dev v$I
> done
> ENDL
> chmod a+x run.sh
>
> # with num_tx_queue == 1
> time ./run.sh 100 
> real  0m2.276s
> user  0m0.107s
> sys   0m0.162s
>
> # with num_tx_queue == 128
> time ./run.sh 100
> real  0m4.199s
> user  0m0.091s
> sys   0m1.419s
>
> # with num_tx_queue == 4096
> time ./run.sh 100 
> real  0m24.519s
> user  0m0.089s
> sys   0m21.711s

So ~42 ms to create a device if there are 128 CPUs? And ~245 ms when
there are 4k CPUs? Doesn't seem too onerous to me...
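
To see what a given setting actually produces, one way is to create a single pair and list the per-queue directories in sysfs; as far as I know, sysfs only registers the real (active) queues, so this reflects real_num_{r,t}x_queue rather than the allocated maximum. A sketch, reusing the v1/pv1 names from the script above:

    # create one pair with an explicit queue count and inspect the registered queues
    ip link add name v1 numtxqueues 8 numrxqueues 8 type veth peer name pv1
    ls /sys/class/net/v1/queues/
    # expect something along the lines of: rx-0 ... rx-7  tx-0 ... tx-7
    ip link del dev v1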

> Still, if there is agreement I can switch to num_possible_cpus default,
> plus some trickery to keep real_num_{r,t}x_queue unchanged.
>
> WDYT?

SGTM :)

-Toke
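
With a num_possible_cpus-sized allocation plus unchanged real_num_{r,t}x_queue, the split between the allocated maximum and the currently active queues should be visible and adjustable via ethtool, assuming a kernel where veth implements ethtool channel (get/set_channels) support. A rough sketch against the v1 device from the example above:

    # maximum (allocated at creation) vs. current (real_num_{r,t}x_queue)
    ethtool -l v1
    # grow the active queue count up to the allocated maximum, e.g. 4 RX / 4 TX
    ethtool -L v1 rx 4 tx 4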

