Getting Started with ConnectX-5 100Gb/s Adapters for Linux

Dec 5, 2018

This post provides basic steps on how to configure and set up basic parameters for the Mellanox ConnectX-5 100Gb/s adapter.

This post is intended for beginners. The procedure is very similar to the one for the ConnectX-4 adapter; in fact, both adapters use the same mlx5 driver.

 

Note: ConnectX-5 adapters can be used only with MLNX_OFED rel. 4.0 or later installed.

 

 

  • References
  • Setup
  • Prerequisites
  • Configuration
  • Troubleshooting

 

References

  • MLNX_OFED User Manual
  • Getting started with ConnectX-4 100Gb/s Adapter for Linux

 

Setup

The basic setup consists of:

  • Two servers, each equipped with a PCIe Gen3 x16 slot
  • Two Mellanox ConnectX-5 adapter cards
  • One 100Gb/s cable

 

In this specific setup, CentOS 7.2 was installed on the servers.

 

Prerequisites

If you plan to run performance tests, we recommend tuning the BIOS for high performance.

Refer to the Mellanox Tuning Guide and see this example: BIOS Performance Tuning Example.
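
In addition to the BIOS settings, a common OS-level tuning step is to pin the CPU frequency governor to performance. A minimal example, assuming the cpupower utility (part of the kernel-tools package on CentOS) is installed:

# cpupower frequency-set -g performance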

 

Configuration

1. Install the latest MLNX_OFED (rel. 4.0 or later).
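
For reference, a typical installation sequence looks like the following; the release and tarball name here are illustrative and should match the package downloaded for your distribution:

# tar xzf MLNX_OFED_LINUX-4.0-2.0.0.1-rhel7.2-x86_64.tgz
# cd MLNX_OFED_LINUX-4.0-2.0.0.1-rhel7.2-x86_64
# ./mlnxofedinstall
# /etc/init.d/openibd restart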

 

2. Check that the adapters are "recognized" by running the lspci command:

# lspci | grep Mellanox

82:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

82:00.1 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

 

Note: On ConnectX-5, each port is exposed as a separate PCI function (82:00.0 and 82:00.1 in the output above).
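
Optionally, verify that each adapter negotiated a PCIe Gen3 x16 link, since a narrower or slower link will cap throughput well below 100Gb/s. A quick check, using the PCI address from the lspci output above (the exact LnkSta wording may vary by lspci version):

# lspci -s 82:00.0 -vvv | grep LnkSta

LnkSta: Speed 8GT/s, Width x16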

 

3. Change the link protocol to Ethernet using the MFT mlxconfig tool.

Note: The default link protocol for ConnectX-5 is InfiniBand.

 

a. Start MFT.

# mst start

Starting MST (Mellanox Software Tools) driver set

Loading MST PCI module - Success

Loading MST PCI configuration module - Success

Create devices

Unloading MST PCI module (unused) - Success

 

b. Extract the vendor_part_id parameter. Note: ConnectX-5's ID is 4119, which also appears in the MST device name used below (/dev/mst/mt4119_pciconf0).

# ibv_devinfo | grep vendor_part_id

vendor_part_id: 4119

vendor_part_id: 4119

 

c. Query the host for the ConnectX-5 adapter's current configuration:

# mlxconfig -d /dev/mst/mt4119_pciconf0 q

 

Device #1:

----------

 

Device type: ConnectX5

PCI device: /dev/mst/mt4119_pciconf0

 

 

Configurations:          Current

...

LINK_TYPE_P1             1

LINK_TYPE_P2             1

...

 

Note that LINK_TYPE_P1 and LINK_TYPE_P2 are both set to 1 (InfiniBand) by default.

 

d. Change the port type to Ethernet (LINK_TYPE = 2):

# mlxconfig -d /dev/mst/mt4119_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

 

Device #1:

----------

 

Device type: ConnectX5

PCI device: /dev/mst/mt4119_pciconf0

 

Configurations:          Current    New

LINK_TYPE_P1             1          2

LINK_TYPE_P2             1          2

 

Apply new Configuration? (y/n) [n] : y

Applying... Done!

-I- Please reboot machine to load new configurations.

 

e. Reboot the server.
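
As an alternative to a full reboot, newer MFT releases provide the mlxfwreset tool, which can reset the adapter firmware and apply the new configuration in place. This is a sketch and assumes your MFT version and firmware support it; otherwise, reboot as described above:

# mlxfwreset -d /dev/mst/mt4119_pciconf0 reset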

 

4. Configure IPs and MTUs on both servers.

 

For Server S5:

# ifconfig ens801f0 15.15.15.5/24 up
# ifconfig ens801f0 mtu 9000

 

For Server S6:

# ifconfig ens801f0 15.15.15.6/24 up
# ifconfig ens801f0 mtu 9000
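
Note that ifconfig settings do not persist across reboots. To make the addresses and MTU persistent on CentOS 7, you can create an ifcfg file for the interface; a minimal sketch for server S5 (the interface name and addresses are the ones used above):

# cat /etc/sysconfig/network-scripts/ifcfg-ens801f0

DEVICE=ens801f0
BOOTPROTO=static
IPADDR=15.15.15.5
PREFIX=24
MTU=9000
ONBOOT=yes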

 

5. After the reboot, check that the port type was changed to Ethernet on each port:

# ibdev2netdev

mlx5_0 port 1 ==> ens801f0 (Up)

mlx5_1 port 1 ==> ens801f1 (Up)
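
ibdev2netdev confirms the device-to-interface mapping; to verify the link layer itself, you can also check ibv_devinfo or the negotiated speed with ethtool (expected output shown for reference):

# ibv_devinfo | grep link_layer

link_layer: Ethernet

link_layer: Ethernet

# ethtool ens801f0 | grep Speed

Speed: 100000Mb/s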

 

6. Make sure to disable the firewall, iptables, SELinux, and any other security services that might block the traffic.

# service firewalld stop

# systemctl disable firewalld

# service iptables stop

 

Disable SELinux in the configuration file located at /etc/selinux/config.
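
For example, set the following in /etc/selinux/config (takes effect on the next reboot), and optionally switch to permissive mode immediately with setenforce:

SELINUX=disabled

# setenforce 0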

 

7. Run a basic iperf test.

 

The following output was generated using the iperf automation script described in HowTo Install iperf and Test Mellanox Adapters Performance.

 

Run the iperf server process on one host:

# iperf -s -P8

 

Run the iperf client process on the other host:

# iperf -c 15.15.15.6 -P8

------------------------------------------------------------

Client connecting to 15.15.15.6, TCP port 5001

TCP window size: 325 KByte (default)

------------------------------------------------------------

[ 10] local 15.15.15.5 port 57522 connected with 15.15.15.6 port 5001

[ 4] local 15.15.15.5 port 57508 connected with 15.15.15.6 port 5001

[ 3] local 15.15.15.5 port 57510 connected with 15.15.15.6 port 5001

[ 6] local 15.15.15.5 port 57512 connected with 15.15.15.6 port 5001

[ 7] local 15.15.15.5 port 57514 connected with 15.15.15.6 port 5001

[ 5] local 15.15.15.5 port 57516 connected with 15.15.15.6 port 5001

[ 8] local 15.15.15.5 port 57518 connected with 15.15.15.6 port 5001

[ 9] local 15.15.15.5 port 57520 connected with 15.15.15.6 port 5001

[ ID] Interval Transfer Bandwidth

[ 10] 0.0-10.0 sec 13.6 GBytes 11.7 Gbits/sec

[ 3] 0.0-10.0 sec 13.9 GBytes 12.0 Gbits/sec

[ 6] 0.0-10.0 sec 18.6 GBytes 16.0 Gbits/sec

[ 7] 0.0-10.0 sec 10.9 GBytes 9.38 Gbits/sec

[ 5] 0.0-10.0 sec 14.7 GBytes 12.6 Gbits/sec

[ 8] 0.0-10.0 sec 16.0 GBytes 13.7 Gbits/sec

[ 9] 0.0-10.0 sec 17.2 GBytes 14.8 Gbits/sec

[ 4] 0.0-10.0 sec 9.92 GBytes 8.52 Gbits/sec

[SUM] 0.0-10.0 sec 115 GBytes 98.6 Gbits/sec
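
If the aggregate bandwidth falls well short of line rate, NUMA locality is a common cause: check which NUMA node the adapter is attached to and bind the iperf processes to that node. A sketch, assuming numactl is installed and the adapter reports node 1:

# cat /sys/class/net/ens801f0/device/numa_node

1

# numactl --cpunodebind=1 --membind=1 iperf -s -P8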

 

Troubleshooting

1. If MLNX_OFED rel. 4.0 or later is not used, the card will be identified as a ConnectX-4 adapter by default.

# ofed_info -s

MLNX_OFED_LINUX-3.4-2.0.0.0:

 

# lspci | grep Mellanox

81:00.0 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-4]

81:00.1 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-4]

 

 

To correct this, install MLNX_OFED rel. 4.0 or later.

# ofed_info -s

MLNX_OFED_LINUX-4.0-0.1.5.0:

 

# lspci | grep Mel

81:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

81:00.1 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]
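
You can also confirm which driver and firmware the interface is using; with MLNX_OFED 4.0 or later installed, the driver reported should be mlx5_core (the version strings below are illustrative):

# ethtool -i ens801f0

driver: mlx5_core
version: 4.0-0.1.5
firmware-version: 16.18.1000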

 

2. Make sure that you run the iperf process from the root "/" folder.
