Since no physical RDMA NIC is available, two VMs are used to build a SoftRoCE environment for compiling and running RDMA programs (see reference: SoftRoCE Environment Setup).
- Kernel / distro version: Ubuntu 20.04 LTS
The two VMs are csproj_rdma (used as the server) and csproj_rdma_cli (used as the client).
Both are configured on the same NAT network (under the same subnet):
ibverbs and RDMA CM libraries
sudo apt install libibverbs-dev librdmacm-dev
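These dev packages provide the headers and shared libraries that the RDMA example programs link against. A rough sketch of the compile command for one of the tutorial samples (the source file name is only a placeholder):
# rdma_client.c stands for any of the tutorial sample sources
gcc -o rdma_client rdma_client.c -lrdmacm -libverbs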
rdma-core
# Download the library and install prerequisites
git clone https://github.com/linux-rdma/rdma-core.git
sudo apt install build-essential cmake gcc libudev-dev libnl-3-dev libnl-route-3-dev
sudo apt install ninja-build pkg-config valgrind python3-dev cython3 python3-docutils pandoc
# Compile the library
cd rdma-core
bash build.sh
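A quick way to check that the build worked, assuming the default build.sh layout that places the userspace tools under build/bin:
# List the RDMA devices visible to libibverbs (empty until an rxe device is created below)
./build/bin/ibv_devices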
Command for creating the SoftRoCE (rxe) device:
rdma link add <RDMA_NIC_NAME> type <TYPE> netdev <DEVICE>
- TYPE: rxe (for RoCE)
- DEVICE: name of the physical NIC
Actual operation on the VMs:
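A sketch of the commands run on each VM, assuming the NAT interface is named enp0s3 (substitute the name shown by ip link) and naming the SoftRoCE device rxe0:
# Load the SoftRoCE kernel module and create an rxe device on the NAT interface
sudo modprobe rdma_rxe
sudo rdma link add rxe0 type rxe netdev enp0s3
# Confirm that the device shows up
rdma link show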
TOPO:
B is a switch, the others are hosts; C is the receiver, A and A' are the senders.
A--B--C
  / \
 A'  sink
Scenario 1:
- A-B-C
- A'-B-C
two TCP flows
Scenario 2:
- A-B-C
- A'-B-sink (link B-sink is unlimited)
two TCP flows
Scenario 3:
- (TCP) A-B-C
- (UDP) A'-B-C
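A sketch of how the three scenarios can be driven with iperf (host placement as in the topology above; <C_IP> and <SINK_IP> are placeholders for the receiver and sink addresses, and the 30-second duration and 10 Mbit/s UDP rate are arbitrary choices):
# Receivers
iperf -s                          # on C (add a separate "iperf -s -u" for the UDP flow in scenario 3)
iperf -s                          # on sink, only needed for scenario 2
# Senders
iperf -c <C_IP> -t 30 -i 1        # on A, all scenarios (TCP)
iperf -c <C_IP> -t 30 -i 1        # on A', scenario 1 (TCP towards C)
iperf -c <SINK_IP> -t 30 -i 1     # on A', scenario 2 (TCP towards sink)
iperf -c <C_IP> -u -b 10M -t 30   # on A', scenario 3 (UDP towards C)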
Points to note:
- The metrics to watch are throughput, bandwidth, and the congestion window; parsing the captured packets should give more accurate numbers (it is also worth looking at the behaviour in the different congestion-control phases, e.g. slow start).
- The links need a bandwidth limit; adjust the bandwidth during testing to see whether the flows can saturate the bottleneck.
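One way to watch these metrics during a run (a sketch; the ss fields are those shipped with Ubuntu 20.04 and <C_IP> / <SENDER_IFACE> are placeholders):
# Per-second throughput report from the sender side
iperf -c <C_IP> -t 30 -i 1
# Congestion window / ssthresh of the active TCP flow, refreshed every second (run on the sender)
watch -n 1 "ss -tin dst <C_IP>"
# Or capture the packets for offline analysis in Wireshark
tcpdump -i <SENDER_IFACE> -w flow.pcap host <C_IP>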
- Network environment: a custom topology built with Mininet + Python; traffic is sent with iperf from A and A' to C
- (Docker containers were tried first, but that took too much time; Mininet satisfies the requirements above quickly and simply, and packets can also be captured with the Wireshark GUI)
- A, C and sink are hosts; B is a switch
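A minimal sketch of such a Mininet script (the class name, the 10 Mbit/s link bandwidths, and the use of A2 in place of A' are placeholders, not the values actually used in the experiments):
#!/usr/bin/env python3
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink
from mininet.cli import CLI
from mininet.log import setLogLevel

class CourseTopo(Topo):
    def build(self):
        # A and A2 (standing in for A') are senders, C is the receiver,
        # and sink absorbs A2's traffic in scenario 2
        a, a2, c, sink = [self.addHost(h) for h in ('A', 'A2', 'C', 'sink')]
        b = self.addSwitch('B1')    # B is the only switch ('B1' so a datapath ID can be derived)
        # Bandwidth-limited links so the bottleneck can actually be saturated
        self.addLink(a, b, cls=TCLink, bw=10)    # 10 Mbit/s is a placeholder value
        self.addLink(a2, b, cls=TCLink, bw=10)
        self.addLink(b, c, cls=TCLink, bw=10)    # bottleneck towards the receiver
        self.addLink(b, sink)                    # B-sink left unlimited (scenario 2)

if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=CourseTopo(), link=TCLink)
    net.start()
    CLI(net)    # run the iperf commands above from here, then capture with Wireshark
    net.stop()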
Reference:
- RDMA tutorial sample code: The Geek in the Corner
- SoftRoCE Setup (中)
- RDMA (Remote Direct Memory Access)
- Building an RDMA-capable application with IB verbs, part 1: basics
- Congestion Control for Large-Scale RDMA Deployments
- Introducing RDMA into computer networks course: design and experience