Table of Contents

In Brief

In Detail

How SR-IOV improves performance

The two function types in SR-IOV

Scripts to list SR-IOV ports and show the PF-VF mapping

DPDK vs SR-IOV for NFV? – Why a wrong decision can impact performance!

What is DPDK?

DPDK with OVS

DPDK (OVS + VNF)

SR-IOV

When to use DPDK and/or SR-IOV

If traffic is East-West, DPDK wins against SR-IOV

If traffic is North-South, SR-IOV wins against DPDK

Conclusion with an Example


In Brief

SR-IOV = PF + VF

SR-IOV has two important components: the VF and the PF. Each PF is a full-featured PCIe function and can be associated with multiple VFs, while each VF owns the performance-critical resources it needs and shares the underlying physical device. In short, the PF exposes complete PCIe functionality, and each VF can independently use the key functions.
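As a quick illustration, on Linux the VFs are created from the PF through standard sysfs attributes exposed by the kernel. A minimal sketch, assuming the PF netdev is named eth0 and the NIC supports at least 4 VFs (both are placeholders; adjust for your system):

# How many VFs this PF can support
cat /sys/class/net/eth0/device/sriov_totalvfs
# Create 4 VFs (writing 0 removes them again)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
# Each VF now appears as its own PCIe device
lspci | grep -i "Virtual Function"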

In Detail

SR-IOV is a hardware virtualization solution that improves the performance and scalability of packet I/O for virtual machines on a server.

The SR-IOV standard allows PCIe (Peripheral Component Interconnect Express) devices to be shared efficiently among virtual machines, and because it is implemented in hardware, it can deliver I/O performance close to that of the native device.

How SR-IOV improves performance

SR-IOV improves VM performance because it implements I/O virtualization in hardware. The VMM (virtual machine monitor) no longer intervenes in the guest's I/O path: the IOMMU remaps guest addresses to host physical addresses, so data can be moved at high speed between the host and the VF device directly via DMA, with the device raising interrupts as needed. When an interrupt fires, the VMM identifies the guest from the interrupt vector and delivers a virtual MSI interrupt to that guest.
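The DMA remapping described above depends on the platform IOMMU (Intel VT-d / AMD-Vi) being enabled. As a hedged sketch of how to verify this on a Linux host (the kernel parameter shown is the common Intel one):

# The kernel must have been booted with the IOMMU on, e.g. intel_iommu=on
cat /proc/cmdline
# If remapping is active, devices show up under IOMMU groups
find /sys/kernel/iommu_groups/ -type l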

How do the PF and a VF communicate with each other? For example, a VF forwards guest I/O requests to the PF, and the PF notifies VFs of global events such as a device reset. Some devices use a doorbell mechanism: the sender places a message in a mailbox and "rings the doorbell", which raises an interrupt to notify the receiver; after reading the message, the receiver sets a flag in a shared register to acknowledge receipt.

(PCIe Switch and SR-IOV: http://www.ssdfans.com/?p=3873)

The two function types in SR-IOV

1. Physical Function (PF): a PCIe function that supports SR-IOV and has the full ability to configure and control the PCIe device's resources.

2. Virtual Function (VF): a lightweight PCIe function associated with a PF. A VF can share one or more physical resources with its PF and with the other VFs associated with the same PF.

SR-IOV requires support from both hardware and software. When available, it improves performance, saves cost and power, and simplifies the adapter and cabling work for network devices.
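Once the VFs exist, the PF driver lets the host administrator set per-VF properties with standard iproute2 commands. A small sketch, assuming the PF is eth0 (the MAC, VLAN, and rate values are placeholders):

# Assign a fixed MAC address to VF 0
ip link set dev eth0 vf 0 mac 52:54:00:12:34:56
# Put VF 1 on VLAN 100 and cap its transmit rate at 1000 Mbit/s
ip link set dev eth0 vf 1 vlan 100
ip link set dev eth0 vf 1 rate 1000
# Show the per-VF configuration
ip link show dev eth0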

References:

1. 《云计算网络珠玑》

2. https://blog.csdn.net/u011955950/article/details/19071551

3. https://blog.csdn.net/tiantao2012/article/details/68941479

Scripts to list SR-IOV ports and show the PF-VF mapping

$ cat pf-vf

echo "physfn is $1"echo "pf info:"ls /sys/class/net/$1 -lecho "vf info:"eth_dev=`ls /sys/class/net/$1/device/virtfn* -l | cut -d ">" -f 2 |cut -d "/" -f 2`for i in $eth_dev; do echo "`ls /sys/bus/pci/devices/$i/net` --> $i"; done

$ cat vf-pf

echo "vf info:"ls /sys/class/net/$1 -lNAME=`ls /sys/class/net/$1/device/physfn/net/`echo "pf info:"echo "physfn is $NAME"ls /sys/class/net/$NAME -l高性能网络 SR-IOV机制--VF与PF的通信

DPDK vs SR-IOV for NFV? – Why a wrong decision can impact performance!

By Faisal / Last Updated On: March 13, 2021

It is not easy to settle the DPDK vs SR-IOV debate; both technologies are used to optimize packet processing in NFV servers.

For one, you will find supporters on both sides with their claims and arguments.

However, although both are used to increase packet processing performance in servers, which one is better comes down to the design rather than the technologies themselves.

So a wrong decision on DPDK vs SR-IOV can really hurt throughput performance, as you will see towards the conclusion of this article.

To understand why design matters, you must first understand the technologies themselves, starting from how Linux processes packets.

In particular, this article attempts to answer the following questions:

  1. What is DPDK?
  2. What is SR-IOV?
  3. How is DPDK different from SR-IOV?
  4. What are the right use cases for both, and how should each be positioned?
  5. How do DPDK and SR-IOV affect throughput performance?

I recommend reading from the beginning to the end in order to understand the conclusion properly.

What is DPDK?

DPDK stands for Data Plane Development Kit.

In order to understand DPDK, we should first know how Linux handles networking.

By default, Linux uses the kernel to process packets. This puts pressure on the kernel to process packets ever faster, as NIC (Network Interface Card) speeds keep increasing rapidly.

There have been many techniques that bypass the kernel to achieve packet efficiency. These involve processing packets in user space instead of kernel space. DPDK is one such technology.

User space versus kernel space in Linux
Kernel space is where the kernel (i.e., the core of the operating system) runs and provides its services. It sets things up so that separate user processes see and manipulate only their own memory space.
User space is the portion of system memory in which user processes run. Kernel space can be accessed by user processes only through the use of system calls.

Let’s see how Linux networking uses kernel space:

For normal packet processing, packets from the NIC are pushed through the Linux kernel before reaching the application.

However, the introduction of DPDK (Data Plane Development Kit) changes the landscape, as the application can talk directly to the NIC, completely bypassing the Linux kernel.

Indeed fast switching, isn't it?

Without DPDK, packet processing goes through the kernel network stack, which is interrupt-driven. Each time the NIC receives incoming packets, there is a kernel interrupt to process them and a context switch from kernel space to user space. This creates delay.

With DPDK, there is no need for interrupts, as the processing happens in user space using poll mode drivers. These poll mode drivers poll data directly from the NIC, providing fast switching by completely bypassing kernel space. This improves data throughput.
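To make this concrete, here is a minimal, hedged sketch of preparing a NIC for a DPDK poll mode driver with the dpdk-devbind.py tool that ships with DPDK (the PCI address 0000:02:00.0 is a placeholder):

# Reserve 2 MB hugepages for DPDK's packet buffers
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# Load the user-space I/O driver
modprobe vfio-pci
# Detach the NIC from its kernel driver and hand it to vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:02:00.0
# Verify the binding
dpdk-devbind.py --status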

DPDK with OVS

Now that we know the basics of how the Linux networking stack works and what role DPDK plays, let us turn our attention to how OVS (Open vSwitch) works with and without DPDK.

What is OVS (Open vSwitch)?
Open vSwitch is a production-quality, multilayer virtual switch licensed under the open-source Apache 2.0 license. It runs as software in the hypervisor and enables virtual networking of virtual machines.
Main components include:
Forwarding path: the datapath/forwarding path is the main packet-forwarding module of OVS, implemented in kernel space for high performance.
vswitchd: the main Open vSwitch user-space program.

An OVS is shown as part of the VNF implementation. OVS sits in the hypervisor, and traffic can easily move from one VNF to another through the OVS, as shown.

In fact, OVS was never designed for the telco workloads of NFV. Traditional web applications are not throughput-intensive, so OVS could get away with it there.

Now let’s try to dig deeper into how OVS processes traffic.

OVS, no matter how good it is, faces the same problem as the Linux networking stack discussed earlier: the forwarding plane of OVS is part of the kernel, as shown below, and is therefore a potential bottleneck as throughput requirements increase.

Open vSwitch can be combined with DPDK for better performance, resulting in a DPDK-accelerated OVS (OVS+DPDK). The goal is to replace the standard OVS kernel forwarding path with a DPDK-based forwarding path, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. This increases the performance of the OVS switch, as it runs entirely in user space, as shown below.
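As a sketch of what enabling the user-space datapath looks like in practice (bridge and port names and the PCI address are placeholders; the settings follow the documented OVS-with-DPDK workflow):

# Tell ovs-vswitchd to initialize DPDK
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# Create a bridge backed by the user-space (netdev) datapath
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# Add the physical NIC as a DPDK port
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:01:00.0
# Add a vhost-user port for connecting a VM
ovs-vsctl add-port br0 vhostuser0 -- set Interface vhostuser0 type=dpdkvhostuser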

DPDK (OVS + VNF)

It is also possible to run DPDK in the VNF instead of in OVS. Here the application itself takes advantage of DPDK instead of the standard Linux networking stack, as described in the first section.

This implementation can be combined with DPDK in OVS, but that is another level of optimization. The two are not dependent on one another, and either can be implemented without the other.

SR-IOV

SR-IOV stands for "Single Root I/O Virtualization". It takes the packet-processing performance of the compute hardware to the next level.

The trick here is to avoid the hypervisor altogether and have the VNF access the NIC directly, enabling close to line-rate throughput.

But to understand this concept properly, let's introduce an intermediate step, where hypervisor pass-through is possible even without SR-IOV.

This is called PCI passthrough. A complete NIC can be presented to the guest OS without involving the hypervisor, so the VM believes it is directly connected to the NIC. As shown here, there are two NIC cards, and two of the VNFs each have exclusive access to one of them.

However, there is a downside: the two NICs below are occupied exclusively by VNF1 and VNF3, and since there is no third dedicated NIC, VNF2 is left without any access.

SR-IOV solves exactly this issue:

The SR-IOV specification defines a standardized mechanism to virtualize PCIe devices. This mechanism can make a single PCIe Ethernet controller appear as multiple PCIe devices.

By creating virtual slices of a PCIe device, each of which can be assigned to a single VM/VNF, SR-IOV eliminates the problem caused by the limited number of NICs.

Multiple Virtual Functions (VFs) are created on a shared NIC, and these virtual slices are presented to the VNFs.

(PF stands for Physical Function; this is the physical function of the device that supports SR-IOV.)
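As a hedged sketch of assigning one of these VFs to a VM with QEMU/KVM and vfio-pci (the VF address 0000:17:02.0 and the disk image name are placeholders):

# Bind the VF to vfio-pci so it can be passed to a guest
dpdk-devbind.py --bind=vfio-pci 0000:17:02.0
# Start the guest; the VF appears inside it as a regular PCIe NIC
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
  -drive file=vnf.qcow2,format=qcow2 \
  -device vfio-pci,host=0000:17:02.0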

This can be further coupled with DPDK inside the VNF, taking combined advantage of DPDK and SR-IOV.

When to use DPDK and/or SR-IOV

The earlier discussion shows two clear cases: one using a pure DPDK solution without SR-IOV, and the other based on SR-IOV (although a mix of the two, in which SR-IOV is combined with DPDK, is also possible). The former uses OVS; the latter does not need OVS. For understanding the positioning of DPDK vs SR-IOV, we will use just these two cases.

On the face of it, SR-IOV may appear to be the better solution, as it uses hardware-based switching and is not constrained by OVS, which is a purely software-based solution. However, it is not as simple as that.

To understand their positioning, we should understand what East-West and North-South traffic mean in data centers.

There is a good study by Intel on DPDK vs SR-IOV; it found two different scenarios in which one is better than the other.

If traffic is East-West, DPDK wins against SR-IOV

In a situation where the traffic is East-West within the same server (and I repeat, the same server), DPDK wins against SR-IOV. The situation is shown in the diagram below.

This is clear from the throughput comparison in the Intel test report shown below.

It is very simple to understand: if traffic is routed or switched within the server and never reaches the NIC, there is no advantage in bringing in SR-IOV. Rather, SR-IOV can become a bottleneck (the traffic path becomes longer and NIC resources are consumed), so it is better to switch the traffic within the server using DPDK.

If traffic is North-South, SR-IOV wins against DPDK

In a scenario where traffic is North-South (including traffic that is East-West but flows from one server to another), SR-IOV wins against DPDK. The precise label for this scenario would be traffic going from one server to another server.

(DPDK vs SR-IOV for NFV? - Why a wrong decision can impact performance! - https://telcocloudbridge.com/blog/dpdk-vs-sr-iov-for-nfv-why-a-wrong-decision-can-impact-performance/)

The following chart from the Intel test report clearly shows that SR-IOV throughput wins in such cases.

This is also easy to interpret: the traffic has to pass through the NIC anyway, so why involve a DPDK-based OVS and create more bottlenecks? SR-IOV is the much better solution here.

Conclusion with an Example

So let's summarize the DPDK vs SR-IOV discussion.

I will make it very easy. If traffic is switched within a server (the VNFs are on the same server), DPDK is better. If traffic is switched from one server to another, SR-IOV performs better.

It is thus apparent that you should know your design and traffic flows. Making the wrong decision will hurt performance in terms of lower throughput, as the graphs above show.

So let's say you have a service-chaining application for microservices within one server: DPDK is the solution for you. On the other hand, if you have a service-chaining use case where the applications reside on different servers, SR-IOV should be your selection. But don't forget that you can always combine SR-IOV with DPDK in the VNF (not the DPDK-in-OVS case, as explained above) to further optimize an SR-IOV-based design, as sketched below.
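Finally, as a hedged sketch of that combined SR-IOV + DPDK-in-VNF option: inside the guest, the VF can itself be bound to DPDK and driven by a poll mode driver, for example with DPDK's testpmd reference application (the guest-side VF address 0000:00:05.0 is a placeholder):

# Inside the VM: detach the VF from the kernel and bind it to DPDK
# (on guests without a vIOMMU, vfio-pci may need its noiommu mode enabled)
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:00:05.0
# Forward packets on the VF with a poll mode driver
dpdk-testpmd -l 0-1 -n 4 -a 0000:00:05.0 -- -i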

What’s your opinion here. Leave a comment below?
