VM for dpdk

VM Setup for DPDK testing

==Host==

Check that the host CPU has the 1GB hugepage flag (pdpe1gb):

<PRE>
# grep pdpe1gb /proc/cpuinfo | uniq
flags           : [...] pdpe1gb [...]
</PRE>

Edit /etc/default/grub (and run update-grub afterwards):

<PRE>
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=30"
</PRE>

Reboot and check that all parameters have been applied:

<PRE>
# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.15.0-43-generic root=UUID=5f20cc6c-ff53-11e8-8f82-ecf4bbc26e30 ro maybe-ubiquity intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=128.0 transparent_hugepage=never isolcpu=2-11 rcu_nocbs=2-11 nohz=off
</PRE>
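Optionally, verify that the IOMMU really came up (the exact dmesg wording varies by kernel, so treat this as a sanity check rather than a fixed pattern):
<PRE>
# dmesg | grep -i -e DMAR -e IOMMU
# ls /sys/kernel/iommu_groups/ | wc -l
</PRE>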
Check hugepages:
<PRE>
# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     100
HugePages_Free:       99
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
</PRE>
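On a multi-socket host it is also worth checking how the 1G pages are spread across the NUMA nodes; the sysfs path below is the standard one for 1048576 kB pages:
<PRE>
# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
</PRE>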

Get NIC NUMA node:

<PRE>
# cat /sys/bus/pci/devices/0000\:d8\:00.1/numa_node
1
</PRE>

The PCI device address can be found with:

<PRE>
# lshw -class network -businfo
</PRE>
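If lshw is not available, lspci shows the same addresses (the -D flag prints the PCI domain; the grep pattern assumes an Ethernet-class NIC):
<PRE>
# lspci -D | grep -i ethernet
</PRE>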

Get the CPUs on the required NUMA node for VM CPU pinning:

<PRE>
# lscpu | grep NUMA
NUMA node(s):          2
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47
</PRE>

==VMs==

===Outside VM===

Edit the VM settings with virsh:

<PRE>
virsh edit <VM NAME>
</PRE>

CPU pinning (manual configuration; a <cputune> section needs to be added):

<PRE>
<vcpu placement='static' cpuset='1,3,5,7'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
</PRE>
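Optionally, the QEMU emulator threads can be pinned away from the vCPU cores as well; <emulatorpin> is a standard libvirt element, and the CPU number below is only an illustrative pick from the same NUMA node:
<PRE>
<cputune>
  ...
  <emulatorpin cpuset='9'/>
</cputune>
</PRE>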
Add hugepages configuration:
<PRE>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
</PRE>
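If the host offers multiple hugepage sizes, the 1G pages can be requested explicitly; the <page> element is standard libvirt, shown here as a sketch:
<PRE>
  <memoryBacking>
    <hugepages>
      <page size='1' unit='GiB'/>
    </hugepages>
  </memoryBacking>
</PRE>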
Enable the same CPU flags as on the host CPU:
<PRE>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
</PRE>
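A common alternative is host-passthrough, which exposes the host CPU one-to-one but ties the guest to this exact CPU model:
<PRE>
  <cpu mode='host-passthrough' check='none'/>
</PRE>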


The video card MUST be set to virtio (it can be configured using virt-manager):

<PRE>
<video>
  <model type='virtio' heads='1' primary='yes'>
    <acceleration accel3d='no'/>
  </model>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
</PRE>

===Inside VM===

Manual module load:
<PRE>
modprobe vfio enable_unsafe_noiommu_mode=Y
</PRE>

or add it to the configuration file /etc/modprobe.d/vfio.conf:

<PRE>
options vfio enable_unsafe_noiommu_mode=Y
</PRE>
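Either way, the active value can be verified through the standard module parameter path in sysfs once vfio is loaded:
<PRE>
# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
Y
</PRE>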

Load the kernel modules:
<PRE>
modprobe vfio enable_unsafe_noiommu_mode=Y
modprobe uio
</PRE>
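To have the modules loaded on every boot, they can also be listed in a modules-load.d file (the file name dpdk.conf is an arbitrary choice; the module options still come from /etc/modprobe.d/vfio.conf above):
<PRE>
# cat /etc/modules-load.d/dpdk.conf
vfio
uio
</PRE>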


Edit /etc/grub.d/50-curtin-settings.cfg:

GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity default_hugepagesz=1G hugepagesz=1G hugepages=128.0 transparent_hugepage=never intel_iommu=on iommu=pt"

Check that the hugepages have been applied:

<PRE>
# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:       3
HugePages_Free:        2
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
</PRE>
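With vfio and hugepages in place, the NIC inside the VM can be bound to vfio-pci for DPDK. dpdk-devbind.py ships with DPDK; the PCI address below is a hypothetical guest address, so substitute the one reported by --status:
<PRE>
# dpdk-devbind.py --status
# dpdk-devbind.py --bind=vfio-pci 0000:00:06.0
</PRE>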