I assume you have basic knowledge about libvirt. If not, refer to https://libvirt.org/formatdomain.html to learn the basic concepts.
First of all, we need to define a VM. It's fine to use an IaaS platform or to define one yourself.
You can easily get your VM's domain ID with this command
virsh list --all
Then execute
virsh edit <domain-id>
Libvirt will open the domain XML file of the VM you defined before. Find the cpu tags; you should see something like this
<vcpu placement='static'>2</vcpu>
<cpu>
  <topology sockets='1' cores='2' threads='1'/>
</cpu>
At this point we have two ways to do hot plug.
The first is to use the vcpu-related virsh commands to do CPU hot plug.
step 1 : shut down the VM
virsh destroy <domain-id>
step 2 : edit the domain XML to enable hot plug on it
virsh edit <vm name>
in this step, we need to change the vcpu config from
<vcpu placement='static'>1</vcpu>
to
<vcpu placement='auto' current='1'>4</vcpu>
The value of current is the number of CPUs the VM has when it first starts.
You also need to change the cpu config at the same time, from
<cpu>
  <topology sockets='1' cores='1' threads='1'/>
</cpu>
to
<cpu>
  <topology sockets='4' cores='1' threads='1'/>
</cpu>
notice: without changing the cpu topology, you might get this error
error: Maximum CPUs greater than topology limit
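The relationship behind that error is simple arithmetic: the maximum vCPU count in the vcpu element must not exceed sockets × cores × threads from the topology element. A quick sketch:

```python
def topology_limit(sockets, cores, threads):
    # libvirt rejects a <vcpu> maximum above sockets * cores * threads
    return sockets * cores * threads

# <topology sockets='4' cores='1' threads='1'/> admits the 4 vCPUs we ask for:
assert topology_limit(4, 1, 1) >= 4
# the original sockets='1' cores='1' threads='1' topology only admits 1,
# which is what triggers the error above:
assert topology_limit(1, 1, 1) < 4
```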
After updating the config, start the VM
virsh start <vm name>
Check the state by logging in with
virsh console <vm name>
and inspecting
cat /proc/cpuinfo
to verify that the number of CPUs working after the VM starts matches the current value we set in the vcpu config.
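From inside the guest you can also count the visible CPUs programmatically; a minimal sketch (Linux-only, since sched_getaffinity is a Linux call):

```python
import os

# Number of CPUs the current process may run on; right after boot this
# should equal the `current` value from the <vcpu> element.
online_cpus = len(os.sched_getaffinity(0))
print(online_cpus)
```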
step 3 : do the hot plug
This step covers the use of virsh setvcpus. If you already know this command, skip this step.
virsh setvcpus <vm name> 2 --config --live
will set the current value to 2, and you can check the changed value with
virsh edit <vm name>
or
virsh cpu-stats <vm name>
And use
virsh console <vm name>
and execute
ls /sys/devices/system/cpu
you can see that the CPU devices were successfully attached to the VM, but they are not working when you use top because the new CPUs are still offline.
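Without the guest agent, the new CPUs can be brought online by hand from inside the guest by writing 1 to each cpuN/online file under /sys/devices/system/cpu. A sketch of that loop; the sysfs path is parameterized only so the logic can be tried outside a VM, and in a real guest it must run as root against the default path:

```python
import os

def bring_cpus_online(sysfs_cpu_dir="/sys/devices/system/cpu"):
    """Write '1' to every cpuN/online file that currently reads '0'."""
    onlined = []
    for name in sorted(os.listdir(sysfs_cpu_dir)):
        online_path = os.path.join(sysfs_cpu_dir, name, "online")
        # cpu0 usually has no 'online' file; this also skips non-cpu entries
        if not name.startswith("cpu") or not os.path.isfile(online_path):
            continue
        with open(online_path) as f:
            offline = f.read().strip() == "0"
        if offline:
            with open(online_path, "w") as f:
                f.write("1")
            onlined.append(name)
    return onlined
```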
If the QEMU guest agent is installed, change the hot plug command to
virsh setvcpus <vm name> 2 --guest
and you will get two running CPUs.
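The same setvcpus variants can also be issued through the libvirt API. A sketch, assuming libvirt-python is installed; hotplug_vcpus is a hypothetical helper name, and the flag constants mirror virsh's --live, --config and --guest options:

```python
def hotplug_vcpus(dom, count, live=True, config=True, guest=False):
    """Mirror `virsh setvcpus` for a libvirt.virDomain `dom` (a sketch)."""
    import libvirt  # deferred import so the sketch reads standalone

    if guest:
        # like `virsh setvcpus <vm name> N --guest` (needs the guest agent)
        flags = libvirt.VIR_DOMAIN_VCPU_GUEST
    else:
        # like `virsh setvcpus <vm name> N --live --config`
        flags = 0
        if live:
            flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
        if config:
            flags |= libvirt.VIR_DOMAIN_AFFECT_CONFIG
    dom.setVcpusFlags(count, flags)
```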
Refer to https://earlruby.org/2014/05/increase-a-vms-available-memory-with-virsh/ and confirm you have the balloon driver in your VM.
notice: you need to update the domain XML first to set memory and currentMemory to suitable values.
This feature uses the balloon driver to implement the memory increase or decrease. The problem is that when you use
free -m
in the VM to check the memory, you see less than you set. For example, on CentOS 7.2, when you set memory to 3 GB you actually see 2751 MB (expected: 3072 MB), a shortfall of 321 MB (the VM was created with 1024 MB of memory); roughly 30%-40% of the initial memory will be reserved for safety considerations.
To tell libvirt that we want memory hotplug, there has to be a maxMemory element and a NUMA node declaration. If not present, the following lines should be added:
<domain type='kvm'>
  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
  <cpu>
    <numa>
      <cell id='0' cpus='0-2' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
</domain>
maxMemory specifies the maximum amount of memory that is allowed to be plugged in, and its value behaves the same way as the memory element.
The second limitation is the number of modules that can be plugged in. I currently stick to 16 slots, as that allows more small increases and smaller steps when unplugging if needed.
Libvirt currently enforces a specified NUMA node for memory hotplug. id specifies the node number, and cpus specifies which vCPUs belong to this node. Even if not all vCPUs are active, the maximum number of vCPUs should be specified here.
In this case, I have 3 vCPUs, and libvirt uses zero-based numbering here (hence cpus='0-2'). memory and unit should match the omitted memory element.
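To make the limits concrete with the numbers from the XML above (the per-slot average is only illustrative, since each plugged DIMM can be any supported size):

```python
# Values from the example XML above, in KiB.
max_memory_kib = 16777216    # <maxMemory slots='16' unit='KiB'>
boot_memory_kib = 1048576    # <cell ... memory='1048576' unit='KiB'/>
slots = 16

pluggable_kib = max_memory_kib - boot_memory_kib
print(pluggable_kib // (1024 * 1024), "GiB of memory can still be plugged")
print(pluggable_kib // slots // 1024, "MiB per slot on average")
```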
virsh requires an XML file describing a memory device. To add 128 MiB, the file would look like the following.
<memory model='dimm'>
  <target>
    <size unit='MiB'>128</size>
    <node>0</node>
  </target>
</memory>
To add it to the running guest and also persist it in the config, run the following.
virsh attach-device <vm name> <xml filename> --config --live
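The device XML can also be generated programmatically instead of written by hand; a minimal sketch using only Python's standard library (dimm_device_xml is a hypothetical helper, not a libvirt function):

```python
import xml.etree.ElementTree as ET

def dimm_device_xml(size_mib, node=0):
    """Build the <memory model='dimm'> device XML used by attach-device."""
    mem = ET.Element("memory", model="dimm")
    target = ET.SubElement(mem, "target")
    size = ET.SubElement(target, "size", unit="MiB")
    size.text = str(size_mib)
    node_el = ET.SubElement(target, "node")
    node_el.text = str(node)
    return ET.tostring(mem, encoding="unicode")

print(dimm_device_xml(128))
```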
The second way is to hot plug memory with the libvirt API:
import libvirt

conn = libvirt.open()
vm = conn.lookupByName("vm_name")
xml = ("<memory model='dimm'>"
       "<target><size unit='MiB'>128</size><node>0</node></target>"
       "</memory>")
vm.attachDeviceFlags(xml,
    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)