virtio-mem: Paravirtualized memory hot(un)plug
This is the very basic/initial version of virtio-mem. An introduction to
virtio-mem can be found in the Linux kernel driver [1]. While it can be
used in the current state for hotplug of a smaller amount of memory, it
will heavily benefit from resizeable memory regions in the future.
Each virtio-mem device manages a memory region (provided via a memory
backend). After the hypervisor sets a request ("requested-size"), the
guest can try to plug/unplug blocks of memory within that region, in order
to reach the requested size. Initially, and after a reboot, all memory is
unplugged (except in special cases - reboot during postcopy).
The guest may only try to plug/unplug blocks of memory within the usable
region size. The usable region size is a little bigger than the
requested size, to give the device driver some flexibility. The usable
region size will only grow, except on reboots or when all memory is
requested to get unplugged. The guest can never plug more memory than
requested. Unplugged memory will get zapped/discarded, similar to a balloon
device.
The block size is variable, however, it is always chosen in a way such that
THP splits are avoided (e.g., 2MB). The state of each block
(plugged/unplugged) is tracked in a bitmap.
As virtio-mem devices (e.g., virtio-mem-pci) will be memory devices, we now
expose "VirtioMEMDeviceInfo" via "query-memory-devices".
--------------------------------------------------------------------------
There are two important follow-up items that are in the works:
1. Resizeable memory regions: Use resizeable allocations/RAM blocks to
grow/shrink along with the usable region size. This avoids creating
initially very big VMAs, RAM blocks, and KVM slots.
2. Protection of unplugged memory: Make sure the guest cannot actually
make use of unplugged memory.
Other follow-up items that are in the works:
1. Exclude unplugged memory during migration (via precopy notifier).
2. Handle remapping of memory.
3. Support for other architectures.
--------------------------------------------------------------------------
Example usage (virtio-mem-pci is introduced in follow-up patches):
Start QEMU with two virtio-mem devices (one per NUMA node):
$ qemu-system-x86_64 -m 4G,maxmem=20G \
-smp sockets=2,cores=2 \
-numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3 \
[...]
-object memory-backend-ram,id=mem0,size=8G \
-device virtio-mem-pci,id=vm0,memdev=mem0,node=0,requested-size=0M \
-object memory-backend-ram,id=mem1,size=8G \
-device virtio-mem-pci,id=vm1,memdev=mem1,node=1,requested-size=1G
Query the configuration:
(qemu) info memory-devices
Memory device [virtio-mem]: "vm0"
memaddr: 0x140000000
node: 0
requested-size: 0
size: 0
max-size: 8589934592
block-size: 2097152
memdev: /objects/mem0
Memory device [virtio-mem]: "vm1"
memaddr: 0x340000000
node: 1
requested-size: 1073741824
size: 1073741824
max-size: 8589934592
block-size: 2097152
memdev: /objects/mem1
Add some memory to node 0:
(qemu) qom-set vm0 requested-size 500M
Remove some memory from node 1:
(qemu) qom-set vm1 requested-size 200M
Query the configuration again:
(qemu) info memory-devices
Memory device [virtio-mem]: "vm0"
memaddr: 0x140000000
node: 0
requested-size: 524288000
size: 524288000
max-size: 8589934592
block-size: 2097152
memdev: /objects/mem0
Memory device [virtio-mem]: "vm1"
memaddr: 0x340000000
node: 1
requested-size: 209715200
size: 209715200
max-size: 8589934592
block-size: 2097152
memdev: /objects/mem1
[1] https://lkml.kernel.org/r/20200311171422.10484-1-david@redhat.com
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20200626072248.78761-11-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

/*
 * Virtio MEM device
 *
 * Copyright (C) 2020 Red Hat, Inc.
 *
 * Authors:
 *  David Hildenbrand <david@redhat.com>
 *
 * This work is licensed under the terms of the GNU GPL, version 2.
 * See the COPYING file in the top-level directory.
 */

#include "qemu/osdep.h"
#include "qemu/iov.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/units.h"
#include "sysemu/numa.h"
#include "sysemu/sysemu.h"
#include "sysemu/reset.h"
#include "sysemu/runstate.h"
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-bus.h"
#include "hw/virtio/virtio-mem.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "exec/ram_addr.h"
#include "migration/misc.h"
#include "hw/boards.h"
#include "hw/qdev-properties.h"
#include CONFIG_DEVICES
#include "trace.h"

static const VMStateDescription vmstate_virtio_mem_device_early;

virtio-mem: Support VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE
With VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE, we signal the VM that reading
unplugged memory is not supported. We have to fail feature negotiation
in case the guest does not support VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE.
First, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE is required to properly handle
memory backends (or architectures) without support for the shared zeropage
in the hypervisor cleanly. Without the shared zeropage, even reading an
unpopulated virtual memory location can populate real memory and
consequently consume memory in the hypervisor. We have a guaranteed shared
zeropage only on MAP_PRIVATE anonymous memory.
Second, we want VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE to be the default
long-term as even populating the shared zeropage can be problematic: for
example, without THP support (possible) or without support for the shared
huge zeropage with THP (unlikely), the PTE page tables to hold the shared
zeropage entries can consume quite some memory that cannot be reclaimed
easily.
Third, there are other optimizations+features (e.g., protection of
unplugged memory, reducing the total memory slot size and bitmap sizes)
that will require VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE.
We really only support x86 targets with virtio-mem for now (and
Linux similarly only supports x86), but that might change soon, so prepare
for different targets already.
Add a new "unplugged-inaccessible" tristate property for x86 targets:
- "off" will keep VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE unset and legacy
guests working.
- "on" will set VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE and stop legacy guests
from using the device.
- "auto" selects the default based on support for the shared zeropage.
Warn in case the property is set to "off" and we don't have support for the
shared zeropage.
For existing compat machines, the property will default to "off", to
not change the behavior but eventually warn about a problematic setup.
Short-term, we'll set the property default to "auto" for new QEMU machines.
Mid-term, we'll set the property default to "on" for new QEMU machines.
Long-term, we'll deprecate the parameter and disallow legacy
guests completely.
The property has to match on the migration source and destination. "auto"
will result in the same VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE setting as long
as the qemu command line (esp. memdev) matches -- so "auto" is good enough
for migration purposes and the parameter doesn't have to be migrated
explicitly.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20211217134039.29670-3-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

/*
 * We only had legacy x86 guests that did not support
 * VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE. Other targets don't have legacy guests.
 */
#if defined(TARGET_X86_64) || defined(TARGET_I386)
#define VIRTIO_MEM_HAS_LEGACY_GUESTS
#endif

/*
 * Let's not allow blocks smaller than 1 MiB, for example, to keep the tracking
 * bitmap small.
 */
#define VIRTIO_MEM_MIN_BLOCK_SIZE ((uint32_t)(1 * MiB))
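
/*
 * Back-of-the-envelope illustration of the comment above (helper added for
 * clarity only, not part of the device logic): one tracking bit is needed per
 * block, so for an 8 GiB region, 2 MiB blocks need 4096 bits (512 bytes) and
 * the 1 MiB minimum still only needs 8192 bits (1 KiB).
 */
static inline uint64_t virtio_mem_example_bitmap_bits(uint64_t region_size,
                                                      uint64_t block_size)
{
    return region_size / block_size;
}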

static uint32_t virtio_mem_default_thp_size(void)
{
    uint32_t default_thp_size = VIRTIO_MEM_MIN_BLOCK_SIZE;

#if defined(__x86_64__) || defined(__arm__) || defined(__powerpc64__)
    default_thp_size = 2 * MiB;
#elif defined(__aarch64__)
    if (qemu_real_host_page_size() == 4 * KiB) {
        default_thp_size = 2 * MiB;
    } else if (qemu_real_host_page_size() == 16 * KiB) {
        default_thp_size = 32 * MiB;
    } else if (qemu_real_host_page_size() == 64 * KiB) {
        default_thp_size = 512 * MiB;
    }
#endif

    return default_thp_size;
}

/*
 * We want to have a reasonable default block size such that
 * 1. We avoid splitting THPs when unplugging memory, which degrades
 *    performance.
 * 2. We avoid placing THPs for plugged blocks that also cover unplugged
 *    blocks.
 *
 * The actual THP size might differ between Linux kernels, so we try to probe
 * it. In the future (if we ever run into issues regarding 2.), we might want
 * to disable THP in case we fail to properly probe the THP size, or if the
 * block size is configured smaller than the THP size.
 */
static uint32_t thp_size;

#define HPAGE_PMD_SIZE_PATH "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size"
static uint32_t virtio_mem_thp_size(void)
{
    gchar *content = NULL;
    const char *endptr;
    uint64_t tmp;

    if (thp_size) {
        return thp_size;
    }

    /*
     * Try to probe the actual THP size, fallback to (sane but eventually
     * incorrect) default sizes.
     */
    if (g_file_get_contents(HPAGE_PMD_SIZE_PATH, &content, NULL, NULL) &&
        !qemu_strtou64(content, &endptr, 0, &tmp) &&
        (!endptr || *endptr == '\n')) {
        /* Sanity-check the value and fallback to something reasonable. */
        if (!tmp || !is_power_of_2(tmp)) {
            warn_report("Read unsupported THP size: %" PRIx64, tmp);
        } else {
            thp_size = tmp;
        }
    }

    if (!thp_size) {
        thp_size = virtio_mem_default_thp_size();
        warn_report("Could not detect THP size, falling back to %" PRIx64
                    " MiB.", thp_size / MiB);
    }

    g_free(content);
    return thp_size;
}

static uint64_t virtio_mem_default_block_size(RAMBlock *rb)
{
    const uint64_t page_size = qemu_ram_pagesize(rb);

    /* We can have hugetlbfs with a page size smaller than the THP size. */
    if (page_size == qemu_real_host_page_size()) {
        return MAX(page_size, virtio_mem_thp_size());
    }
    return MAX(page_size, VIRTIO_MEM_MIN_BLOCK_SIZE);
}
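
/*
 * Worked illustration (added for clarity; assumes an x86-64 host with 4 KiB
 * base pages and a probed THP size of 2 MiB):
 *  - memory-backend-ram/-memfd (page size == host page size): 2 MiB blocks
 *  - 2 MiB hugetlb backend: MAX(2 MiB, 1 MiB) -> 2 MiB blocks
 *  - 1 GiB hugetlb backend: MAX(1 GiB, 1 MiB) -> 1 GiB blocks
 */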

#if defined(VIRTIO_MEM_HAS_LEGACY_GUESTS)
static bool virtio_mem_has_shared_zeropage(RAMBlock *rb)
{
    /*
     * We only have a guaranteed shared zeropage on ordinary MAP_PRIVATE
     * anonymous RAM. In any other case, reading unplugged *can* populate a
     * fresh page, consuming actual memory.
     */
    return !qemu_ram_is_shared(rb) && qemu_ram_get_fd(rb) < 0 &&
           qemu_ram_pagesize(rb) == qemu_real_host_page_size();
}
#endif /* VIRTIO_MEM_HAS_LEGACY_GUESTS */

/*
 * Size the usable region bigger than the requested size if possible. Esp.
 * Linux guests will only add (aligned) memory blocks in case they fully
 * fit into the usable region, but plug+online only a subset of the pages.
 * The memory block size corresponds mostly to the section size.
 *
 * This allows e.g., to add 20MB with a section size of 128MB on x86_64, and
 * a section size of 512MB on arm64 (as long as the start address is properly
 * aligned, similar to ordinary DIMMs).
 *
 * We can change this at any time and maybe even make it configurable if
 * necessary (as the section size can change). But it's more likely that the
 * section size will rather get smaller and not bigger over time.
 */
#if defined(TARGET_X86_64) || defined(TARGET_I386)
#define VIRTIO_MEM_USABLE_EXTENT (2 * (128 * MiB))
#elif defined(TARGET_ARM)
#define VIRTIO_MEM_USABLE_EXTENT (2 * (512 * MiB))
#else
#error VIRTIO_MEM_USABLE_EXTENT not defined
#endif
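
/*
 * Rough sketch only (not taken from the QEMU code): how a usable region size
 * could be derived from the requested size plus the extent above, clamped to
 * the size of the backing memory region.
 */
static uint64_t virtio_mem_example_usable_size(uint64_t region_size,
                                               uint64_t requested_size)
{
    return MIN(region_size, requested_size + VIRTIO_MEM_USABLE_EXTENT);
}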

static bool virtio_mem_is_busy(void)
{
    /*
     * Postcopy cannot handle concurrent discards and we don't want to migrate
     * pages on-demand with stale content when plugging new blocks.
     *
     * For precopy, we don't want unplugged blocks in our migration stream, and
     * when plugging new blocks, the page content might differ between source
     * and destination (observable by the guest when not initializing pages
     * after plugging them) until we're running on the destination (as we didn't
     * migrate these blocks when they were unplugged).
     */
    return migration_in_incoming_postcopy() || !migration_is_idle();
}
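
/*
 * Hypothetical example (not part of the QEMU source): how a plug/unplug
 * request handler might gate on virtio_mem_is_busy(). VIRTIO_MEM_RESP_ACK and
 * VIRTIO_MEM_RESP_BUSY come from the standard virtio-mem headers.
 */
static uint16_t virtio_mem_example_handle_request(VirtIOMEM *vmem)
{
    if (virtio_mem_is_busy()) {
        return VIRTIO_MEM_RESP_BUSY;
    }
    /* ... validate the range against the usable region, then (un)plug ... */
    return VIRTIO_MEM_RESP_ACK;
}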

typedef int (*virtio_mem_range_cb)(const VirtIOMEM *vmem, void *arg,
                                   uint64_t offset, uint64_t size);

static int virtio_mem_for_each_unplugged_range(const VirtIOMEM *vmem, void *arg,
                                               virtio_mem_range_cb cb)
{
    unsigned long first_zero_bit, last_zero_bit;
    uint64_t offset, size;
    int ret = 0;

    first_zero_bit = find_first_zero_bit(vmem->bitmap, vmem->bitmap_size);
    while (first_zero_bit < vmem->bitmap_size) {
        offset = first_zero_bit * vmem->block_size;
        last_zero_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                      first_zero_bit + 1) - 1;
        size = (last_zero_bit - first_zero_bit + 1) * vmem->block_size;

        ret = cb(vmem, arg, offset, size);
        if (ret) {
            break;
        }
        first_zero_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                            last_zero_bit + 2);
    }
    return ret;
}
virtio-mem: Proper support for preallocation with migration
Ordinary memory preallocation runs when QEMU starts up and creates the
memory backends, before processing the incoming migration stream. With
virtio-mem, we don't know which memory blocks to preallocate until migration
has started. Now that we migrate the virtio-mem bitmap early, before any RAM
content, we can safely preallocate memory for all plugged memory blocks
before RAM migration starts.
This is especially relevant for the following cases:
(1) User errors
With hugetlb/files, if we don't have sufficient backend memory available on
the migration destination, we'll crash QEMU (SIGBUS) during RAM migration
when running out of backend memory. Preallocating memory before actual
RAM migration allows for failing gracefully and informing the user about
the setup problem.
(2) Excluded memory ranges during migration
For example, virtio-balloon free page hinting will exclude some pages
from getting migrated. In that case, we won't crash during RAM
migration, but later, when running the VM on the destination, which is
bad.
To fix this for new QEMU machines that migrate the bitmap early,
preallocate the memory early, before any RAM migration. Warn with old
QEMU machines.
Getting postcopy right is a bit tricky, but we essentially now implement the
same (problematic) logic as ordinary preallocation: preallocate memory early
and discard it again before postcopy actually starts. With ordinary
preallocation, discarding of RAM happens when postcopy is advised. As the
state (bitmap) is loaded after postcopy was advised but before postcopy
starts listening, we have to discard the memory we just preallocated again
ourselves.
Note that nothing (not even hugetlb reservations) guarantees for postcopy
that backend memory (especially, hugetlb pages) is still free after it was
freed once while discarding RAM. Still, allocating that memory at least
once helps catch some basic setup problems.
Before this change, trying to restore a VM when insufficient hugetlb
pages are available results in the process crashing due to a "Bus error"
(SIGBUS). With this change, QEMU fails gracefully:
qemu-system-x86_64: qemu_prealloc_mem: preallocating memory failed: Bad address
qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-mem-device-early'
qemu-system-x86_64: load of migration failed: Cannot allocate memory
And we can even introspect the early migration data, including the
bitmap:
$ ./scripts/analyze-migration.py -f STATEFILE
{
"ram (2)": {
"section sizes": {
"0000:00:03.0/mem0": "0x0000000780000000",
"0000:00:04.0/mem1": "0x0000000780000000",
"pc.ram": "0x0000000100000000",
"/rom@etc/acpi/tables": "0x0000000000020000",
"pc.bios": "0x0000000000040000",
"0000:00:02.0/e1000.rom": "0x0000000000040000",
"pc.rom": "0x0000000000020000",
"/rom@etc/table-loader": "0x0000000000001000",
"/rom@etc/acpi/rsdp": "0x0000000000001000"
}
},
"0000:00:03.0/virtio-mem-device-early (51)": {
"tmp": "00 00 00 01 40 00 00 00 00 00 00 07 80 00 00 00 00 00 00 00 00 20 00 00 00 00 00 00",
"size": "0x0000000040000000",
"bitmap": "ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff [...]
},
"0000:00:04.0/virtio-mem-device-early (53)": {
"tmp": "00 00 00 08 c0 00 00 00 00 00 00 07 80 00 00 00 00 00 00 00 00 20 00 00 00 00 00 00",
"size": "0x00000001fa400000",
"bitmap": "ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff [...]
},
[...]
Reported-by: Jing Qi <jinqi@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
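As a rough sketch only (hypothetical helper names, not necessarily the exact
functions used by the patch), the early preallocation can be expressed with
the plugged-range iterator defined just below:

/*
 * Illustrative sketch: preallocate every currently plugged range once the
 * migrated bitmap is available, failing the load gracefully if backend
 * memory is missing.
 */
static int virtio_mem_prealloc_range_cb(const VirtIOMEM *vmem, void *arg,
                                        uint64_t offset, uint64_t size)
{
    void *area = memory_region_get_ram_ptr(&vmem->memdev->mr) + offset;
    int fd = memory_region_get_fd(&vmem->memdev->mr);
    Error *local_err = NULL;

    qemu_prealloc_mem(fd, area, size, 1, NULL, &local_err);
    if (local_err) {
        error_report_err(local_err);
        return -ENOMEM;
    }
    return 0;
}

static int virtio_mem_prealloc_all_plugged(VirtIOMEM *vmem)
{
    /* Walk all plugged (bit == 1) ranges and preallocate their backing. */
    return virtio_mem_for_each_plugged_range(vmem, NULL,
                                             virtio_mem_prealloc_range_cb);
}
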
static int virtio_mem_for_each_plugged_range(const VirtIOMEM *vmem, void *arg,
                                             virtio_mem_range_cb cb)
{
    unsigned long first_bit, last_bit;
    uint64_t offset, size;
    int ret = 0;

    first_bit = find_first_bit(vmem->bitmap, vmem->bitmap_size);
    while (first_bit < vmem->bitmap_size) {
        offset = first_bit * vmem->block_size;
        last_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                      first_bit + 1) - 1;
        size = (last_bit - first_bit + 1) * vmem->block_size;

        ret = cb(vmem, arg, offset, size);
        if (ret) {
            break;
        }
        first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                  last_bit + 2);
    }
    return ret;
}

/*
 * Adjust the memory section to cover the intersection with the given range.
 *
 * Returns false if the intersection is empty, otherwise returns true.
 */
static bool virtio_mem_intersect_memory_section(MemoryRegionSection *s,
                                                uint64_t offset, uint64_t size)
{
    uint64_t start = MAX(s->offset_within_region, offset);
    uint64_t end = MIN(s->offset_within_region + int128_get64(s->size),
                       offset + size);

    if (end <= start) {
        return false;
    }

    s->offset_within_address_space += start - s->offset_within_region;
    s->offset_within_region = start;
    s->size = int128_make64(end - start);
    return true;
}

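For example (illustrative numbers only): a listener section with
offset_within_region 0x0 and a size of 1 GiB, intersected with a range at
offset 0x200000 of size 0x400000, is adjusted to offset_within_region
0x200000 and size 0x400000, while its offset_within_address_space is advanced
by the same 0x200000.
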
typedef int (*virtio_mem_section_cb)(MemoryRegionSection *s, void *arg);

static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
                                               MemoryRegionSection *s,
                                               void *arg,
                                               virtio_mem_section_cb cb)
{
    unsigned long first_bit, last_bit;
    uint64_t offset, size;
    int ret = 0;

    first_bit = s->offset_within_region / vmem->block_size;
    first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
    while (first_bit < vmem->bitmap_size) {
        MemoryRegionSection tmp = *s;

        offset = first_bit * vmem->block_size;
        last_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                      first_bit + 1) - 1;
        size = (last_bit - first_bit + 1) * vmem->block_size;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            break;
        }
        ret = cb(&tmp, arg);
        if (ret) {
            break;
        }
        first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                  last_bit + 2);
    }
    return ret;
}

static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
                                                 MemoryRegionSection *s,
                                                 void *arg,
                                                 virtio_mem_section_cb cb)
{
    unsigned long first_bit, last_bit;
    uint64_t offset, size;
    int ret = 0;

    first_bit = s->offset_within_region / vmem->block_size;
    first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
    while (first_bit < vmem->bitmap_size) {
        MemoryRegionSection tmp = *s;

        offset = first_bit * vmem->block_size;
        last_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                 first_bit + 1) - 1;
        size = (last_bit - first_bit + 1) * vmem->block_size;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            break;
        }
        ret = cb(&tmp, arg);
        if (ret) {
            break;
        }
        first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                       last_bit + 2);
    }
    return ret;
}

static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg)
{
    RamDiscardListener *rdl = arg;

    return rdl->notify_populate(rdl, s);
}

static int virtio_mem_notify_discard_cb(MemoryRegionSection *s, void *arg)
{
    RamDiscardListener *rdl = arg;

    rdl->notify_discard(rdl, s);
    return 0;
}

static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
                                     uint64_t size)
{
    RamDiscardListener *rdl;

    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        MemoryRegionSection tmp = *rdl->section;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            continue;
        }
        rdl->notify_discard(rdl, &tmp);
    }
}

static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
                                  uint64_t size)
{
    RamDiscardListener *rdl, *rdl2;
    int ret = 0;

    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        MemoryRegionSection tmp = *rdl->section;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            continue;
        }
        ret = rdl->notify_populate(rdl, &tmp);
        if (ret) {
            break;
        }
    }

    if (ret) {
        /* Notify all already-notified listeners. */
        QLIST_FOREACH(rdl2, &vmem->rdl_list, next) {
            MemoryRegionSection tmp = *rdl2->section;

            if (rdl2 == rdl) {
                break;
            }
            if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
                continue;
            }
            rdl2->notify_discard(rdl2, &tmp);
        }
    }
    return ret;
}

static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
{
    RamDiscardListener *rdl;

    if (!vmem->size) {
        return;
    }

    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        if (rdl->double_discard_supported) {
            rdl->notify_discard(rdl, rdl->section);
        } else {
            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                                virtio_mem_notify_discard_cb);
        }
    }
}

static bool virtio_mem_is_range_plugged(const VirtIOMEM *vmem,
                                        uint64_t start_gpa, uint64_t size)
{
    const unsigned long first_bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long last_bit = first_bit + (size / vmem->block_size) - 1;
    unsigned long found_bit;

    /* We fake a shorter bitmap to avoid searching too far. */
    found_bit = find_next_zero_bit(vmem->bitmap, last_bit + 1, first_bit);
    return found_bit > last_bit;
}

static bool virtio_mem_is_range_unplugged(const VirtIOMEM *vmem,
                                          uint64_t start_gpa, uint64_t size)
{
    const unsigned long first_bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long last_bit = first_bit + (size / vmem->block_size) - 1;
    unsigned long found_bit;

    /* We fake a shorter bitmap to avoid searching too far. */
    found_bit = find_next_bit(vmem->bitmap, last_bit + 1, first_bit);
    return found_bit > last_bit;
}

static void virtio_mem_set_range_plugged(VirtIOMEM *vmem, uint64_t start_gpa,
                                         uint64_t size)
{
    const unsigned long bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long nbits = size / vmem->block_size;

    bitmap_set(vmem->bitmap, bit, nbits);
}

static void virtio_mem_set_range_unplugged(VirtIOMEM *vmem, uint64_t start_gpa,
                                           uint64_t size)
{
    const unsigned long bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long nbits = size / vmem->block_size;

    bitmap_clear(vmem->bitmap, bit, nbits);
}

static void virtio_mem_send_response(VirtIOMEM *vmem, VirtQueueElement *elem,
                                     struct virtio_mem_resp *resp)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(vmem);
    VirtQueue *vq = vmem->vq;

    trace_virtio_mem_send_response(le16_to_cpu(resp->type));
    iov_from_buf(elem->in_sg, elem->in_num, 0, resp, sizeof(*resp));

    virtqueue_push(vq, elem, sizeof(*resp));
    virtio_notify(vdev, vq);
}

static void virtio_mem_send_response_simple(VirtIOMEM *vmem,
                                            VirtQueueElement *elem,
                                            uint16_t type)
{
    struct virtio_mem_resp resp = {
        .type = cpu_to_le16(type),
    };

    virtio_mem_send_response(vmem, elem, &resp);
}

static bool virtio_mem_valid_range(const VirtIOMEM *vmem, uint64_t gpa,
                                   uint64_t size)
{
    if (!QEMU_IS_ALIGNED(gpa, vmem->block_size)) {
        return false;
    }
    if (gpa + size < gpa || !size) {
        return false;
    }
    if (gpa < vmem->addr || gpa >= vmem->addr + vmem->usable_region_size) {
        return false;
    }
    if (gpa + size > vmem->addr + vmem->usable_region_size) {
        return false;
    }
    return true;
}

virtio-mem: Support "prealloc=on" option
For scarce memory resources, such as hugetlb, we want to be able to
preallocate such memory resources in order to not crash later on access. On
simple user errors we could otherwise easily run out of memory resources
and crash the VM -- pretty much undesired.
For ordinary memory devices, such as DIMMs, we preallocate memory via the
memory backend for such use cases; however, with virtio-mem we're dealing
with sparse memory backends; preallocating the whole memory backend
destroys the whole purpose of virtio-mem.
Instead, we want to preallocate memory when actually exposing memory to the
VM dynamically, and fail plugging memory gracefully + warn the user in case
preallocation fails.
A common use case for hugetlb will be using "reserve=off,prealloc=off" for
the memory backend and "prealloc=on" for the virtio-mem device, as shown
below. This way, no huge pages will be reserved for the process, but we can
recover if there are no actual huge pages when plugging memory. Libvirt is
already prepared for this.
Note that preallocation cannot protect from the OOM killer -- which
holds true for any kind of preallocation in QEMU. It's primarily useful
only for scarce memory resources such as hugetlb, or shared file-backed
memory. It's of little use for ordinary anonymous memory that can be
swapped, KSM merged, ... but we won't forbid it.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20211217134611.31172-9-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
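A possible command line for that hugetlb use case (backend path and sizes are
illustrative only, not taken from the patch):
$ qemu-system-x86_64 -m 4G,maxmem=12G \
[...]
-object memory-backend-file,id=mem0,mem-path=/dev/hugepages,size=8G,reserve=off,prealloc=off \
-device virtio-mem-pci,id=vm0,memdev=mem0,requested-size=0,prealloc=on
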
static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
                                      uint64_t size, bool plug)
{
    const uint64_t offset = start_gpa - vmem->addr;
    RAMBlock *rb = vmem->memdev->mr.ram_block;
    int ret = 0;

    if (virtio_mem_is_busy()) {
        return -EBUSY;
    }

    if (!plug) {
        if (ram_block_discard_range(rb, offset, size)) {
            return -EBUSY;
        }
        virtio_mem_notify_unplug(vmem, offset, size);
        virtio_mem_set_range_unplugged(vmem, start_gpa, size);
        return 0;
    }

    if (vmem->prealloc) {
        void *area = memory_region_get_ram_ptr(&vmem->memdev->mr) + offset;
        int fd = memory_region_get_fd(&vmem->memdev->mr);
        Error *local_err = NULL;

        qemu_prealloc_mem(fd, area, size, 1, NULL, &local_err);
        if (local_err) {
            static bool warned;

            /*
             * Warn only once, we don't want to fill the log with these
             * warnings.
             */
            if (!warned) {
                warn_report_err(local_err);
                warned = true;
            } else {
                error_free(local_err);
            }
            ret = -EBUSY;
        }
    }

    if (!ret) {
        ret = virtio_mem_notify_plug(vmem, offset, size);
    }
    if (ret) {
        /* Could be preallocation or a notifier populated memory. */
        ram_block_discard_range(vmem->memdev->mr.ram_block, offset, size);
        return -EBUSY;
    }

    virtio_mem_set_range_plugged(vmem, start_gpa, size);
    return 0;
}
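/*
 * Handle a guest request to plug or unplug a contiguous range of blocks.
 * The affected size is nb_blocks * block_size; with a 2 MiB block size, a
 * request for 256 blocks covers 512 MiB. The device answers ERROR for an
 * invalid range or for blocks that are not all in the opposite state, NACK
 * when a plug would exceed the requested size, BUSY when changing the block
 * state fails, and ACK on success.
 */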
static int virtio_mem_state_change_request(VirtIOMEM *vmem, uint64_t gpa,
                                           uint16_t nb_blocks, bool plug)
{
    const uint64_t size = nb_blocks * vmem->block_size;
    int ret;

    if (!virtio_mem_valid_range(vmem, gpa, size)) {
        return VIRTIO_MEM_RESP_ERROR;
    }

    if (plug && (vmem->size + size > vmem->requested_size)) {
        return VIRTIO_MEM_RESP_NACK;
    }

    /* test if really all blocks are in the opposite state */
    if ((plug && !virtio_mem_is_range_unplugged(vmem, gpa, size)) ||
        (!plug && !virtio_mem_is_range_plugged(vmem, gpa, size))) {
        return VIRTIO_MEM_RESP_ERROR;
    }

    ret = virtio_mem_set_block_state(vmem, gpa, size, plug);
    if (ret) {
        return VIRTIO_MEM_RESP_BUSY;
    }

    if (plug) {
        vmem->size += size;
    } else {
        vmem->size -= size;
    }
    notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
    return VIRTIO_MEM_RESP_ACK;
}
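/*
 * Process a guest plug request: the request carries the start address and
 * the number of blocks in little-endian format, and is answered with a
 * simple response code from the state-change handler above.
 */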
static void virtio_mem_plug_request(VirtIOMEM *vmem, VirtQueueElement *elem,
                                    struct virtio_mem_req *req)
{
    const uint64_t gpa = le64_to_cpu(req->u.plug.addr);
    const uint16_t nb_blocks = le16_to_cpu(req->u.plug.nb_blocks);
    uint16_t type;

    trace_virtio_mem_plug_request(gpa, nb_blocks);
    type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, true);
    virtio_mem_send_response_simple(vmem, elem, type);
}
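/*
 * Process a guest unplug request: identical to the plug path above, except
 * that the state change is requested with plug == false.
 */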
static void virtio_mem_unplug_request(VirtIOMEM *vmem, VirtQueueElement *elem,
                                      struct virtio_mem_req *req)
{
    const uint64_t gpa = le64_to_cpu(req->u.unplug.addr);
    const uint16_t nb_blocks = le16_to_cpu(req->u.unplug.nb_blocks);
    uint16_t type;

    trace_virtio_mem_unplug_request(gpa, nb_blocks);
    type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, false);
    virtio_mem_send_response_simple(vmem, elem, type);
}
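/*
 * Recalculate the usable region: the requested size plus some extra room
 * (VIRTIO_MEM_USABLE_EXTENT), capped at the size of the memory backend and
 * aligned up to the block size. Shrinking is only performed when the caller
 * explicitly allows it via can_shrink.
 */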
static void virtio_mem_resize_usable_region(VirtIOMEM *vmem,
                                            uint64_t requested_size,
                                            bool can_shrink)
{
    uint64_t newsize = MIN(memory_region_size(&vmem->memdev->mr),
                           requested_size + VIRTIO_MEM_USABLE_EXTENT);

    /* The usable region size always has to be multiples of the block size. */
    newsize = QEMU_ALIGN_UP(newsize, vmem->block_size);
    if (!requested_size) {
        newsize = 0;
    }

    if (newsize < vmem->usable_region_size && !can_shrink) {
        return;
    }

    trace_virtio_mem_resized_usable_region(vmem->usable_region_size, newsize);
    vmem->usable_region_size = newsize;
}
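/*
 * Unplug all memory: discard the whole RAM block backing the device, clear
 * the plugged-block bitmap, reset the size to 0 and notify size-change
 * listeners, then let the usable region shrink again. Fails with -EBUSY if
 * the device is busy or the discard fails.
 */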
static int virtio_mem_unplug_all(VirtIOMEM *vmem)
{
    RAMBlock *rb = vmem->memdev->mr.ram_block;

    if (vmem->size) {
        if (virtio_mem_is_busy()) {
            return -EBUSY;
        }

        if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) {
            return -EBUSY;
        }
        virtio_mem_notify_unplug_all(vmem);

        bitmap_clear(vmem->bitmap, 0, vmem->bitmap_size);
        vmem->size = 0;
        notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
    }

    trace_virtio_mem_unplugged_all();
    virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
    return 0;
}
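/*
 * Process a guest "unplug all" request and acknowledge it, or report BUSY
 * when the memory cannot be discarded right now.
 */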
static void virtio_mem_unplug_all_request(VirtIOMEM *vmem,
                                          VirtQueueElement *elem)
{
    trace_virtio_mem_unplug_all_request();
    if (virtio_mem_unplug_all(vmem)) {
        virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_BUSY);
    } else {
        virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_ACK);
    }
}
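/*
 * Process a guest state query: report whether the given range of blocks is
 * completely plugged, completely unplugged, or mixed.
 */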
static void virtio_mem_state_request(VirtIOMEM *vmem, VirtQueueElement *elem,
                                     struct virtio_mem_req *req)
{
    const uint16_t nb_blocks = le16_to_cpu(req->u.state.nb_blocks);
    const uint64_t gpa = le64_to_cpu(req->u.state.addr);
    const uint64_t size = nb_blocks * vmem->block_size;
    struct virtio_mem_resp resp = {
        .type = cpu_to_le16(VIRTIO_MEM_RESP_ACK),
    };

    trace_virtio_mem_state_request(gpa, nb_blocks);
    if (!virtio_mem_valid_range(vmem, gpa, size)) {
        virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_ERROR);
        return;
    }

    if (virtio_mem_is_range_plugged(vmem, gpa, size)) {
        resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_PLUGGED);
    } else if (virtio_mem_is_range_unplugged(vmem, gpa, size)) {
        resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_UNPLUGGED);
    } else {
        resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_MIXED);
    }

    trace_virtio_mem_state_response(le16_to_cpu(resp.u.state.state));
    virtio_mem_send_response(vmem, elem, &resp);
}

static void virtio_mem_handle_request(VirtIODevice *vdev, VirtQueue *vq)
{
    const int len = sizeof(struct virtio_mem_req);
    VirtIOMEM *vmem = VIRTIO_MEM(vdev);
    VirtQueueElement *elem;
    struct virtio_mem_req req;
    uint16_t type;

    while (true) {
        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
        if (!elem) {
            return;
        }

        /* The whole request must fit into the driver-provided out buffers. */
        if (iov_to_buf(elem->out_sg, elem->out_num, 0, &req, len) < len) {
            virtio_error(vdev, "virtio-mem protocol violation: invalid request"
                         " size: %d", len);
            virtqueue_detach_element(vq, elem, 0);
            g_free(elem);
            return;
        }

        /* There must be enough room in the in buffers for the response. */
        if (iov_size(elem->in_sg, elem->in_num) <
            sizeof(struct virtio_mem_resp)) {
            virtio_error(vdev, "virtio-mem protocol violation: not enough space"
                         " for response: %zu",
                         iov_size(elem->in_sg, elem->in_num));
            virtqueue_detach_element(vq, elem, 0);
            g_free(elem);
            return;
        }

        type = le16_to_cpu(req.type);
        switch (type) {
        case VIRTIO_MEM_REQ_PLUG:
            virtio_mem_plug_request(vmem, elem, &req);
            break;
        case VIRTIO_MEM_REQ_UNPLUG:
            virtio_mem_unplug_request(vmem, elem, &req);
            break;
        case VIRTIO_MEM_REQ_UNPLUG_ALL:
            virtio_mem_unplug_all_request(vmem, elem);
            break;
        case VIRTIO_MEM_REQ_STATE:
            virtio_mem_state_request(vmem, elem, &req);
            break;
        default:
            virtio_error(vdev, "virtio-mem protocol violation: unknown request"
                         " type: %d", type);
            virtqueue_detach_element(vq, elem, 0);
            g_free(elem);
            return;
        }

        g_free(elem);
    }
}
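
/*
 * For orientation: the request copied into "req" above follows the virtio-mem
 * device specification (see the Linux UAPI header linux/virtio_mem.h).
 * Roughly, and only as a sketch rather than a verbatim copy of the header:
 *
 *   struct virtio_mem_req {
 *       le16 type;
 *       le16 padding[3];
 *       union {
 *           struct { le64 addr; le16 nb_blocks; le16 padding[3]; } plug;
 *           struct { le64 addr; le16 nb_blocks; le16 padding[3]; } unplug;
 *           struct { le64 addr; le16 nb_blocks; le16 padding[3]; } state;
 *       } u;
 *   };
 *
 * VIRTIO_MEM_REQ_UNPLUG_ALL carries no payload beyond the type.
 */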

static void virtio_mem_get_config(VirtIODevice *vdev, uint8_t *config_data)
{
    VirtIOMEM *vmem = VIRTIO_MEM(vdev);
    struct virtio_mem_config *config = (void *) config_data;

    config->block_size = cpu_to_le64(vmem->block_size);
    config->node_id = cpu_to_le16(vmem->node);
    config->requested_size = cpu_to_le64(vmem->requested_size);
    config->plugged_size = cpu_to_le64(vmem->size);
    config->addr = cpu_to_le64(vmem->addr);
    config->region_size = cpu_to_le64(memory_region_size(&vmem->memdev->mr));
    config->usable_region_size = cpu_to_le64(vmem->usable_region_size);
}

virtio-mem: Support VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE
With VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE, we signal the VM that reading
unplugged memory is not supported. We have to fail feature negotiation
in case the guest does not support VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE.
First, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE is required to cleanly handle
memory backends (or architectures) that lack support for the shared
zeropage in the hypervisor. Without the shared zeropage, even reading an
unpopulated virtual memory location can populate real memory and
consequently consume memory in the hypervisor. We have a guaranteed shared
zeropage only on MAP_PRIVATE anonymous memory.
Second, we want VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE to be the default
long-term, as even populating the shared zeropage can be problematic: for
example, without THP support (possible) or without support for the shared
huge zeropage with THP (unlikely), the PTE page tables holding the shared
zeropage entries can consume quite some memory that cannot be reclaimed
easily.
Third, there are other optimizations and features (e.g., protection of
unplugged memory, reducing the total memory slot size and bitmap sizes)
that will require VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE.
We really only support x86 targets with virtio-mem for now (and the Linux
driver similarly only supports x86), but that might change soon, so prepare
for different targets already.
Add a new "unplugged-inaccessible" tristate property for x86 targets:
- "off" will keep VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE unset and legacy
guests working.
- "on" will set VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE and stop legacy guests
from using the device.
- "auto" selects the default based on support for the shared zeropage.
Warn in case the property is set to "off" and we don't have support for the
shared zeropage.
For existing compat machines, the property will default to "off", to
not change the behavior but eventually warn about a problematic setup.
Short-term, we'll set the property default to "auto" for new QEMU machines.
Mid-term, we'll set the property default to "on" for new QEMU machines.
Long-term, we'll deprecate the parameter and disallow legacy
guests completely.
The property has to match on the migration source and destination. "auto"
will result in the same VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE setting as long
as the qemu command line (esp. the memdev) matches -- so "auto" is good
enough for migration purposes and the parameter doesn't have to be migrated
explicitly.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20211217134039.29670-3-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
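
The property can be set per device on the command line. A minimal sketch
(illustrative only; apart from "unplugged-inaccessible" the option spelling
is just a typical virtio-mem-pci invocation and is not taken from this patch):
$ qemu-system-x86_64 [...] \
-object memory-backend-ram,id=mem0,size=8G \
-device virtio-mem-pci,id=vm0,memdev=mem0,node=0,requested-size=0M,unplugged-inaccessible=auto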

static uint64_t virtio_mem_get_features(VirtIODevice *vdev, uint64_t features,
                                        Error **errp)
{
    MachineState *ms = MACHINE(qdev_get_machine());
    VirtIOMEM *vmem = VIRTIO_MEM(vdev);

    if (ms->numa_state) {
#if defined(CONFIG_ACPI)
        virtio_add_feature(&features, VIRTIO_MEM_F_ACPI_PXM);
#endif
    }
    /* "unplugged-inaccessible" was resolved to on/off during realize. */
    assert(vmem->unplugged_inaccessible != ON_OFF_AUTO_AUTO);
    if (vmem->unplugged_inaccessible == ON_OFF_AUTO_ON) {
        virtio_add_feature(&features, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE);
    }

    return features;
}

/*
 * Fail feature negotiation if we offer VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE
 * but the guest (e.g., a legacy driver) did not accept it.
 */
static int virtio_mem_validate_features(VirtIODevice *vdev)
{
    if (virtio_host_has_feature(vdev, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE) &&
        !virtio_vdev_has_feature(vdev, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE)) {
        return -EFAULT;
    }
    return 0;
}

static void virtio_mem_system_reset(void *opaque)
{
    VirtIOMEM *vmem = VIRTIO_MEM(opaque);

    /*
     * During usual resets, we will unplug all memory and shrink the usable
     * region size. This is, however, not possible in all scenarios. Then,
     * the guest has to deal with this manually (VIRTIO_MEM_REQ_UNPLUG_ALL).
     */
    virtio_mem_unplug_all(vmem);
}
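
/*
 * Assumption, noted for context: virtio_mem_system_reset() is hooked into the
 * machine reset path elsewhere in virtio-mem (typically registered via
 * qemu_register_reset() during device realize), so a system reset discards
 * all previously plugged blocks again.
 */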

static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
{
    MachineState *ms = MACHINE(qdev_get_machine());
    int nb_numa_nodes = ms->numa_state ? ms->numa_state->num_nodes : 0;
    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
    VirtIOMEM *vmem = VIRTIO_MEM(dev);
    uint64_t page_size;
    RAMBlock *rb;
    int ret;

    if (!vmem->memdev) {
        error_setg(errp, "'%s' property is not set", VIRTIO_MEM_MEMDEV_PROP);
        return;
    } else if (host_memory_backend_is_mapped(vmem->memdev)) {
        error_setg(errp, "'%s' property specifies a busy memdev: %s",
                   VIRTIO_MEM_MEMDEV_PROP,
                   object_get_canonical_path_component(OBJECT(vmem->memdev)));
        return;
    } else if (!memory_region_is_ram(&vmem->memdev->mr) ||
               memory_region_is_rom(&vmem->memdev->mr) ||
               !vmem->memdev->mr.ram_block) {
        error_setg(errp, "'%s' property specifies an unsupported memdev",
                   VIRTIO_MEM_MEMDEV_PROP);
        return;
    } else if (vmem->memdev->prealloc) {
        error_setg(errp, "'%s' property specifies a memdev with preallocation"
                   " enabled: %s. Instead, specify 'prealloc=on' for the"
                   " virtio-mem device. ", VIRTIO_MEM_MEMDEV_PROP,
                   object_get_canonical_path_component(OBJECT(vmem->memdev)));
        return;
    }

    if ((nb_numa_nodes && vmem->node >= nb_numa_nodes) ||
        (!nb_numa_nodes && vmem->node)) {
        error_setg(errp, "'%s' property has value '%" PRIu32 "', which exceeds"
                   " the number of numa nodes: %d", VIRTIO_MEM_NODE_PROP,
                   vmem->node, nb_numa_nodes ? nb_numa_nodes : 1);
        return;
    }

    /* Discarding unplugged memory does not go together with locking all memory. */
    if (enable_mlock) {
        error_setg(errp, "Incompatible with mlock");
        return;
    }

    rb = vmem->memdev->mr.ram_block;
    page_size = qemu_ram_pagesize(rb);

#if defined(VIRTIO_MEM_HAS_LEGACY_GUESTS)
    /* Resolve "auto" based on whether the memdev supports the shared zeropage. */
    switch (vmem->unplugged_inaccessible) {
    case ON_OFF_AUTO_AUTO:
        if (virtio_mem_has_shared_zeropage(rb)) {
            vmem->unplugged_inaccessible = ON_OFF_AUTO_OFF;
        } else {
            vmem->unplugged_inaccessible = ON_OFF_AUTO_ON;
        }
        break;
    case ON_OFF_AUTO_OFF:
        if (!virtio_mem_has_shared_zeropage(rb)) {
            warn_report("'%s' property set to 'off' with a memdev that does"
                        " not support the shared zeropage.",
                        VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP);
        }
        break;
    default:
        break;
    }
#else  /* VIRTIO_MEM_HAS_LEGACY_GUESTS */
    vmem->unplugged_inaccessible = ON_OFF_AUTO_ON;
#endif /* VIRTIO_MEM_HAS_LEGACY_GUESTS */

    /*
     * If the block size wasn't configured by the user, use a sane default. This
     * allows using hugetlbfs backends of any page size without manual
     * intervention.
     */
    if (!vmem->block_size) {
        vmem->block_size = virtio_mem_default_block_size(rb);
    }
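
    /*
     * Illustration (a sketch, not part of this function): with a hugetlbfs
     * backend such as
     *   -object memory-backend-file,id=mem0,size=8G,mem-path=/dev/hugepages,share=on
     *   -device virtio-mem-pci,id=vm0,memdev=mem0,node=0,requested-size=0
     * the default picked here follows the backend's huge page size, so no
     * manual "block-size" setting is needed for such setups.
     */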

    if (vmem->block_size < page_size) {
        error_setg(errp, "'%s' property has to be at least the page size (0x%"
                   PRIx64 ")", VIRTIO_MEM_BLOCK_SIZE_PROP, page_size);
        return;
    } else if (vmem->block_size < virtio_mem_default_block_size(rb)) {
        warn_report("'%s' property is smaller than the default block size (%"
                    PRIx64 " MiB)", VIRTIO_MEM_BLOCK_SIZE_PROP,
                    virtio_mem_default_block_size(rb) / MiB);
    }

    if (!QEMU_IS_ALIGNED(vmem->requested_size, vmem->block_size)) {
error_setg(errp, "'%s' property has to be multiples of '%s' (0x%" PRIx64
|
|
|
|
")", VIRTIO_MEM_REQUESTED_SIZE_PROP,
|
|
|
|
VIRTIO_MEM_BLOCK_SIZE_PROP, vmem->block_size);
|
|
|
|
return;
|
2020-10-08 11:30:24 +03:00
|
|
|
} else if (!QEMU_IS_ALIGNED(vmem->addr, vmem->block_size)) {
|
|
|
|
error_setg(errp, "'%s' property has to be multiples of '%s' (0x%" PRIx64
|
|
|
|
")", VIRTIO_MEM_ADDR_PROP, VIRTIO_MEM_BLOCK_SIZE_PROP,
|
|
|
|
vmem->block_size);
|
|
|
|
return;
|
    } else if (!QEMU_IS_ALIGNED(memory_region_size(&vmem->memdev->mr),
                                vmem->block_size)) {
        error_setg(errp, "'%s' property memdev size has to be multiples of"
                   " '%s' (0x%" PRIx64 ")", VIRTIO_MEM_MEMDEV_PROP,
                   VIRTIO_MEM_BLOCK_SIZE_PROP, vmem->block_size);
        return;
    }

    if (ram_block_coordinated_discard_require(true)) {
error_setg(errp, "Discarding RAM is disabled");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2023-07-06 10:56:09 +03:00
|
|
|
/*
|
|
|
|
* We don't know at this point whether shared RAM is migrated using
|
|
|
|
* QEMU or migrated using the file content. "x-ignore-shared" will be
|
|
|
|
* configured after realizing the device. So in case we have an
|
|
|
|
* incoming migration, simply always skip the discard step.
|
|
|
|
*
|
|
|
|
* Otherwise, make sure that we start with a clean slate: either the
|
|
|
|
* memory backend might get reused or the shared file might still have
|
|
|
|
* memory allocated.
|
|
|
|
*/
|
|
|
|
if (!runstate_check(RUN_STATE_INMIGRATE)) {
|
|
|
|
ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
|
|
|
|
if (ret) {
|
|
|
|
error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
|
|
|
|
ram_block_coordinated_discard_require(false);
|
|
|
|
return;
|
|
|
|
}
|
    }

    virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);

    vmem->bitmap_size = memory_region_size(&vmem->memdev->mr) /
                        vmem->block_size;
    vmem->bitmap = bitmap_new(vmem->bitmap_size);

    virtio_init(vdev, VIRTIO_ID_MEM, sizeof(struct virtio_mem_config));

    vmem->vq = virtio_add_queue(vdev, 128, virtio_mem_handle_request);

    host_memory_backend_set_mapped(vmem->memdev, true);
    vmstate_register_ram(&vmem->memdev->mr, DEVICE(vmem));
    if (vmem->early_migration) {
        vmstate_register(VMSTATE_IF(vmem), VMSTATE_INSTANCE_ID_ANY,
                         &vmstate_virtio_mem_device_early, vmem);
    }

    qemu_register_reset(virtio_mem_system_reset, vmem);

    /*
     * Set ourselves as RamDiscardManager before the plug handler maps the
     * memory region and exposes it via an address space.
     */
    memory_region_set_ram_discard_manager(&vmem->memdev->mr,
                                          RAM_DISCARD_MANAGER(vmem));
}
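
/*
 * Illustrative sketch (not part of virtio-mem.c): the property checks at the
 * start of the realize path above reduce to a handful of arithmetic rules.
 * The helper below is hypothetical; it only mirrors that arithmetic, assuming
 * QEMU_IS_ALIGNED(n, m) boils down to "(n % m) == 0".
 */
#include <stdbool.h>
#include <stdint.h>

static bool example_virtio_mem_config_ok(uint64_t block_size, uint64_t page_size,
                                         uint64_t requested_size, uint64_t addr,
                                         uint64_t memdev_size)
{
    /* block-size must cover at least one host page of the backing RAM block. */
    if (block_size == 0 || block_size < page_size) {
        return false;
    }
    /*
     * requested-size, the mapping address and the memdev size must all be
     * multiples of block-size; memdev_size / block_size is then the number of
     * bits needed for the plugged/unplugged bitmap allocated above.
     */
    return requested_size % block_size == 0 &&
           addr % block_size == 0 &&
           memdev_size % block_size == 0;
}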

static void virtio_mem_device_unrealize(DeviceState *dev)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
    VirtIOMEM *vmem = VIRTIO_MEM(dev);

    /*
     * The unplug handler unmapped the memory region, it cannot be
     * found via an address space anymore. Unset ourselves.
     */
    memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);

    qemu_unregister_reset(virtio_mem_system_reset, vmem);
    if (vmem->early_migration) {
        vmstate_unregister(VMSTATE_IF(vmem), &vmstate_virtio_mem_device_early,
                           vmem);
    }
    vmstate_unregister_ram(&vmem->memdev->mr, DEVICE(vmem));
    host_memory_backend_set_mapped(vmem->memdev, false);
    virtio_del_queue(vdev, 0);
    virtio_cleanup(vdev);
    g_free(vmem->bitmap);
    ram_block_coordinated_discard_require(false);
}

static int virtio_mem_discard_range_cb(const VirtIOMEM *vmem, void *arg,
                                       uint64_t offset, uint64_t size)
{
    RAMBlock *rb = vmem->memdev->mr.ram_block;

    return ram_block_discard_range(rb, offset, size) ? -EINVAL : 0;
}
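
/*
 * Illustrative sketch (not part of virtio-mem.c): ram_block_discard_range(),
 * which the callback above maps to -EINVAL on failure, is assumed to release
 * the backing pages of the range so that it reads back as zeros afterwards -
 * roughly madvise(MADV_DONTNEED) for private anonymous memory and
 * fallocate(FALLOC_FL_PUNCH_HOLE) for file-backed or shared memory. A
 * hypothetical, simplified stand-in for the anonymous case:
 */
#define _DEFAULT_SOURCE /* for madvise() on glibc */
#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>

static int example_discard_anon_range(void *base, size_t offset, size_t size)
{
    /*
     * After MADV_DONTNEED the kernel may reclaim the pages; later reads fault
     * in fresh zero-filled pages, which is the discard behaviour the unplug
     * path relies on.
     */
    return madvise((char *)base + offset, size, MADV_DONTNEED) ? -errno : 0;
}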

static int virtio_mem_restore_unplugged(VirtIOMEM *vmem)
{
    /* Make sure all memory is really discarded after migration. */
    return virtio_mem_for_each_unplugged_range(vmem, NULL,
                                               virtio_mem_discard_range_cb);
}
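
/*
 * Illustrative sketch (not part of virtio-mem.c):
 * virtio_mem_for_each_unplugged_range(), used above, is defined earlier in
 * the file and not shown here. Conceptually it walks the plugged/unplugged
 * bitmap, turns every maximal run of unplugged blocks into an (offset, size)
 * range in bytes, and stops at the first callback returning non-zero. The
 * hypothetical helper below illustrates that walk over a plain bool array
 * instead of the real bitmap (types from the <stdbool.h>/<stdint.h> headers
 * pulled in by the earlier sketch).
 */
typedef int (*example_range_cb)(void *arg, uint64_t offset, uint64_t size);

static int example_for_each_unplugged_range(const bool *plugged, uint64_t nblocks,
                                            uint64_t block_size, void *arg,
                                            example_range_cb cb)
{
    uint64_t first, next = 0;
    int ret;

    while (next < nblocks) {
        /* Skip over plugged blocks, then measure the run of unplugged ones. */
        while (next < nblocks && plugged[next]) {
            next++;
        }
        first = next;
        while (next < nblocks && !plugged[next]) {
            next++;
        }
        if (next > first) {
            ret = cb(arg, first * block_size, (next - first) * block_size);
            if (ret) {
                return ret;
            }
        }
    }
    return 0;
}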

static int virtio_mem_post_load(void *opaque, int version_id)
{
    VirtIOMEM *vmem = VIRTIO_MEM(opaque);
    RamDiscardListener *rdl;
    int ret;

    /*
     * We started out with all memory discarded and our memory region is mapped
     * into an address space. Replay, now that we updated the bitmap.
     */
    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                                  virtio_mem_notify_populate_cb);
        if (ret) {
            return ret;
        }
    }

    /*
     * If shared RAM is migrated using the file content and not using QEMU,
     * don't mess with preallocation and postcopy.
     */
    if (migrate_ram_is_ignored(vmem->memdev->mr.ram_block)) {
        return 0;
    }

    if (vmem->prealloc && !vmem->early_migration) {
        warn_report("Proper preallocation with migration requires a newer QEMU machine");
    }
    if (migration_in_incoming_postcopy()) {
        return 0;
    }

    return virtio_mem_restore_unplugged(vmem);
}
virtio-mem: Proper support for preallocation with migration
Ordinary memory preallocation runs when QEMU starts up and creates the
memory backends, before processing the incoming migration stream. With
virtio-mem, we don't know which memory blocks to preallocate before
migration has started. Now that we migrate the virtio-mem bitmap early,
before migrating any RAM content, we can safely preallocate memory for
all plugged memory blocks ahead of any RAM migration.
This is especially relevant for the following cases:
(1) User errors
With hugetlb/files, if we don't have sufficient backend memory available on
the migration destination, we'll crash QEMU (SIGBUS) during RAM migration
when running out of backend memory. Preallocating memory before actual
RAM migration allows for failing gracefully and informing the user about
the setup problem.
(2) Excluded memory ranges during migration
For example, virtio-balloon free page hinting will exclude some pages
from getting migrated. In that case, we won't crash during RAM
migration, but later, when running the VM on the destination, which is
bad.
To fix this for new QEMU machines that migrate the bitmap early,
preallocate the memory early, before any RAM migration. Warn with old
QEMU machines.
Getting postcopy right is a bit tricky, but we essentially now implement
the same (problematic) preallocation logic as ordinary preallocation:
preallocate memory early and discard it again before precopy starts. During
ordinary preallocation, discarding of RAM happens when postcopy is advised.
As the state (bitmap) is loaded after postcopy was advised but before
postcopy starts listening, we have to discard the memory we preallocated
immediately again ourselves.
Note that nothing (not even hugetlb reservations) guarantees for postcopy
that backend memory (especially, hugetlb pages) is still free after it
was freed once while discarding RAM. Still, allocating that memory at
least once helps catch some basic setup problems.
Before this change, trying to restore a VM when insufficient hugetlb
pages are around results in the process crashing due to a "Bus error"
(SIGBUS). With this change, QEMU fails gracefully:
qemu-system-x86_64: qemu_prealloc_mem: preallocating memory failed: Bad address
qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-mem-device-early'
qemu-system-x86_64: load of migration failed: Cannot allocate memory
And we can even introspect the early migration data, including the
bitmap:
$ ./scripts/analyze-migration.py -f STATEFILE
{
    "ram (2)": {
        "section sizes": {
            "0000:00:03.0/mem0": "0x0000000780000000",
            "0000:00:04.0/mem1": "0x0000000780000000",
            "pc.ram": "0x0000000100000000",
            "/rom@etc/acpi/tables": "0x0000000000020000",
            "pc.bios": "0x0000000000040000",
            "0000:00:02.0/e1000.rom": "0x0000000000040000",
            "pc.rom": "0x0000000000020000",
            "/rom@etc/table-loader": "0x0000000000001000",
            "/rom@etc/acpi/rsdp": "0x0000000000001000"
        }
    },
    "0000:00:03.0/virtio-mem-device-early (51)": {
        "tmp": "00 00 00 01 40 00 00 00 00 00 00 07 80 00 00 00 00 00 00 00 00 20 00 00 00 00 00 00",
        "size": "0x0000000040000000",
        "bitmap": "ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff [...]
    },
    "0000:00:04.0/virtio-mem-device-early (53)": {
        "tmp": "00 00 00 08 c0 00 00 00 00 00 00 07 80 00 00 00 00 00 00 00 00 20 00 00 00 00 00 00",
        "size": "0x00000001fa400000",
        "bitmap": "ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff [...]
    },
[...]
Reported-by: Jing Qi <jinqi@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
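/*
 * Descriptive note (added): preallocation callback, invoked below from
 * virtio_mem_post_load_early() via virtio_mem_for_each_plugged_range() for
 * every plugged range. Failing here aborts the incoming migration cleanly
 * instead of crashing with SIGBUS later during RAM migration.
 */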
static int virtio_mem_prealloc_range_cb(const VirtIOMEM *vmem, void *arg,
                                        uint64_t offset, uint64_t size)
{
    void *area = memory_region_get_ram_ptr(&vmem->memdev->mr) + offset;
    int fd = memory_region_get_fd(&vmem->memdev->mr);
    Error *local_err = NULL;

    qemu_prealloc_mem(fd, area, size, 1, NULL, &local_err);
    if (local_err) {
        error_report_err(local_err);
        return -ENOMEM;
    }
    return 0;
}

static int virtio_mem_post_load_early(void *opaque, int version_id)
{
    VirtIOMEM *vmem = VIRTIO_MEM(opaque);
    RAMBlock *rb = vmem->memdev->mr.ram_block;
    int ret;

    if (!vmem->prealloc) {
        return 0;
    }

    /*
     * If shared RAM is migrated using the file content and not using QEMU,
     * don't mess with preallocation and postcopy.
     */
    if (migrate_ram_is_ignored(rb)) {
        return 0;
    }
    /*
     * We restored the bitmap and verified that the basic properties
     * match on source and destination, so we can go ahead and preallocate
     * memory for all plugged memory blocks, before actual RAM migration starts
     * touching this memory.
     */
    ret = virtio_mem_for_each_plugged_range(vmem, NULL,
                                            virtio_mem_prealloc_range_cb);
    if (ret) {
        return ret;
    }

    /*
     * This is tricky: postcopy wants to start with a clean slate. On
     * POSTCOPY_INCOMING_ADVISE, postcopy code discards all (ordinarily
     * preallocated) RAM such that postcopy will work as expected later.
     *
     * However, we run after POSTCOPY_INCOMING_ADVISE -- but before actual
     * RAM migration. So let's discard all memory again. This looks like an
     * expensive NOP, but actually serves a purpose: we made sure that we
     * were able to allocate all required backend memory once. We cannot
     * guarantee that the backend memory we will free will remain free
     * until we need it during postcopy, but at least we can catch the
     * obvious setup issues this way.
     */
    if (migration_incoming_postcopy_advised()) {
        if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) {
            return -EBUSY;
        }
    }
    return 0;
}
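/*
 * Descriptive note (added): migration sanity checks. pre_save() snapshots the
 * immutable device properties into this temporary structure on the source;
 * post_load() compares the migrated values against the destination's
 * configuration and rejects the migration with -EINVAL on any mismatch.
 */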
typedef struct VirtIOMEMMigSanityChecks {
    VirtIOMEM *parent;
    uint64_t addr;
    uint64_t region_size;
    uint64_t block_size;
    uint32_t node;
} VirtIOMEMMigSanityChecks;

static int virtio_mem_mig_sanity_checks_pre_save(void *opaque)
{
    VirtIOMEMMigSanityChecks *tmp = opaque;
    VirtIOMEM *vmem = tmp->parent;

    tmp->addr = vmem->addr;
    tmp->region_size = memory_region_size(&vmem->memdev->mr);
    tmp->block_size = vmem->block_size;
    tmp->node = vmem->node;
    return 0;
}

static int virtio_mem_mig_sanity_checks_post_load(void *opaque, int version_id)
{
    VirtIOMEMMigSanityChecks *tmp = opaque;
    VirtIOMEM *vmem = tmp->parent;
    const uint64_t new_region_size = memory_region_size(&vmem->memdev->mr);

    if (tmp->addr != vmem->addr) {
        error_report("Property '%s' changed from 0x%" PRIx64 " to 0x%" PRIx64,
                     VIRTIO_MEM_ADDR_PROP, tmp->addr, vmem->addr);
        return -EINVAL;
    }
    /*
     * Note: Preparation for resizeable memory regions. The maximum size
     * of the memory region must not change during migration.
     */
    if (tmp->region_size != new_region_size) {
        error_report("Property '%s' size changed from 0x%" PRIx64 " to 0x%"
                     PRIx64, VIRTIO_MEM_MEMDEV_PROP, tmp->region_size,
                     new_region_size);
        return -EINVAL;
    }
    if (tmp->block_size != vmem->block_size) {
        error_report("Property '%s' changed from 0x%" PRIx64 " to 0x%" PRIx64,
                     VIRTIO_MEM_BLOCK_SIZE_PROP, tmp->block_size,
                     vmem->block_size);
        return -EINVAL;
    }
    if (tmp->node != vmem->node) {
        error_report("Property '%s' changed from %" PRIu32 " to %" PRIu32,
                     VIRTIO_MEM_NODE_PROP, tmp->node, vmem->node);
        return -EINVAL;
    }
    return 0;
}

static const VMStateDescription vmstate_virtio_mem_sanity_checks = {
    .name = "virtio-mem-device/sanity-checks",
    .pre_save = virtio_mem_mig_sanity_checks_pre_save,
    .post_load = virtio_mem_mig_sanity_checks_post_load,
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(addr, VirtIOMEMMigSanityChecks),
        VMSTATE_UINT64(region_size, VirtIOMEMMigSanityChecks),
        VMSTATE_UINT64(block_size, VirtIOMEMMigSanityChecks),
        VMSTATE_UINT32(node, VirtIOMEMMigSanityChecks),
        VMSTATE_END_OF_LIST(),
    },
};

static bool virtio_mem_vmstate_field_exists(void *opaque, int version_id)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(opaque);

    /* With early migration, these fields were already migrated. */
    return !vmem->early_migration;
}
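/*
 * Descriptive note (added): main device vmstate. With machines that use early
 * migration, the sanity checks, "size" and the bitmap were already transferred
 * via vmstate_virtio_mem_device_early; virtio_mem_vmstate_field_exists()
 * skips them here in that case.
 */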
static const VMStateDescription vmstate_virtio_mem_device = {
    .name = "virtio-mem-device",
    .minimum_version_id = 1,
    .version_id = 1,
    .priority = MIG_PRI_VIRTIO_MEM,
    .post_load = virtio_mem_post_load,
    .fields = (VMStateField[]) {
        VMSTATE_WITH_TMP_TEST(VirtIOMEM, virtio_mem_vmstate_field_exists,
                              VirtIOMEMMigSanityChecks,
                              vmstate_virtio_mem_sanity_checks),
        VMSTATE_UINT64(usable_region_size, VirtIOMEM),
        VMSTATE_UINT64_TEST(size, VirtIOMEM, virtio_mem_vmstate_field_exists),
        VMSTATE_UINT64(requested_size, VirtIOMEM),
        VMSTATE_BITMAP_TEST(bitmap, VirtIOMEM, virtio_mem_vmstate_field_exists,
                            0, bitmap_size),
        VMSTATE_END_OF_LIST()
    },
};

/*
 * Transfer properties that are immutable while migration is active early,
 * such that we have this information around before migrating any RAM
 * content.
 *
 * Note that virtio_mem_is_busy() makes sure these properties can no longer
 * change on the migration source until migration has completed.
 *
 * With QEMU compat machines, we transmit these properties later, via
 * vmstate_virtio_mem_device instead -- see virtio_mem_vmstate_field_exists().
 */
static const VMStateDescription vmstate_virtio_mem_device_early = {
    .name = "virtio-mem-device-early",
    .minimum_version_id = 1,
    .version_id = 1,
    .early_setup = true,
    .post_load = virtio_mem_post_load_early,
    .fields = (VMStateField[]) {
        VMSTATE_WITH_TMP(VirtIOMEM, VirtIOMEMMigSanityChecks,
                         vmstate_virtio_mem_sanity_checks),
        VMSTATE_UINT64(size, VirtIOMEM),
        VMSTATE_BITMAP(bitmap, VirtIOMEM, 0, bitmap_size),
        VMSTATE_END_OF_LIST()
    },
};

static const VMStateDescription vmstate_virtio_mem = {
    .name = "virtio-mem",
    .minimum_version_id = 1,
    .version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_VIRTIO_DEVICE,
        VMSTATE_END_OF_LIST()
    },
};

static void virtio_mem_fill_device_info(const VirtIOMEM *vmem,
                                        VirtioMEMDeviceInfo *vi)
{
    vi->memaddr = vmem->addr;
    vi->node = vmem->node;
    vi->requested_size = vmem->requested_size;
    vi->size = vmem->size;
    vi->max_size = memory_region_size(&vmem->memdev->mr);
    vi->block_size = vmem->block_size;
    vi->memdev = object_get_canonical_path(OBJECT(vmem->memdev));
}

static MemoryRegion *virtio_mem_get_memory_region(VirtIOMEM *vmem, Error **errp)
{
    if (!vmem->memdev) {
        error_setg(errp, "'%s' property must be set", VIRTIO_MEM_MEMDEV_PROP);
        return NULL;
    }

    return &vmem->memdev->mr;
}

static void virtio_mem_add_size_change_notifier(VirtIOMEM *vmem,
                                                Notifier *notifier)
{
    notifier_list_add(&vmem->size_change_notifiers, notifier);
}

static void virtio_mem_remove_size_change_notifier(VirtIOMEM *vmem,
                                                   Notifier *notifier)
{
    notifier_remove(notifier);
}
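/*
 * Descriptive note (added): QOM property accessors for "size",
 * "requested-size" and "block-size" follow below.
 */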
static void virtio_mem_get_size(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(obj);
    uint64_t value = vmem->size;

    visit_type_size(v, name, &value, errp);
}

static void virtio_mem_get_requested_size(Object *obj, Visitor *v,
                                          const char *name, void *opaque,
                                          Error **errp)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(obj);
    uint64_t value = vmem->requested_size;

    visit_type_size(v, name, &value, errp);
}

static void virtio_mem_set_requested_size(Object *obj, Visitor *v,
                                          const char *name, void *opaque,
                                          Error **errp)
{
    VirtIOMEM *vmem = VIRTIO_MEM(obj);
    uint64_t value;

    if (!visit_type_size(v, name, &value, errp)) {
        return;
    }

    /*
     * The block size and memory backend are not fixed until the device was
     * realized. realize() will verify these properties then.
     */
    if (DEVICE(obj)->realized) {
        if (!QEMU_IS_ALIGNED(value, vmem->block_size)) {
            error_setg(errp, "'%s' has to be multiples of '%s' (0x%" PRIx64
                       ")", name, VIRTIO_MEM_BLOCK_SIZE_PROP,
                       vmem->block_size);
            return;
        } else if (value > memory_region_size(&vmem->memdev->mr)) {
            error_setg(errp, "'%s' cannot exceed the memory backend size "
                       "(0x%" PRIx64 ")", name,
                       memory_region_size(&vmem->memdev->mr));
            return;
        }

        if (value != vmem->requested_size) {
            virtio_mem_resize_usable_region(vmem, value, false);
            vmem->requested_size = value;
        }
        /*
         * Trigger a config update so the guest gets notified. We trigger
         * even if the size didn't change (especially helpful for debugging).
         */
        virtio_notify_config(VIRTIO_DEVICE(vmem));
    } else {
        vmem->requested_size = value;
    }
}

static void virtio_mem_get_block_size(Object *obj, Visitor *v, const char *name,
                                      void *opaque, Error **errp)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(obj);
    uint64_t value = vmem->block_size;

    /*
     * If not configured by the user (and we're not realized yet), use the
     * default block size we would use with the current memory backend.
     */
    if (!value) {
        if (vmem->memdev && memory_region_is_ram(&vmem->memdev->mr)) {
            value = virtio_mem_default_block_size(vmem->memdev->mr.ram_block);
        } else {
            value = virtio_mem_thp_size();
        }
    }
|
|
|

    visit_type_size(v, name, &value, errp);
}

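Note: when no block size was configured, the getter above derives a default from the memory backend, falling back to the THP size otherwise. The helpers virtio_mem_default_block_size() and virtio_mem_thp_size() are not part of this excerpt; as a rough, self-contained sketch of the idea (assumed 2 MiB THP size, hypothetical helper name), the default boils down to "at least as large as the backend's page size and the THP size", so that plugging/unplugging never has to split a huge page:

static uint64_t example_default_block_size(uint64_t backend_page_size)
{
    const uint64_t assumed_thp_size = 2 * 1024 * 1024; /* assumption, not queried */

    return backend_page_size > assumed_thp_size ? backend_page_size
                                                : assumed_thp_size;
}
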
static void virtio_mem_set_block_size(Object *obj, Visitor *v, const char *name,
                                      void *opaque, Error **errp)
{
    VirtIOMEM *vmem = VIRTIO_MEM(obj);
    uint64_t value;

    if (DEVICE(obj)->realized) {
        error_setg(errp, "'%s' cannot be changed", name);
        return;
    }

    if (!visit_type_size(v, name, &value, errp)) {
        return;
    }

    if (value < VIRTIO_MEM_MIN_BLOCK_SIZE) {
        error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
                   VIRTIO_MEM_MIN_BLOCK_SIZE);
        return;
    } else if (!is_power_of_2(value)) {
        error_setg(errp, "'%s' property has to be a power of two", name);
        return;
    }
    vmem->block_size = value;
}

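The setter above enforces two constraints on "block-size": it must be at least VIRTIO_MEM_MIN_BLOCK_SIZE and a power of two. As a minimal standalone sketch of the same rules (hypothetical helper, with the power-of-two test spelled out instead of calling is_power_of_2()):

#include <stdbool.h>
#include <stdint.h>

static bool example_block_size_is_valid(uint64_t value, uint64_t min_block_size)
{
    if (value < min_block_size) {
        return false;                        /* smaller than the minimum */
    }
    return value && !(value & (value - 1));  /* power-of-two check */
}
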
static void virtio_mem_instance_init(Object *obj)
{
    VirtIOMEM *vmem = VIRTIO_MEM(obj);

    notifier_list_init(&vmem->size_change_notifiers);
    QLIST_INIT(&vmem->rdl_list);

    object_property_add(obj, VIRTIO_MEM_SIZE_PROP, "size", virtio_mem_get_size,
                        NULL, NULL, NULL);
    object_property_add(obj, VIRTIO_MEM_REQUESTED_SIZE_PROP, "size",
                        virtio_mem_get_requested_size,
                        virtio_mem_set_requested_size, NULL, NULL);
    object_property_add(obj, VIRTIO_MEM_BLOCK_SIZE_PROP, "size",
                        virtio_mem_get_block_size, virtio_mem_set_block_size,
                        NULL, NULL);
}

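instance_init() above registers "size", "requested-size" and "block-size" as QOM properties of QAPI type "size", each backed by a visitor-based getter and, where writable, a setter. A minimal sketch of that pattern for a hypothetical read-only "example-size" property (the object_property_add() and visit_type_size() signatures are the same ones used above):

static void example_get_size(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
{
    uint64_t value = 0; /* a real getter would read the object's field here */

    visit_type_size(v, name, &value, errp);
}

static void example_instance_init(Object *obj)
{
    object_property_add(obj, "example-size", "size", example_get_size,
                        NULL, NULL, NULL);
}
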
static Property virtio_mem_properties[] = {
    DEFINE_PROP_UINT64(VIRTIO_MEM_ADDR_PROP, VirtIOMEM, addr, 0),
    DEFINE_PROP_UINT32(VIRTIO_MEM_NODE_PROP, VirtIOMEM, node, 0),
virtio-mem: Support "prealloc=on" option
For scarce memory resources, such as hugetlb, we want to be able to
prealloc such memory resources in order to not crash later on access. On
simple user errors we could otherwise easily run out of memory resources
an crash the VM -- pretty much undesired.
For ordinary memory devices, such as DIMMs, we preallocate memory via the
memory backend for such use cases; however, with virtio-mem we're dealing
with sparse memory backends; preallocating the whole memory backend
destroys the whole purpose of virtio-mem.
Instead, we want to preallocate memory when actually exposing memory to the
VM dynamically, and fail plugging memory gracefully + warn the user in case
preallocation fails.
A common use case for hugetlb will be using "reserve=off,prealloc=off" for
the memory backend and "prealloc=on" for the virtio-mem device. This
way, no huge pages will be reserved for the process, but we can recover
if there are no actual huge pages when plugging memory. Libvirt is
already prepared for this.
Note that preallocation cannot protect from the OOM killer -- which
holds true for any kind of preallocation in QEMU. It's primarily useful
only for scarce memory resources such as hugetlb, or shared file-backed
memory. It's of little use for ordinary anonymous memory that can be
swapped, KSM merged, ... but we won't forbid it.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20211217134611.31172-9-david@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2021-12-17 16:46:11 +03:00
|
|
|
DEFINE_PROP_BOOL(VIRTIO_MEM_PREALLOC_PROP, VirtIOMEM, prealloc, false),
|
    DEFINE_PROP_LINK(VIRTIO_MEM_MEMDEV_PROP, VirtIOMEM, memdev,
                     TYPE_MEMORY_BACKEND, HostMemoryBackend *),
    /*
     * VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE tells the guest that reading
     * unplugged memory is not supported; feature negotiation fails for
     * guests that lack it. It is needed to cleanly support memory backends
     * (or architectures) without a shared zeropage, where even reading an
     * unpopulated virtual memory location can consume real memory in the
     * hypervisor. The tristate property: "off" keeps legacy guests working,
     * "on" rejects them, and "auto" picks a default based on shared-zeropage
     * support; the setting has to match on the migration source and
     * destination.
     */
#if defined(VIRTIO_MEM_HAS_LEGACY_GUESTS)
    DEFINE_PROP_ON_OFF_AUTO(VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP, VirtIOMEM,
                            unplugged_inaccessible, ON_OFF_AUTO_ON),
#endif
    DEFINE_PROP_BOOL(VIRTIO_MEM_EARLY_MIGRATION_PROP, VirtIOMEM,
                     early_migration, true),
    DEFINE_PROP_END_OF_LIST(),
};

static uint64_t virtio_mem_rdm_get_min_granularity(const RamDiscardManager *rdm,
                                                   const MemoryRegion *mr)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);

    g_assert(mr == &vmem->memdev->mr);
    return vmem->block_size;
}

static bool virtio_mem_rdm_is_populated(const RamDiscardManager *rdm,
                                        const MemoryRegionSection *s)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    uint64_t start_gpa = vmem->addr + s->offset_within_region;
    uint64_t end_gpa = start_gpa + int128_get64(s->size);

    g_assert(s->mr == &vmem->memdev->mr);

    start_gpa = QEMU_ALIGN_DOWN(start_gpa, vmem->block_size);
    end_gpa = QEMU_ALIGN_UP(end_gpa, vmem->block_size);

    if (!virtio_mem_valid_range(vmem, start_gpa, end_gpa - start_gpa)) {
        return false;
    }

    return virtio_mem_is_range_plugged(vmem, start_gpa, end_gpa - start_gpa);
}

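virtio_mem_rdm_is_populated() widens the section to block-size granularity and then asks whether every block in the resulting range is plugged. The bitmap helpers themselves are not part of this excerpt; a simplified, self-contained illustration of that block math, assuming a plugged-state bitmap with one bit per device block (hypothetical helper and parameters):

#include <stdbool.h>
#include <stdint.h>

static bool example_range_plugged(const unsigned long *plugged_bitmap,
                                  uint64_t region_addr, uint64_t block_size,
                                  uint64_t gpa, uint64_t size)
{
    const uint64_t first_block = (gpa - region_addr) / block_size;
    const uint64_t last_block = (gpa + size - region_addr - 1) / block_size;
    const unsigned bits_per_word = 8 * sizeof(unsigned long);

    for (uint64_t bit = first_block; bit <= last_block; bit++) {
        if (!(plugged_bitmap[bit / bits_per_word] &
              (1ul << (bit % bits_per_word)))) {
            return false; /* at least one block in the range is unplugged */
        }
    }
    return true;
}
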
struct VirtIOMEMReplayData {
    void *fn;
    void *opaque;
};

static int virtio_mem_rdm_replay_populated_cb(MemoryRegionSection *s, void *arg)
{
    struct VirtIOMEMReplayData *data = arg;

    return ((ReplayRamPopulate)data->fn)(s, data->opaque);
}

static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm,
                                           MemoryRegionSection *s,
                                           ReplayRamPopulate replay_fn,
                                           void *opaque)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    struct VirtIOMEMReplayData data = {
        .fn = replay_fn,
        .opaque = opaque,
    };

    g_assert(s->mr == &vmem->memdev->mr);
    return virtio_mem_for_each_plugged_section(vmem, s, &data,
                                               virtio_mem_rdm_replay_populated_cb);
}

static int virtio_mem_rdm_replay_discarded_cb(MemoryRegionSection *s,
                                              void *arg)
{
    struct VirtIOMEMReplayData *data = arg;

    ((ReplayRamDiscard)data->fn)(s, data->opaque);
    return 0;
}

static void virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm,
                                            MemoryRegionSection *s,
                                            ReplayRamDiscard replay_fn,
                                            void *opaque)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    struct VirtIOMEMReplayData data = {
        .fn = replay_fn,
        .opaque = opaque,
    };

    g_assert(s->mr == &vmem->memdev->mr);
    virtio_mem_for_each_unplugged_section(vmem, s, &data,
                                          virtio_mem_rdm_replay_discarded_cb);
}

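Both replay paths above funnel a typed callback through the generic section walkers by stashing the function pointer and its opaque argument in VirtIOMEMReplayData and re-casting it in a small trampoline with the walker's signature. The same pattern in isolation, with hypothetical names and no QEMU types:

#include <stdio.h>

typedef int (*example_typed_cb)(int item, void *opaque);

struct ExampleReplayData {
    void *fn;     /* typed callback, stored as void * (as in VirtIOMEMReplayData) */
    void *opaque; /* caller-provided opaque argument */
};

/* Trampoline with the generic signature the walker expects. */
static int example_trampoline(int item, void *arg)
{
    struct ExampleReplayData *data = arg;

    return ((example_typed_cb)data->fn)(item, data->opaque);
}

static int example_walk(int nr_items, int (*cb)(int item, void *arg), void *arg)
{
    for (int i = 0; i < nr_items; i++) {
        int ret = cb(i, arg);

        if (ret) {
            return ret;
        }
    }
    return 0;
}

static int example_print_cb(int item, void *opaque)
{
    printf("item %d (%s)\n", item, (const char *)opaque);
    return 0;
}

int main(void)
{
    struct ExampleReplayData data = {
        .fn = example_print_cb,
        .opaque = (void *)"demo",
    };

    return example_walk(3, example_trampoline, &data);
}
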
static void virtio_mem_rdm_register_listener(RamDiscardManager *rdm,
                                             RamDiscardListener *rdl,
                                             MemoryRegionSection *s)
{
    VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    int ret;

    g_assert(s->mr == &vmem->memdev->mr);
    rdl->section = memory_region_section_new_copy(s);

    QLIST_INSERT_HEAD(&vmem->rdl_list, rdl, next);
    ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                              virtio_mem_notify_populate_cb);
    if (ret) {
        error_report("%s: Replaying plugged ranges failed: %s", __func__,
                     strerror(-ret));
    }
}

static void virtio_mem_rdm_unregister_listener(RamDiscardManager *rdm,
                                               RamDiscardListener *rdl)
{
    VirtIOMEM *vmem = VIRTIO_MEM(rdm);

    g_assert(rdl->section->mr == &vmem->memdev->mr);
    if (vmem->size) {
        if (rdl->double_discard_supported) {
            rdl->notify_discard(rdl, rdl->section);
        } else {
            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                                virtio_mem_notify_discard_cb);
        }
    }

    memory_region_section_free_copy(rdl->section);
    rdl->section = NULL;
    QLIST_REMOVE(rdl, next);
}

static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp)
{
    if (vmem->unplugged_inaccessible == ON_OFF_AUTO_OFF) {
        /*
         * We could allow it with a usable region size of 0, but let's just
         * not care about that legacy setting.
         */
        error_setg(errp, "virtio-mem device cannot get unplugged while"
                   " '" VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP "' != 'on'");
        return;
    }

    if (vmem->size) {
        error_setg(errp, "virtio-mem device cannot get unplugged while"
                   " '" VIRTIO_MEM_SIZE_PROP "' != '0'");
        return;
    }
    if (vmem->requested_size) {
        error_setg(errp, "virtio-mem device cannot get unplugged while"
                   " '" VIRTIO_MEM_REQUESTED_SIZE_PROP "' != '0'");
        return;
    }
}

static void virtio_mem_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);
    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
    VirtIOMEMClass *vmc = VIRTIO_MEM_CLASS(klass);
    RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_CLASS(klass);

    device_class_set_props(dc, virtio_mem_properties);
    dc->vmsd = &vmstate_virtio_mem;

    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
    vdc->realize = virtio_mem_device_realize;
    vdc->unrealize = virtio_mem_device_unrealize;
    vdc->get_config = virtio_mem_get_config;
    vdc->get_features = virtio_mem_get_features;
    vdc->validate_features = virtio_mem_validate_features;
    vdc->vmsd = &vmstate_virtio_mem_device;

    vmc->fill_device_info = virtio_mem_fill_device_info;
    vmc->get_memory_region = virtio_mem_get_memory_region;
    vmc->add_size_change_notifier = virtio_mem_add_size_change_notifier;
    vmc->remove_size_change_notifier = virtio_mem_remove_size_change_notifier;
    vmc->unplug_request_check = virtio_mem_unplug_request_check;

    rdmc->get_min_granularity = virtio_mem_rdm_get_min_granularity;
    rdmc->is_populated = virtio_mem_rdm_is_populated;
    rdmc->replay_populated = virtio_mem_rdm_replay_populated;
    rdmc->replay_discarded = virtio_mem_rdm_replay_discarded;
    rdmc->register_listener = virtio_mem_rdm_register_listener;
    rdmc->unregister_listener = virtio_mem_rdm_unregister_listener;
}

static const TypeInfo virtio_mem_info = {
    .name = TYPE_VIRTIO_MEM,
    .parent = TYPE_VIRTIO_DEVICE,
    .instance_size = sizeof(VirtIOMEM),
    .instance_init = virtio_mem_instance_init,
    .class_init = virtio_mem_class_init,
    .class_size = sizeof(VirtIOMEMClass),
    .interfaces = (InterfaceInfo[]) {
        { TYPE_RAM_DISCARD_MANAGER },
        { }
    },
};

static void virtio_register_types(void)
{
    type_register_static(&virtio_mem_info);
}

type_init(virtio_register_types)